MPEG-2 Video

H.262[1] or MPEG-2 Part 2 (formally known as ISO/IEC 13818-2,[2] also known as MPEG-2 Video) is a digital video compression and encoding standard developed and maintained jointly by ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG). It is the second part of the ISO/IEC MPEG-2 standard. The ITU-T Recommendation H.262 and ISO/IEC 13818-2 documents are identical. The standard is available for a fee from the ITU-T[1] and ISO.


MPEG-2 Video is similar to MPEG-1, but also provides support for interlaced video (an encoding technique used in analog NTSC television systems). MPEG-2 video is not optimized for low bit-rates (less than 1 Mbit/s), but outperforms MPEG-1 at 3 Mbit/s and above. All standards-conforming MPEG-2 Video decoders are fully capable of playing back MPEG-1 Video streams.[3]


The ISO/IEC approval process was completed in November 1994.[4] The first edition was approved in July 1995[5] and published by ITU-T[1] and ISO/IEC in 1996.[6]

In 1996 it was extended by two amendments to include the Registration of Copyright Identifiers and the 4:2:2 Profile.[1][7] ITU-T published these amendments in 1996 and ISO in 1997.[6]

There are also other amendments published later by ITU-T and ISO.[1][2][8]


H.262 / MPEG-2 Video editions[8]
Edition          Release date   Latest amendment           ISO/IEC standard           ITU-T Recommendation
First edition    1995           2000                       ISO/IEC 13818-2:1996[6]    H.262 (07/95)
Second edition   2000           2010[1][2][9] (2011)[10]   ISO/IEC 13818-2:2000[2]    H.262 (02/00)

Video coding

An HDTV camera generates a raw video stream of 149,299,200 (=24*1920*1080*3) bytes per second for 24fps video. This stream must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. Fortunately, video compression is practical because the data in pictures is often redundant in space and time. For example, the sky can be blue across the top of a picture and that blue sky can persist for frame after frame. Also, because of the way the eye works, it is possible to delete some data from video pictures with almost no noticeable degradation in image quality.
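The arithmetic above can be checked directly. The helper below is an illustrative sketch (the function name is hypothetical), assuming 8-bit samples and full 4:4:4 color, i.e. one luma and two chrominance bytes per pixel:

```python
# Raw data rate of an uncompressed 1080p24 video stream, as in the text.
# Assumes 8 bits per sample and three samples (Y, Cb, Cr) per pixel at 4:4:4.

def raw_bytes_per_second(width, height, fps, samples_per_pixel=3):
    """Bytes per second for uncompressed 8-bit video."""
    return width * height * samples_per_pixel * fps

rate = raw_bytes_per_second(1920, 1080, 24)
print(rate)              # 149299200 bytes/s, matching the figure above
print(rate * 8 / 1e6)    # about 1194.4 Mbit/s before compression
```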

TV cameras used in broadcasting usually generate 25 pictures a second (in Europe) or 29.97 pictures a second (in North America). Digital television requires that these pictures be digitized so that they can be processed by computer hardware. Each picture element (a pixel) is then represented by one luma number and two chrominance numbers. These describe the brightness and the color of the pixel (see YCbCr). Thus, each digitized picture is initially represented by three rectangular arrays of numbers.

A common (and old) trick to reduce the amount of data is to separate each picture into two fields upon broadcast/encoding: the "top field," which is the odd numbered horizontal lines, and the "bottom field," which is the even numbered lines. Upon reception/decoding, the two fields are displayed alternately with the lines of one field interleaving between the lines of the previous field. This format is called interlaced video; two successive fields are called a frame. The typical field rate is then 50 (Europe/PAL) or 59.94 (US/NTSC) fields per second. If the video is not interlaced, then it is called progressive video and each picture is a frame. MPEG-2 supports both options.
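The field split described above can be sketched in a few lines, assuming a frame represented as a list of scan lines (the function names are hypothetical):

```python
# Minimal sketch: splitting a progressive frame into top/bottom fields and
# re-interleaving ("weaving") them, as interlaced transmission does. Row 0 is
# the topmost line, so the "top field" is the even-indexed rows here, which
# are the odd-numbered lines in the text's 1-based counting.

def split_fields(frame):
    top = frame[0::2]      # lines 1, 3, 5, ... in 1-based counting
    bottom = frame[1::2]   # lines 2, 4, 6, ...
    return top, bottom

def weave(top, bottom):
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

frame = [f"line{i}" for i in range(1, 7)]
top, bottom = split_fields(frame)
assert weave(top, bottom) == frame   # lossless for static content
```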

Another common practice to reduce the data rate is to "thin out" or subsample the two chrominance planes. In effect, the remaining chrominance values represent the nearby values that are deleted. Thinning works because the eye better resolves brightness details than chrominance details. The 4:2:2 chrominance format indicates that half the chrominance values have been deleted. The 4:2:0 chrominance format indicates that three-quarters of the chrominance values have been deleted. If no chrominance values have been deleted, the chrominance format is 4:4:4. MPEG-2 allows all three options.
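The three chrominance formats can be compared by counting samples. The sketch below (with a hypothetical helper name) assumes the usual arrangement in which 4:2:2 halves the chroma resolution horizontally and 4:2:0 halves it in both directions:

```python
# Samples per chroma plane for each format, matching the "half deleted" /
# "three-quarters deleted" statements in the text.

def chroma_samples(width, height, fmt):
    """Samples in one chroma plane, for a luma plane of width x height."""
    if fmt == "4:4:4":
        return width * height                 # full resolution
    if fmt == "4:2:2":
        return (width // 2) * height          # halved horizontally
    if fmt == "4:2:0":
        return (width // 2) * (height // 2)   # halved in both directions
    raise ValueError(fmt)

full = chroma_samples(720, 576, "4:4:4")
assert chroma_samples(720, 576, "4:2:2") == full // 2   # half kept
assert chroma_samples(720, 576, "4:2:0") == full // 4   # one quarter kept
```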

MPEG-2 specifies that the raw frames be compressed into three kinds of frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames).

An I-frame is a compressed version of a single uncompressed (raw) frame. It takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames. Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by the Discrete Cosine Transform (DCT). The result is an 8 by 8 matrix of coefficients. The transform converts spatial variations into frequency variations, but it does not change the information in the block; the original block can be recreated exactly by applying the inverse cosine transform.

The advantage of doing this is that the image can now be simplified by quantizing the coefficients. Many of the coefficients, usually the higher frequency components, will then be zero. The penalty of this step is the loss of some subtle distinctions in brightness and color. If one applies the inverse transform to the matrix after it is quantized, one gets an image that looks very similar to the original image but that is not quite as nuanced.

Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the quantized matrix is filled with zeros. By starting in the opposite corner of the matrix, then zigzagging through the matrix to combine the coefficients into a string, then substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to that result, one reduces the matrix to a smaller array of numbers. It is this array that is broadcast or that is put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame.
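The block pipeline described above can be sketched as follows, assuming NumPy is available. The flat quantizer step of 16 is illustrative only; real MPEG-2 encoders use weighted quantization matrices, and the final variable-length (Huffman-style) coding stage is omitted:

```python
import numpy as np

# Sketch of the I-frame block pipeline: 8x8 DCT, quantization, zigzag scan,
# and run-length coding of zeros.

N = 8
# Orthogonal DCT-II basis matrix, so the inverse is simply the transpose.
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos((2 * n + 1) * k * np.pi / (2 * N))
               for n in range(N)] for k in range(N)])

def dct2(block):   return C @ block @ C.T      # forward 2-D DCT
def idct2(coeff):  return C.T @ coeff @ C      # exact inverse

def zigzag(mat):
    """Scan an 8x8 matrix from the low-frequency corner along anti-diagonals."""
    idx = sorted(((i, j) for i in range(N) for j in range(N)),
                 key=lambda p: (p[0] + p[1],
                                p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [mat[i][j] for i, j in idx]

def run_length(seq):
    """(run_of_zeros, value) pairs; trailing zeros would become an
    end-of-block code in a real bitstream (omitted here)."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    return out

# A simple gradient block: smooth content, so only low frequencies survive.
block = np.add.outer(np.arange(N) * 8.0, np.arange(N, dtype=float))
coeffs = dct2(block - 128)                     # level-shift, then transform
quant = np.round(coeffs / 16)                  # the coarse, lossy step
scanned = run_length(zigzag(quant.astype(int)))
recon = idct2(quant * 16) + 128                # decoder side: close, not exact
```

Most of the 64 coefficients quantize to zero for smooth content, which is exactly what makes the zigzag/run-length stage effective.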

Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a Group Of Pictures (GOP); however, the standard is flexible about this.
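Because a B-frame needs its future reference decoded before it can be decoded itself, the order of frames in the bitstream differs from the display order. A minimal sketch of that reordering (the function name is hypothetical):

```python
# Display order vs. coded (bitstream) order for a GOP with B-frames: the
# encoder emits each anchor (I or P) ahead of the B-frames that precede it
# on screen, so the decoder always has both references in hand.

def coded_order(display):
    out, pending_b = [], []
    for frame in display:
        if frame[0] in "IP":          # anchor frame
            out.append(frame)         # emit the anchor first...
            out.extend(pending_b)     # ...then the B-frames shown before it
            pending_b = []
        else:                         # B-frame: hold until next anchor
            pending_b.append(frame)
    return out + pending_b

display = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]
print(coded_order(display))   # ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```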


P-frames provide more compression than I-frames because they take advantage of the data in a previous I-frame or P-frame - a reference frame. To generate a P-frame, the previous reference frame is reconstructed, just as it would be in a TV receiver or DVD player. The frame being compressed is divided into 16 pixel by 16 pixel macroblocks. Then, for each of those macroblocks, the reconstructed reference frame is searched to find the 16 by 16 region that best matches the macroblock being compressed. The offset between the macroblock and its best match is encoded as a "motion vector." Frequently, the offset is zero, but if something in the picture is moving, the offset might be something like 23 pixels to the right and 4 pixels up. The match between the two macroblocks will often not be perfect. To correct for this, the encoder takes the difference of all corresponding pixels of the two macroblocks, and on that macroblock difference then computes the strings of coefficient values as described above. This "residual" is appended to the motion vector and the result is sent to the receiver or stored on the DVD for each macroblock being compressed. Sometimes no suitable match is found; then the macroblock is treated like an I-frame macroblock.
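The motion search can be sketched as an exhaustive block-matching loop. Real encoders use 16 × 16 macroblocks, sub-pixel refinement, and fast search patterns; the tiny example below, with hypothetical helper names, uses a 4 × 4 block and a small search window for brevity:

```python
# For one block of the current frame, find the offset into the reference
# frame with the lowest sum of absolute differences (SAD).

def sad(a, b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def extract(frame, top, left, size):
    """Cut a size x size block out of a frame (list of rows)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_motion_vector(ref, block, top, left, search=2):
    """Exhaustive search in a +/-search window around (top, left)."""
    size = len(block)
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - size and 0 <= x <= len(ref[0]) - size:
                cost = sad(extract(ref, y, x, size), block)
                if cost < best[1]:
                    best = ((dx, dy), cost)
    return best

# A bright square that moved one pixel right between reference and current.
ref = [[0] * 8 for _ in range(8)]
for i in range(2, 6):
    for j in range(3, 7):
        ref[i][j] = 200
cur_block = extract(ref, 2, 3, 4)                        # the moving object...
vector, cost = best_motion_vector(ref, cur_block, 2, 4)  # ...now seen at (2, 4)
print(vector, cost)   # (-1, 0) 0: a perfect match one pixel to the left
```

When the residual after subtracting the matched block is zero, as here, only the motion vector needs to be coded.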

The processing of B-frames is similar to that of P-frames except that B-frames use the picture in a subsequent reference frame as well as the picture in a preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames.

While the above generally describes MPEG-2 video compression, there are many details that are not discussed including details involving fields, chrominance formats, responses to scene changes, special codes that label the parts of the bitstream, and other pieces of information.

Video profiles and levels

MPEG-2 video supports a wide range of applications, from mobile devices to high-quality HD editing. For many applications it is unrealistic and too expensive to support the entire standard, so to allow applications to support only subsets of it, the standard defines profiles and levels.

A profile defines a subset of features, such as the compression algorithm and chroma format; a level defines a subset of quantitative capabilities, such as the maximum bit rate and maximum frame size.

An MPEG application then specifies its capabilities in terms of a profile and level. For example, a DVD player may state that it supports up to main profile and main level (often written as MP@ML), meaning that it can play back any MPEG stream encoded as MP@ML or lower.

The tables below summarize the limitations of each profile and level. There are many other constraints not listed here, and not all profile and level combinations are permissible.
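The "MP@ML or lower" rule can be sketched as a simple ordering check. The orderings below are a deliberate simplification, covering only the non-scalable profiles and the four levels in the tables that follow:

```python
# Whether a decoder rated at one profile@level point can play a stream at
# another: both the profile and the level of the stream must be at or below
# the decoder's rating. Scalable profiles (SNR, Spatial, MVP) are omitted.

PROFILE_ORDER = ["SP", "MP", "422", "HP"]   # simplified, non-scalable chain
LEVEL_ORDER = ["LL", "ML", "H-14", "HL"]

def decoder_supports(decoder, stream):
    """True if `decoder` (e.g. "MP@ML") can play `stream` (same notation)."""
    dp, dl = decoder.split("@")
    sp, sl = stream.split("@")
    return (PROFILE_ORDER.index(sp) <= PROFILE_ORDER.index(dp) and
            LEVEL_ORDER.index(sl) <= LEVEL_ORDER.index(dl))

assert decoder_supports("MP@ML", "SP@LL")      # a DVD player handles less
assert decoder_supports("MP@ML", "MP@ML")      # ...and exactly its rating
assert not decoder_supports("MP@ML", "MP@HL")  # but not HDTV streams
```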

MPEG-2 Profiles
Abbr. Name Picture Coding Types Chroma Format Aspect Ratios Scalable modes Intra DC Precision
SP Simple profile I, P 4:2:0 square pixels, 4:3, or 16:9 none 8, 9, 10
MP Main profile I, P, B 4:2:0 square pixels, 4:3, or 16:9 none 8, 9, 10
SNR SNR Scalable profile I, P, B 4:2:0 square pixels, 4:3, or 16:9 SNR (signal-to-noise ratio) scalable 8, 9, 10
Spatial Spatially Scalable profile I, P, B 4:2:0 square pixels, 4:3, or 16:9 SNR- or spatial-scalable 8, 9, 10
HP High profile I, P, B 4:2:2 or 4:2:0 square pixels, 4:3, or 16:9 SNR- or spatial-scalable 8, 9, 10, 11
422 4:2:2 profile I, P, B 4:2:2 or 4:2:0 square pixels, 4:3, or 16:9 none 8, 9, 10, 11
MVP Multi-view profile I, P, B 4:2:0 square pixels, 4:3, or 16:9 Temporal 8, 9, 10

Exempting scalability (a rarely used feature where one MPEG-2 stream augments another), the following are some of the constraints on levels:

MPEG-2 Levels
Abbr. Name Frame rates (Hz) Max horizontal resolution Max vertical resolution Max luminance samples per second (approximately height x width x framerate) Max bit rate in Main profile (Mbit/s)
LL Low Level 23.976, 24, 25, 29.97, 30 352 288 3,041,280 4
ML Main Level 23.976, 24, 25, 29.97, 30 720 576 10,368,000, except in High profile, where constraint is 14,475,600 for 4:2:0 and 11,059,200 for 4:2:2 15
H-14 High 1440 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 1440 1152 47,001,600, except that in High profile with 4:2:0, constraint is 62,668,800 60
HL High Level 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 1920 1152 62,668,800, except that in High profile with 4:2:0, constraint is 83,558,400 80
Common MPEG-2 Profile/Level combinations
Profile @ Level   Resolution (px) @ max framerate (Hz)   Sampling   Bitrate (Mbit/s)   Example Application
SP@LL       176 × 144 @ 15                      4:2:0   0.096           Wireless handsets
SP@ML       352 × 288 @ 15, 320 × 240 @ 24      4:2:0   0.384           PDAs
MP@LL       352 × 288 @ 30                      4:2:0   4               Set-top boxes (STB)
MP@ML       720 × 480 @ 30, 720 × 576 @ 25      4:2:0   15 (DVD: 9.8)   DVD, SD-DVB
MP@H-14     1440 × 1080 @ 30, 1280 × 720 @ 30   4:2:0   60 (HDV: 25)    HDV
MP@HL       1920 × 1080 @ 30, 1280 × 720 @ 60   4:2:0   80              ATSC 1080i, 720p60, HD-DVB (HDTV); bitrate for terrestrial transmission is limited to 19.39 Mbit/s
422P@LL                                         4:2:2
422P@ML     720 × 480 @ 30, 720 × 576 @ 25      4:2:2   50              Sony IMX using I-frame only, broadcast "contribution" video (I&P only)
422P@H-14   1440 × 1080 @ 30, 1280 × 720 @ 60   4:2:2   80
422P@HL     1920 × 1080 @ 30, 1280 × 720 @ 60   4:2:2   300             Sony MPEG HD422 (50 Mbit/s), Canon XF Codec (50 Mbit/s), Convergent Design Nanoflash recorder (up to 160 Mbit/s)


Some applications are listed below.

  • DVD-Video - a standard definition consumer video format. Uses 4:2:0 color subsampling and variable video data rate up to 9.8 Mbit/s.
  • MPEG IMX - a standard definition professional video recording format. Uses intraframe compression, 4:2:2 color subsampling and user-selectable constant video data rate of 30, 40 or 50 Mbit/s.
  • HDV - a tape-based high definition video recording format. Uses 4:2:0 color subsampling and 19.4 or 25 Mbit/s total data rate.
  • XDCAM - a family of tapeless video recording formats, which, in particular, includes formats based on MPEG-2 Part 2. These are: standard definition MPEG IMX (see above), high definition MPEG HD, high definition MPEG HD422. MPEG IMX and MPEG HD422 employ 4:2:2 color subsampling, MPEG HD employs 4:2:0 color subsampling. Most subformats use selectable constant video data rate from 25 to 50 Mbit/s, although there is also a variable bitrate mode with maximum 18 Mbit/s data rate.
  • XF Codec - a professional tapeless video recording format, similar to MPEG HD and MPEG HD422 but stored in a different container file.
  • HD DVD - defunct high definition consumer video format.
  • Blu-ray Disc - high definition consumer video format.
  • Broadcast TV - in some countries MPEG-2 Part 2 is used for digital broadcast in high definition. For example, ATSC specifies several scanning formats (480i, 480p, 720p, 1080i, 1080p) and frame/field rates at 4:2:0 color subsampling, with up to 19.4 Mbit/s data rate per channel.
  • Digital cable TV
  • Satellite TV


External links

  • Official MPEG web site
  • MPEG-2 Video Encoding (H.262) - The Library of Congress
