2007. 7. 13. 10:55

Wiki of H.264/MPEG-4 AVC

Link: http://en.wikipedia.org/wiki/H.264

H.264/MPEG-4 AVC

From Wikipedia, the free encyclopedia


H.264 is a standard for video compression. It is also known as MPEG-4 Part 10, or AVC (for Advanced Video Coding). It was written by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 Part 10 standard (formally, ISO/IEC 14496-10) are jointly maintained so that they have identical technical content. The final drafting work on the first version of the standard was completed in May 2003.

Overview

The H.264 name follows the ITU-T naming convention (where the standard is a member of the H.26x line of VCEG video coding standards), while the MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG (where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4). The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard as H.264/AVC (or AVC/H.264 or H.264/MPEG-4 AVC or MPEG-4/H.264 AVC) to emphasize the common heritage. The name H.26L, referring to its ITU-T history, is less common, but still used. Occasionally, it is also referred to as "the JVT codec", in reference to the Joint Video Team (JVT) organization that developed it. (Such partnership and multiple naming is not uncommon—for example, the video codec standard known as MPEG-2 also arose from the partnership between MPEG and the ITU-T, where MPEG-2 video is known to the ITU-T community as H.262.[1])

The intent of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bit rates (e.g., half or less) than previous standards (e.g., relative to MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design so much that it would be impractical (or excessively expensive) to implement. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications (e.g., for both low and high bit rates, and for low and high resolution video) and to make the design work effectively on a wide variety of networks and systems (e.g., for broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems).

The standardization of the first version of H.264/AVC was completed in May of 2003. The JVT then developed extensions to the original standard that are known as the Fidelity Range Extensions (FRExt). These extensions enable higher quality video coding by supporting increased sample bit depth precision and higher-resolution color information (including sampling structures known as YUV 4:2:2 and YUV 4:4:4). Several other features are also included in the Fidelity Range Extensions project (such as adaptive switching between 4×4 and 8×8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, and support of additional color spaces). The design work on the Fidelity Range Extensions was completed in July of 2004, and the drafting work on them was completed in September of 2004.

Further recent extensions of the standard have included adding five new profiles intended primarily for professional applications (and deprecating one of the prior FRExt profiles that industry feedback indicated should have been designed differently), adding extended-gamut color space support, defining additional aspect ratio indicators, and defining two additional types of "supplemental enhancement information" (post-filter hint and tone mapping).

Features

H.264/AVC/MPEG-4 Part 10 contains a number of new features that allow it to compress video much more effectively than older standards and to provide more flexibility for application to a wide variety of network environments. In particular, some such key features include:

  • Multi-picture inter-picture prediction including the following features:
    • Using previously-encoded pictures as references in a much more flexible way than in past standards, allowing up to 32 reference pictures to be used in some cases (unlike in prior standards, where the limit was typically one or, in the case of conventional "B-pictures", two). This particular feature usually allows modest improvements in bit rate and quality in most scenes. But in certain types of scenes, for example scenes with rapid repetitive flashing or back-and-forth scene cuts or uncovered background areas, it allows a very significant reduction in bit rate while maintaining clarity.
    • Variable block-size motion compensation (VBSMC) with block sizes as large as 16×16 and as small as 4×4, enabling very precise segmentation of moving regions.
    • Six-tap filtering for derivation of half-pel luma sample predictions, which reduces aliasing and provides sharper images (see the interpolation sketch following this list).
    • Quarter-pixel precision for motion compensation, enabling very precise description of the displacements of moving areas. Since chroma resolution is typically halved both vertically and horizontally (see 4:2:0), chroma motion compensation uses one-eighth-pixel grid units.
    • Weighted prediction, allowing an encoder to specify the use of a scaling and offset when performing motion compensation, and providing a significant benefit in performance in special cases—such as fade-to-black, fade-in, and cross-fade transitions.
  • Spatial prediction from the edges of neighboring blocks for "intra" coding (rather than the "DC"-only prediction found in MPEG-2 Part 2 and the transform coefficient prediction found in H.263v2 and MPEG-4 Part 2).
  • Lossless macroblock coding features including:
    • A lossless PCM macroblock representation mode in which video data samples are represented directly, allowing perfect representation of specific regions and allowing a strict limit to be placed on the quantity of coded data for each macroblock.
    • An enhanced lossless macroblock representation mode allowing perfect representation of specific regions while ordinarily using substantially fewer bits than the PCM mode (not supported in all profiles).
  • Flexible interlaced-scan video coding features (not supported in all profiles), including:
    • Macroblock-adaptive frame-field (MBAFF) coding, using a macroblock pair structure for pictures coded as frames, allowing 16x16 macroblocks in field mode (vs. 16x8 half-macroblocks in MPEG-2).
    • Picture-adaptive frame-field coding (PAFF or PicAFF) allowing a freely-selected mixture of pictures coded as MBAFF frames with pictures coded as individual single fields (half frames) of interlaced video.
  • New transform design features, including:
    • An exact-match integer 4×4 spatial block transform (conceptually similar to the well-known DCT design, but simplified and made to provide exactly-specified decoding), allowing precise placement of residual signals with little of the "ringing" often found with prior codec designs (see the transform sketch following this list).
    • An exact-match integer 8×8 spatial block transform (conceptually similar to the well-known DCT design, but simplified and made to provide exactly-specified decoding, not supported in all profiles), allowing highly correlated regions to be compressed more efficiently than with the 4×4 transform.
    • Adaptive encoder selection between the 4×4 and 8×8 transform block sizes for the integer transform operation (not supported in all profiles).
    • A secondary Hadamard transform performed on "DC" coefficients of the primary spatial transform (for chroma DC coefficients and also luma in one special case) to obtain even more compression in smooth regions.
  • A quantization design including:
    • Logarithmic step size control for easier bit rate management by encoders and simplified inverse-quantization scaling.
    • Frequency-customized quantization scaling matrices selected by the encoder for perceptual-based quantization optimization (not supported in all profiles).
  • An in-loop deblocking filter which helps prevent the blocking artifacts common to other DCT-based image compression techniques.
  • An entropy coding design including:
    • Context-adaptive binary arithmetic coding (CABAC), an algorithm to losslessly compress syntax elements in the video stream knowing the probabilities of syntax elements in a given context (not supported in all profiles). CABAC compresses data more efficiently than CAVLC but requires considerably more processing to decode.
    • Context-adaptive variable-length coding (CAVLC), which is a lower-complexity alternative to CABAC for the coding of quantized transform coefficient values. Although lower complexity than CABAC, CAVLC is more elaborate and more efficient than the methods typically used to code coefficients in other prior designs.
    • A common, simple, and highly structured variable length coding (VLC) technique for many of the syntax elements not coded by CABAC or CAVLC, referred to as Exponential-Golomb coding or just Exp-Golomb (see the Exp-Golomb sketch following this list).
  • Loss resilience features including:
    • A network abstraction layer (NAL) definition allowing the same video syntax to be used in many network environments, including features such as sequence parameter sets (SPSs) and picture parameter sets (PPSs) that provide more robustness and flexibility than provided in prior designs.
    • Flexible macroblock ordering (FMO, also known as slice groups and not supported in all profiles) and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (called macroblocks) in pictures. Typically considered an error/loss robustness feature, FMO and ASO can also be used for other purposes.
    • Data partitioning (DP), a feature providing the ability to separate more important and less important syntax elements into different packets of data, enabling the application of unequal error protection (UEP) and other types of improvement of error/loss robustness (not supported in all profiles).
    • Redundant slices (RS), an error/loss robustness feature allowing an encoder to send an extra representation of a picture region (typically at lower fidelity) that can be used if the primary representation is corrupted or lost (not supported in all profiles).
    • Frame numbering, a feature that allows the creation of "sub-sequences" (enabling temporal scalability by optional inclusion of extra pictures between other pictures), and the detection and concealment of losses of entire pictures (which can occur due to network packet losses or channel errors).
  • Switching slices (called SP and SI slices and not supported in all profiles), features that allow an encoder to direct a decoder to jump into an ongoing video stream for such purposes as video streaming bit rate switching and "trick mode" operation. When a decoder jumps into the middle of a video stream using the SP/SI feature, it can get an exact match to the decoded pictures at that location in the video stream despite using different pictures (or no pictures at all) as references prior to the switch.
  • A simple automatic process for preventing the accidental emulation of start codes, which are special sequences of bits in the coded data that allow random access into the bitstream and recovery of byte alignment in systems that can lose byte synchronization (see the emulation-prevention sketch following this list).
  • Supplemental enhancement information (SEI) and video usability information (VUI), which are extra information that can be inserted into the bitstream to enhance the use of the video for a wide variety of purposes.
  • Auxiliary pictures, which can be used for such purposes as alpha compositing.
  • Support of Monochrome, 4:2:0, 4:2:2, and 4:4:4 color sampling structures (depending on the selected profile).
  • Support of sample bit depth precision ranging from 8 to 14 bits per sample (depending on the selected profile).
  • The ability to encode individual color planes as distinct pictures with their own slice structures, macroblock modes, motion vectors, etc., allowing encoders to be designed with a simple parallelization structure (supported only in the three 4:4:4-capable profiles).
  • Picture order count, a feature that serves to keep the ordering of the pictures and the values of samples in the decoded pictures isolated from timing information (allowing timing information to be carried and controlled/changed separately by a system without affecting decoded picture content).
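
The half-pel luma interpolation noted in the feature list above can be illustrated concretely. The following C fragment is a minimal sketch, assuming 8-bit video and showing only the horizontal case; the function names are illustrative. It applies the six-tap filter (1, -5, 20, 20, -5, 1) with rounding and clipping, and shows how quarter-pel samples are then formed by averaging neighboring samples:

    #include <stdint.h>

    /* Clip an intermediate value to the 8-bit sample range. */
    static inline uint8_t clip_pixel(int v)
    {
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }

    /* Half-pel luma sample from six horizontally neighboring full-pel samples
     * (e, f, g, h, i, j), the interpolated position lying between g and h.
     * The six-tap filter (1, -5, 20, 20, -5, 1) is applied with rounding. */
    uint8_t halfpel_luma(int e, int f, int g, int h, int i, int j)
    {
        int acc = e - 5 * f + 20 * g + 20 * h - 5 * i + j;
        return clip_pixel((acc + 16) >> 5);
    }

    /* Quarter-pel samples are then formed by averaging the two nearest
     * full-pel/half-pel samples with upward rounding. */
    uint8_t quarterpel_luma(int a, int b)
    {
        return (uint8_t)((a + b + 1) >> 1);
    }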
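
The 4×4 integer transform mentioned above (the transform sketch referenced in the list) uses a core matrix containing only the values ±1 and ±2, so it can be computed exactly in integer arithmetic; the per-coefficient scaling that the standard folds into the quantization stage is omitted here. A minimal sketch in C, written as plain matrix products rather than the usual butterfly, with an illustrative function name:

    /* Core matrix of the forward 4x4 integer transform. */
    static const int Cf[4][4] = {
        { 1,  1,  1,  1 },
        { 2,  1, -1, -2 },
        { 1, -1, -1,  1 },
        { 1, -2,  2, -1 },
    };

    /* y = Cf * x * Cf^T, computed entirely in integer arithmetic. */
    void forward_transform_4x4(const int x[4][4], int y[4][4])
    {
        int tmp[4][4];

        for (int i = 0; i < 4; i++)          /* tmp = Cf * x */
            for (int j = 0; j < 4; j++) {
                tmp[i][j] = 0;
                for (int k = 0; k < 4; k++)
                    tmp[i][j] += Cf[i][k] * x[k][j];
            }

        for (int i = 0; i < 4; i++)          /* y = tmp * Cf^T */
            for (int j = 0; j < 4; j++) {
                y[i][j] = 0;
                for (int k = 0; k < 4; k++)
                    y[i][j] += tmp[i][k] * Cf[j][k];
            }
    }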
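
Exp-Golomb coding, referenced above, is simple enough to show directly: code number k is written as the binary representation of k+1, preceded by one zero bit for every bit after its leading 1. A minimal sketch in C that prints the first few codewords (the function name is illustrative):

    #include <stdio.h>

    /* Print the unsigned Exp-Golomb (ue(v)) codeword for code_num. */
    void print_ue(unsigned code_num)
    {
        unsigned value = code_num + 1;
        int nbits = 0;

        for (unsigned v = value; v != 0; v >>= 1)
            nbits++;                      /* bits needed for value */

        for (int i = 0; i < nbits - 1; i++)
            putchar('0');                 /* leading-zero prefix */
        for (int i = nbits - 1; i >= 0; i--)
            putchar(((value >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        /* code numbers 0..4 map to 1, 010, 011, 00100, 00101 */
        for (unsigned k = 0; k <= 4; k++)
            print_ue(k);
        return 0;
    }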
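
The start-code emulation prevention mentioned above works by inserting an extra byte: whenever two consecutive zero bytes would be followed by a byte with value 0x00 through 0x03, the encoder inserts a 0x03 "emulation prevention" byte so that the reserved start-code patterns never appear inside a NAL unit payload. A minimal sketch in C (the function name is illustrative; the output buffer is assumed to be at least 1.5 times the input length):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy a raw payload into out, inserting 0x03 after every pair of zero
     * bytes that would otherwise be followed by a byte in the range 0x00-0x03.
     * Returns the number of bytes written. */
    size_t escape_rbsp(const uint8_t *in, size_t len, uint8_t *out)
    {
        size_t o = 0;
        int zeros = 0;

        for (size_t i = 0; i < len; i++) {
            if (zeros == 2 && in[i] <= 0x03) {
                out[o++] = 0x03;          /* emulation prevention byte */
                zeros = 0;
            }
            out[o++] = in[i];
            zeros = (in[i] == 0x00) ? zeros + 1 : 0;
        }
        return o;
    }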

These techniques, along with several others, help H.264 to perform significantly better than any prior standard can, under a wide variety of circumstances in a wide variety of application environments. H.264 can often perform radically better than MPEG-2 video—typically obtaining the same quality at half of the bit rate or less.

Like other ISO/IEC MPEG video standards, H.264/AVC has a reference software implementation that can be freely downloaded[1]. Its main purpose is to give examples of H.264/AVC features, rather than being a useful application per se. (See the links section for a pointer to that software.) Some reference hardware design work is also under way in MPEG.

Profiles

The standard includes the following seven sets of capabilities, which are referred to as profiles, targeting specific classes of applications:

  • Baseline Profile (BP): Primarily for lower-cost applications with limited computing resources, this profile is used widely in videoconferencing and mobile applications.
  • Main Profile (MP): Originally intended as the mainstream consumer profile for broadcast and storage applications, the importance of this profile faded when the High profile was developed for those applications.
  • Extended Profile (XP): Intended as the streaming video profile, this profile has relatively high compression capability and some extra tricks for robustness to data losses and server stream switching.
  • High Profile (HiP): The primary profile for broadcast and disc storage applications, particularly for high-definition television applications (this is the profile adopted into HD DVD and Blu-ray Disc, for example).
  • High 10 Profile (Hi10P): Going beyond today's mainstream consumer product capabilities, this profile builds on top of the High Profile—adding support for up to 10 bits per sample of decoded picture precision.
  • High 4:2:2 Profile (Hi422P): Primarily targeting professional applications that use interlaced video, this profile builds on top of the High 10 Profile—adding support for the 4:2:2 chroma sampling format while using up to 10 bits per sample of decoded picture precision.
  • High 4:4:4 Predictive Profile (Hi444PP): This profile builds on top of the High 4:2:2 Profile—supporting up to 4:4:4 chroma sampling, up to 14 bits per sample, and additionally supporting efficient lossless region coding and the coding of each picture as three separate color planes.

In addition, the standard now contains four additional all-Intra profiles, which are defined as simple subsets of other corresponding profiles. These are mostly for professional (e.g., camera and editing system) applications:

  • High 10 Intra Profile: The High 10 Profile constrained to all-Intra use.
  • High 4:2:2 Intra Profile: The High 4:2:2 Profile constrained to all-Intra use.
  • High 4:4:4 Intra Profile: The High 4:4:4 Profile constrained to all-Intra use.
  • CAVLC 4:4:4 Intra Profile: The High 4:4:4 Profile constrained to all-Intra use and to CAVLC entropy coding (i.e., not supporting CABAC).
Feature support by profile:

Feature                              Baseline  Extended  Main  High  High 10  High 4:2:2  High 4:4:4 Predictive
I and P Slices                       Yes       Yes       Yes   Yes   Yes      Yes         Yes
B Slices                             No        Yes       Yes   Yes   Yes      Yes         Yes
SI and SP Slices                     No        Yes       No    No    No       No          No
Multiple Reference Frames            Yes       Yes       Yes   Yes   Yes      Yes         Yes
In-Loop Deblocking Filter            Yes       Yes       Yes   Yes   Yes      Yes         Yes
CAVLC Entropy Coding                 Yes       Yes       Yes   Yes   Yes      Yes         Yes
CABAC Entropy Coding                 No        No        Yes   Yes   Yes      Yes         Yes
Flexible Macroblock Ordering (FMO)   Yes       Yes       No    No    No       No          No
Arbitrary Slice Ordering (ASO)       Yes       Yes       No    No    No       No          No
Redundant Slices (RS)                Yes       Yes       No    No    No       No          No
Data Partitioning                    No        Yes       No    No    No       No          No
Interlaced Coding (PicAFF, MBAFF)    No        Yes       Yes   Yes   Yes      Yes         Yes
4:2:0 Chroma Format                  Yes       Yes       Yes   Yes   Yes      Yes         Yes
Monochrome Video Format (4:0:0)      No        No        No    Yes   Yes      Yes         Yes
4:2:2 Chroma Format                  No        No        No    No    No       Yes         Yes
4:4:4 Chroma Format                  No        No        No    No    No       No          Yes
8 Bit Sample Depth                   Yes       Yes       Yes   Yes   Yes      Yes         Yes
9 and 10 Bit Sample Depth            No        No        No    No    Yes      Yes         Yes
11 to 14 Bit Sample Depth            No        No        No    No    No       No          Yes
8x8 vs. 4x4 Transform Adaptivity     No        No        No    Yes   Yes      Yes         Yes
Quantization Scaling Matrices        No        No        No    Yes   Yes      Yes         Yes
Separate Cb and Cr QP Control        No        No        No    Yes   Yes      Yes         Yes
Separate Color Plane Coding          No        No        No    No    No       No          Yes
Predictive Lossless Coding           No        No        No    No    No       No          Yes
Levels

Bit rates are maximum VCL video bit rates; BP/XP/MP = Baseline, Extended and Main Profiles, HiP = High Profile, Hi10P = High 10 Profile, Hi422P/Hi444PP = High 4:2:2 and High 4:4:4 Predictive Profiles. Examples give resolution @ frame rate, with the maximum number of stored frames in parentheses.

Level  Max MB/s   Max frame      Max video bit rate (VCL)                                 Examples: resolution @ frame rate (max stored frames)
                  size (MBs)     BP/XP/MP     HiP            Hi10P        Hi422P/Hi444PP
1      1485       99             64 kbit/s    80 kbit/s      192 kbit/s   256 kbit/s      128x96@30.9 (8); 176x144@15.0 (4)
1b     1485       99             128 kbit/s   160 kbit/s     384 kbit/s   512 kbit/s      128x96@30.9 (8); 176x144@15.0 (4)
1.1    3000       396            192 kbit/s   240 kbit/s     576 kbit/s   768 kbit/s      176x144@30.3 (9); 320x240@10.0 (3); 352x288@7.5 (2)
1.2    6000       396            384 kbit/s   480 kbit/s     1152 kbit/s  1536 kbit/s     320x240@20.0 (7); 352x288@15.2 (6)
1.3    11880      396            768 kbit/s   960 kbit/s     2304 kbit/s  3072 kbit/s     320x240@36.0 (7); 352x288@30.0 (6)
2      11880      396            2 Mbit/s     2.5 Mbit/s     6 Mbit/s     8 Mbit/s        320x240@36.0 (7); 352x288@30.0 (6)
2.1    19800      792            4 Mbit/s     5 Mbit/s       12 Mbit/s    16 Mbit/s       352x480@30.0 (7); 352x576@25.0 (6)
2.2    20250      1620           4 Mbit/s     5 Mbit/s       12 Mbit/s    16 Mbit/s       352x480@30.7 (10); 352x576@25.6 (7); 720x480@15.0 (6); 720x576@12.5 (5)
3      40500      1620           10 Mbit/s    12.5 Mbit/s    30 Mbit/s    40 Mbit/s       352x480@61.4 (12); 352x576@51.1 (10); 720x480@30.0 (6); 720x576@25.0 (5)
3.1    108000     3600           14 Mbit/s    17.5 Mbit/s    42 Mbit/s    56 Mbit/s       720x480@80.0 (13); 720x576@66.7 (11); 1280x720@30.0 (5)
3.2    216000     5120           20 Mbit/s    25 Mbit/s      60 Mbit/s    80 Mbit/s       1280x720@60.0 (5); 1280x1024@42.2 (4)
4      245760     8192           20 Mbit/s    25 Mbit/s      60 Mbit/s    80 Mbit/s       1280x720@68.3 (9); 1920x1088@30.1 (4); 2048x1024@30.0 (4)
4.1    245760     8192           50 Mbit/s    62.5 Mbit/s    150 Mbit/s   200 Mbit/s      1280x720@68.3 (9); 1920x1088@30.1 (4); 2048x1024@30.0 (4)
4.2    522240     8704           50 Mbit/s    62.5 Mbit/s    150 Mbit/s   200 Mbit/s      1920x1088@64.0 (4); 2048x1088@60.0 (4)
5      589824     22080          135 Mbit/s   168.75 Mbit/s  405 Mbit/s   540 Mbit/s      1920x1088@72.3 (13); 2048x1024@72.0 (13); 2048x1088@67.8 (12); 2560x1920@30.7 (5); 3680x1536@26.7 (5)
5.1    983040     36864          240 Mbit/s   300 Mbit/s     720 Mbit/s   960 Mbit/s      1920x1088@120.5 (16); 4096x2048@30.0 (5); 4096x2304@26.7 (5)
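
The example entries in the rightmost column follow directly from the per-level throughput limits: each picture is divided into 16×16 macroblocks, and the maximum frame rate at a given resolution is the level's macroblock-per-second budget divided by the number of macroblocks per picture. A minimal sketch in C reproducing a few of the tabulated values (the function name is illustrative):

    #include <stdio.h>

    /* Maximum frame rate a level allows at a given resolution: the level's
     * macroblock-per-second budget divided by the number of 16x16 macroblocks
     * per picture (dimensions rounded up to whole macroblocks). */
    double max_fps(long max_mb_per_sec, int width, int height)
    {
        long mbs = ((width + 15) / 16) * (long)((height + 15) / 16);
        return (double)max_mb_per_sec / mbs;
    }

    int main(void)
    {
        /* Level 1: 1485 MB/s, 128x96 = 8x6 = 48 MBs -> ~30.9 fps */
        printf("Level 1,  128x96:    %.1f fps\n", max_fps(1485, 128, 96));
        /* Level 3: 40500 MB/s, 720x576 = 45x36 = 1620 MBs -> 25.0 fps */
        printf("Level 3,  720x576:   %.1f fps\n", max_fps(40500, 720, 576));
        /* Level 4: 245760 MB/s, 1920x1088 = 120x68 = 8160 MBs -> ~30.1 fps */
        printf("Level 4,  1920x1088: %.1f fps\n", max_fps(245760, 1920, 1088));
        return 0;
    }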

Standardization Committee and History

In early 1998 the Video Coding Experts Group (VCEG – ITU-T SG16 Q.6) issued a call for proposals on a project called H.26L, with the goal of doubling the coding efficiency (that is, halving the bit rate necessary for a given level of fidelity) in comparison to any other existing video coding standard, for a broad variety of applications. VCEG was chaired by Gary Sullivan (Microsoft [formerly PictureTel], USA). The first draft design for the new standard was adopted in August 1999. In 2000, Thomas Wiegand (Heinrich Hertz Institute, Germany) became VCEG co-chair. In December 2001, VCEG and the Moving Picture Experts Group (MPEG – ISO/IEC JTC 1/SC 29/WG 11) formed a Joint Video Team (JVT), with the charter to finalize the video coding standard. Formal approval of the specification came in March 2003. The JVT is chaired by Gary Sullivan, Thomas Wiegand, and Ajay Luthra (Motorola, USA). In June 2004, the Fidelity Range Extensions (FRExt) project was finalized. Since January 2005, the JVT has been working on an extension of H.264/AVC towards scalability via an annex called Scalable Video Coding (SVC), and the JVT management team was extended by Jens-Reiner Ohm (Aachen University, Germany). Since July 2006, the JVT has also been working on an extension of H.264/AVC towards multi-view video coding (MVC).

Versions

Versions of the H.264/AVC standard include the following completed revisions (dates are final approval dates in ITU-T, while final "International Standard" approval dates in ISO/IEC are somewhat different and later in most cases):

  • First version containing Baseline, Extended, and Main profiles (May 2003).
  • Corrigendum containing various minor corrections (May 2004).
  • Second major version, containing the Fidelity Range Extensions (FRExt) with the High, High 10, High 4:2:2, and High 4:4:4 profiles (March 2005).
  • Corrigendum containing various minor corrections and adding three aspect ratio indicators (September 2005).
  • Amendment containing various minor changes (June 2006):
    • Removal of prior High 4:4:4 profile (processed as a corrigendum in ISO/IEC).
    • Minor extension adding extended-gamut color space support (bundled with above-mentioned aspect ratio indicators in ISO/IEC).
  • Addition of High 4:4:4 Predictive and four Intra-only profiles (High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra, and CAVLC 4:4:4 Intra) (April 2007).

Planned additions:

  • Scalable video coding (SVC) – not yet completed.
  • Corrigendum containing various minor corrections – not yet completed.
  • Multi-view coding (MVC) – not yet completed.

Patent licensing

In countries where software patent regulations are upheld, the vendors of products which make use of H.264/AVC are expected to pay patent licensing royalties for the patented technology that their products use. A private organization known as MPEG LA, which is not affiliated in any way with the MPEG standardization organization, administers the licenses for patents applying to this standard, as well as the patent pools for MPEG-2 Part 1 Systems, MPEG-2 Part 2 Video, MPEG-4 Part 2 Video, and other technologies.

In January 2007, a U.S. District Court jury gave an advisory opinion that one patent owned by Qualcomm should be invalidated.[2] Qualcomm had claimed that technology covered by the patent had been incorporated into H.264 in violation of its patent rights.[3][4] The U.S. District Court judge has yet to rule on the verdict.[5]

Open Source/Free Software licensing

Discussions are often held regarding the legality of free software implementations of codecs like H.264, especially concerning the legal use of GNU LGPL- and GPL-licensed implementations of H.264 and other patented codecs. The consensus in these discussions is that the allowable use depends on the laws of the local jurisdiction. If a product is operated and/or shipped in a country or group of countries where none of the patents covering H.264 apply, then using, for example, an LGPL implementation of the codec is not a problem: there is no conflict between the software license and the (non-existent) patent license.

Conversely, shipping a product in the US which includes an LGPL H.264 decoder/encoder would be in violation of the software license of the codec implementation. In simple terms, the LGPL and GPL licenses require that any rights held in conjunction with distributing and using the code also apply to anyone receiving the code, and no further restrictions are put on distribution or use. If there is a requirement for a patent license to be sought, this is a clear violation of both the GPL and LGPL terms. Thus, the right to distribute patent-encumbered code under those licenses as part of the product is revoked per the terms of the GPL and LGPL.

No known court cases have tested whether this legal interpretation is correct; however, it fits best with the statements made by the Free Software Foundation on this patent-rights issue, which would likely serve as an expert/authoritative source on interpretation of the GPL and LGPL in a possible lawsuit.[citation needed]

Applications

H.264/AVC experienced widespread adoption within a few years of the completion of the standard. It is employed widely in applications ranging from television broadcast to video for mobile devices. In order to ensure compatibility and problem-free adoption of H.264/AVC, many standards bodies have amended or added to video standards so that users of these standards can employ H.264/AVC.

Both of the major candidate next-generation DVD rival formats deployed in 2006 include the H.264/AVC High Profile as a mandatory player feature—specifically:

  • The HD DVD format of the DVD Forum
  • The Blu-ray Disc format of the Blu-ray Disc Association (BDA)

The Digital Video Broadcast (DVB) standards body in Europe approved the use of H.264/AVC for broadcast television in Europe in late 2004. The Advanced Television Systems Committee (ATSC) standards body in the United States is considering the possibility of specifying one or two advanced video codecs for its optional Enhanced-VSB (E-VSB) transmission mode for use in U.S. broadcast television. For this purpose, it has included H.264/AVC and VC-1 in the Candidate Standards CS/TSG-659r2[6] and CS/TSG-658r1[7], respectively. The status of terrestrial broadcast adoption in some specific countries is as follows:

  • The prime minister of France announced the selection of H.264/AVC as a requirement for receivers of HDTV and pay TV channels for digital terrestrial broadcast television services (referred to as "TNT") in France in late 2004.
  • The terrestrial broadcast systems in Brazil, Estonia and Slovenia are expected to use H.264/AVC for all digital television services.
  • Its use has begun in Lithuania.
  • It is expected that countries like Ukraine which, as of April 2007, have not launched nationwide DVB-T services (and thus don't have an installed base of legacy MPEG-2 receivers) will use H.264/AVC for their DVB-T broadcasts.
  • The Digital Multimedia Broadcast (DMB) service in the Republic of Korea will use H.264/AVC.
  • Mobile-segment terrestrial broadcast services of ISDB-T in Japan will use the H.264/AVC codec, including major broadcasters such as NHK and Fuji Television.
  • Norwegian NTV will use H.264/AVC for its national DVB-T broadcasting starting October 2007 in central southern areas of Norway. Norway will be among the first to use MPEG-4/AVC exclusively in all its terrestrial television broadcasts, when the analogue transmitters are switched off in 2009.
  • Hong Kong's leading broadcaster, TVB, selected H.264 for its new digital services there, including an HDTV service, using the Chinese DMB-T/H system, starting from the end of 2007.[8]

Direct broadcast satellite TV services will also use the new standard.
