2.2 Analog Video

We used to live in a world of analog images and video, where we dealt with photographic film, analog TV sets, videocassette recorders (VCRs), and camcorders. For video distribution, we relied on analog TV broadcasts and analog cable TV, which transmitted predetermined programming at a fixed rate. Analog video, by its nature, provided very limited interactivity, e.g., only channel selection on the TV and fast-forward search and slow-motion replay on the VCR. Additionally, we had to live with the NTSC/PAL/SECAM analog signal formats, with their well-known artifacts and very low still-frame image quality. In order to display NTSC signals on computer monitors or European TV sets, we needed expensive transcoders. In order to display a smaller version of the NTSC picture in a corner of the monitor, we first had to digitize the whole picture and then digitally reduce its size. Searching a video archive for particular footage required tedious visual scanning of many videotapes. Motion pictures were recorded on photographic film, which is a high-resolution analog medium, or on laser discs as analog signals using optical technology. Manipulation of analog video is not an easy task, since the analog signal must first be converted into digital form.

Today almost all video capture, processing, transmission, storage, and search are performed in digital form. In this section, we describe the nature of the analog-video signal, because an understanding of the history of video and the limitations of analog video formats is important. For example, interlaced scanning is a legacy of analog TV. We note that video digitized from analog sources is limited by the resolution and the artifacts of the respective analog signal.

2.2.1 Progressive vs. Interlaced Scanning

The analog-video signal refers to a one-dimensional (1D) signal s(t) of time that is obtained by sampling sc(x1, x2, t) in the vertical (x2) and temporal coordinates. This conversion of the 3D spatio-temporal signal into a 1D temporal signal by periodic vertical-temporal sampling is called scanning. The signal s(t), then, captures the time-varying image intensity sc(x1, x2, t) only along the scan lines. It also contains the timing information and blanking signals needed to align the pictures.

The most commonly used scanning methods are progressive scanning and interlaced scanning. Progressive scan traces a complete picture, called a frame, every Δt sec. The spot flies back from B to C, called the horizontal retrace, and from D to A, called the vertical retrace, as shown in Figure 2.5(a). For example, the computer industry uses progressive scanning with Δt=1/72 sec for monitors. On the other hand, the TV industry uses 2:1 interlaced scanning, where the odd-numbered and even-numbered lines, called the odd field and the even field, respectively, are traced in turn. A 2:1 interlaced scanning raster is shown in Figure 2.5(b), where the solid line and the dotted line represent the odd and the even fields, respectively. The spot snaps back from D to E, and from F to A, for even and odd fields, respectively, during the vertical retrace intervals.
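The two scan orders can be illustrated with a short sketch; the helper names and the 1-indexed line numbering below are illustrative, not part of any standard:

```python
# Sketch: the order in which the scan lines of a frame are visited
# under progressive and 2:1 interlaced scanning.

def progressive_order(num_lines):
    """Progressive scan: every line of the frame, top to bottom."""
    return list(range(1, num_lines + 1))

def interlaced_order(num_lines):
    """2:1 interlaced scan: the odd field (lines 1, 3, 5, ...) is
    traced first, then the even field (lines 2, 4, 6, ...)."""
    odd_field = list(range(1, num_lines + 1, 2))
    even_field = list(range(2, num_lines + 1, 2))
    return odd_field + even_field

print(progressive_order(6))   # [1, 2, 3, 4, 5, 6]
print(interlaced_order(6))    # [1, 3, 5, 2, 4, 6]
```

Note that both orders visit every line of the frame exactly once; interlacing only changes the order, splitting one frame into two temporally offset fields.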

Figure 2.5

Figure 2.5 Scanning raster: (a) progressive scan; (b) interlaced scan.

2.2.2 Analog-Video Signal Formats

Some important parameters of the video signal are the vertical resolution, aspect ratio, and frame/field rate. The vertical resolution is related to the number of scan lines per frame. The aspect ratio is the ratio of the width to the height of a frame. As discussed in Section 2.1.3, the human eye does not perceive flicker if the refresh rate of the display is more than 50 Hz. However, for analog TV systems, such a high frame rate, while preserving the vertical resolution, requires a large transmission bandwidth. Thus, it was determined that analog TV systems should use interlaced scanning, which trades vertical resolution for reduced flicker within a fixed bandwidth.
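The tradeoff can be put in numbers; the following sketch uses the 525-line, 29.97 frames/sec NTSC raster as an example:

```python
# Sketch: why 2:1 interlacing reduces flicker without extra bandwidth,
# using the 525-line, 29.97 frames/sec (NTSC) raster as an example.

lines_per_frame = 525
frame_rate = 29.97                         # frames/sec

# Bandwidth is proportional to the line rate (lines scanned per second),
# which is the same whether a frame is traced progressively or as two fields.
line_rate = lines_per_frame * frame_rate   # ~15,734 lines/sec

# Progressive: the screen refreshes once per frame -> 29.97 Hz,
# well below the ~50 Hz flicker threshold.
# Interlaced: each frame is two fields, so the screen refreshes at the
# field rate -> 59.94 Hz (> 50 Hz), at half the vertical resolution
# per refresh.
field_rate = 2 * frame_rate                # 59.94 Hz
print(round(line_rate), field_rate)
```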

An example analog-video signal s(t) is shown in Figure 2.6. Blanking pulses (black) are inserted during the retrace intervals to blank out retrace lines on the monitor. Sync pulses are added on top of the blanking pulses to synchronize the receiver’s horizontal and vertical sweep circuits. The sync pulses ensure that the picture starts at the top-left corner of the receiving monitor. The timing of the sync pulses is, of course, different for progressive and interlaced video.

Figure 2.6

Figure 2.6 Analog-video signal for one full line.

Several analog-video signal standards, all obsolete today, have different image parameters (e.g., spatial and temporal resolution) and differ in the way they handle color. These can be grouped as: i) component analog video; ii) composite video; and iii) S-video (Y/C video). Component analog video refers to individual red (R), green (G), and blue (B) video signals. The composite-video format encodes the chrominance components on top of the luminance signal for distribution as a single signal that has the same bandwidth as the luminance signal. Different composite-video formats, e.g., NTSC (National Television Systems Committee), PAL (Phase Alternation Line), and SECAM (Système Électronique Couleur Avec Mémoire), have been used in different regions of the world. The composite signal usually results in errors in color rendition, known as hue and saturation errors, because of inaccuracies in the separation of the color signals. S-video is a compromise between composite video and component video, where the video is represented by two signals: a luminance signal and a composite chrominance signal. The chrominance signals are based on the (I,Q) or (U,V) representation for NTSC, PAL, and SECAM systems. S-video was used in consumer-quality videocassette recorders and analog camcorders to obtain image quality better than that of composite video.

Cameras specifically designed for analog television pickup from motion-picture film were called telecine cameras. They employed frame-rate conversion from 24 frames/sec to 60 fields/sec.
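The 24 frames/sec to 60 fields/sec conversion is commonly performed by 3:2 pulldown, in which alternating film frames are held for three fields and two fields (24 × 5/2 = 60). A minimal sketch, with a hypothetical helper function:

```python
# Sketch (hypothetical helper): 3:2 pulldown, mapping 24 film frames/sec
# onto 60 video fields/sec by holding alternate frames for 3 and 2 fields.

def three_two_pulldown(film_frames):
    """Return the sequence of fields produced from a list of film frames."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 3 if i % 2 == 0 else 2   # 3 fields, then 2, alternating
        fields.extend([frame] * repeat)
    return fields

# Four film frames -> 3 + 2 + 3 + 2 = 10 fields; both span 1/6 sec
# (4 frames at 24 frames/sec, 10 fields at 60 fields/sec).
print(three_two_pulldown(['A', 'B', 'C', 'D']))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```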

2.2.3 Analog-to-Digital Conversion

The analog-to-digital (A/D) conversion process consists of pre-filtering (for anti-aliasing), sampling, and quantization of the component (R, G, B) signals or the composite signal. The ITU (International Telecommunication Union) and SMPTE (Society of Motion Picture and Television Engineers) have standardized sampling parameters for both component and composite video to enable easy exchange of digital video across different platforms. For A/D conversion of component signals, horizontal sampling rates of 13.5 MHz for the luma component and 6.75 MHz for the two chroma components were chosen, because they satisfy the following requirements:

  1. The sampling frequency must exceed the Nyquist rate, which is 4.2 × 2 = 8.4 MHz for the 525/30 NTSC luma signal and 5 × 2 = 10 MHz for the 625/50 PAL luma signal.
  2. The sampling rate should be an integral multiple of the line rate, so that samples in successive lines are vertically aligned (on top of each other).
  3. For component signals, a single sampling rate should serve both 525/30 and 625/50 systems; i.e., the sampling rate should be an integral multiple of the line rates (lines/sec) of both, 29.97 × 525 ≈ 15,734 and 25 × 625 = 15,625.
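These requirements can be verified numerically; a small sketch, using the exact 525/30 line rate of 525 × 30/1.001 lines/sec:

```python
# Sketch: checking that the 13.5 MHz luma sampling rate satisfies the
# requirements listed above.

luma_rate = 13.5e6                   # Hz
chroma_rate = 6.75e6                 # Hz (half the luma rate)

# 1. Above the Nyquist rates of both systems.
assert luma_rate > 8.4e6 and luma_rate > 10e6

# 2./3. An integral multiple of the line rate of BOTH systems, so that
# samples on successive lines are vertically aligned.
ntsc_line_rate = 525 * 30 / 1.001    # ~15,734.27 lines/sec (525/30)
pal_line_rate = 625 * 25             # 15,625 lines/sec (625/50)

print(luma_rate / ntsc_line_rate)    # 858 luma samples per line (525/30)
print(luma_rate / pal_line_rate)     # 864 luma samples per line (625/50)
```

The whole-number results (858 and 864 samples per line) are what make 13.5 MHz work for both systems; a rate such as 12 MHz would not divide both line rates evenly.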

For sampling the composite signal, the sampling frequency must be an integral multiple of the color sub-carrier frequency to simplify decoding of the sampled composite signal into RGB. It is possible to operate at three or four times the sub-carrier frequency, although most systems employ 4 × 3.58 = 14.32 MHz for NTSC and 4 × 4.43 = 17.72 MHz for PAL signals.
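As a quick check, using the rounded sub-carrier values quoted above (the exact sub-carrier frequencies are approximately 3.579545 MHz for NTSC and 4.43361875 MHz for PAL):

```python
# Sketch: composite-video sampling at four times the color sub-carrier
# frequency, using the rounded sub-carrier values quoted in the text.

ntsc_subcarrier = 3.58    # MHz (exact value: ~3.579545 MHz)
pal_subcarrier = 4.43     # MHz (exact value: 4.43361875 MHz)

print(4 * ntsc_subcarrier)   # 14.32 MHz
print(4 * pal_subcarrier)    # 17.72 MHz
```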
