Between the Lines
The evolution of revolutionary broadcast technology
Introductions to the topic of data acquisition eventually include a discussion of sampling. This is not due to outmoded homage to tradition, but simply because we do not have the technology to record an infinite number of items. Our present paradigm of the universe holds that time is a continuous variable and, as such, can be divided into ever smaller increments until the width of each interval approaches the limit of zero. With some experimental effort, spectroscopists can measure time slices on the order of femtoseconds (10⁻¹⁵ seconds) and, as far as we can tell, not much happens in the physical world between successive measurements at this time scale. If we desired to record events occurring at this resolution, we would need at least one million gigabits of storage space every second: a pretty good practical definition of infinity at our current level of technology.
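That "practical infinity" is easy to check with back-of-the-envelope arithmetic. The short Python sketch below assumes the smallest conceivable record, a single bit stored for each femtosecond-wide time slice:

```python
# Back-of-the-envelope check of the storage claim, assuming the
# minimal possible record: one bit per femtosecond-wide time slice.
sample_interval = 1e-15                    # seconds per sample
samples_per_second = 1 / sample_interval   # 1e15 samples every second

bits_per_sample = 1                        # assumed minimal record
bits_per_second = samples_per_second * bits_per_sample

gigabits_per_second = bits_per_second / 1e9
print(f"{gigabits_per_second:,.0f} gigabits per second")
# -> 1,000,000 gigabits per second: one million gigabits, every second
```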
Acquiring and storing discrete data values over time often requires us to sample events at a much lower rate than they actually occur, a process known as frequency down-conversion. The well-known Nyquist-Shannon sampling theorem states that the sampling rate must be greater than twice the frequency of the event we wish to capture and analyze. Processes occurring faster than half the sampling rate produce nonsensical artifacts, known as aliasing, in the collected data, as illustrated by the "wagon-wheel effect." When the spokes of a rotating wagon wheel are photographed by a motion picture camera operating at 24 frames-per-second (fps), the rotation appears as it should until the wheel reaches 12 rotations-per-second, after which the wheel looks as if it is rapidly spinning in the opposite direction. This optical illusion continues until the wheel appears to slow down and stop as its rotation rate reaches the frame rate of the camera.
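The folding that produces this illusion takes only a few lines to model. The Python sketch below is illustrative, not a simulation of actual film; it assumes the viewer infers the smallest rotation consistent with successive frames, which wraps the per-frame motion into plus or minus half a turn:

```python
# A minimal sketch of the wagon-wheel effect: sample a wheel's angle
# at 24 fps and report the apparent (aliased) rotation rate.
FPS = 24  # camera frame rate

def apparent_rate(true_rate_hz: float, fps: float = FPS) -> float:
    """Rotation rate the eye infers from successive frames.

    Assumes the eye picks the smallest movement between frames,
    folding the per-frame rotation into the range [-0.5, 0.5) turns.
    """
    turns_per_frame = true_rate_hz / fps
    folded = (turns_per_frame + 0.5) % 1.0 - 0.5
    return folded * fps

for rate in (6, 11, 13, 18, 24):
    print(f"true {rate:>2} rot/s -> appears {apparent_rate(rate):+.1f} rot/s")
# true  6 rot/s -> appears +6.0 rot/s   (below Nyquist: faithful)
# true 13 rot/s -> appears -11.0 rot/s  (past Nyquist: spins backward)
# true 24 rot/s -> appears +0.0 rot/s   (matches frame rate: looks stopped)
```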
Even though 35-mm motion picture and still photography film are manufactured in continuous rolls, each camera advances the film while the shutter is closed and then holds the film stationary during a finite exposure. During playback, a cinematic projector also closes its shutter as the film is advanced, and then displays the next image for one twenty-fourth of a second. While this technology is still in wide use today, the human visual system (HVS) can perceive processes occurring faster than 24 fps; when film is displayed at its native acquisition rate, the HVS notices the shutter frequency as image "flicker." To minimize flicker, movie projectors operate their shutters at double (48 Hz) or even triple (72 Hz) the frame rate of the film, facilitating frequency "up-conversion" to a rate faster than the HVS can perceive.
The electronic broadcast of black and white motion pictures in the U.S. was standardized by the National Television System Committee (NTSC) in 1941. Television cameras captured images as a stack of intensity scan lines at the same frequency as their 60-Hz AC power supplies. Limited by the vacuum tube technology of the day, the electronics could only record or display a little over 240 scan lines in one sixtieth of a second. The resulting images appeared flicker-free, but very small. The NTSC doubled the image height to 484 scan lines and mandated the use of image scan line "interlacing," developed by RCA in the 1920s. Given a vertical stack of scan lines, this technique alternately captures the 242 odd and 242 even lines of an image at 60 Hz, ultimately producing an "interlaced" image having an effective frame rate of 30 fps and a flicker-free refresh rate of 60 Hz.
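The bookkeeping of interlacing is easy to demonstrate in code. The NumPy sketch below stands in a small toy array for a real image, splits it into its 242 odd and 242 even scan lines, and weaves the two fields back into a full frame:

```python
import numpy as np

# NTSC-style interlacing on a toy 484-line "image" (4 pixels wide).
frame = np.arange(484 * 4).reshape(484, 4)

odd_field = frame[0::2]    # scan lines 1, 3, 5, ... (242 lines)
even_field = frame[1::2]   # scan lines 2, 4, 6, ... (242 lines)
assert odd_field.shape[0] == even_field.shape[0] == 242

# Each field is captured and displayed in 1/60 s, so a complete
# 484-line frame arrives every 1/30 s: 30 fps effective frame rate,
# 60-Hz flicker-free refresh.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = odd_field   # weave the two fields back together
rebuilt[1::2] = even_field
assert np.array_equal(rebuilt, frame)
```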
In addition to being fodder for a documentary on the history of film and broadcast television, frame rates and interlacing are currently hot topics in the area of digital signal processing (DSP). The cathode ray tube (CRT) monitors of the 1980s, followed closely by modern liquid crystal displays (LCDs), plasma display panels (PDPs) and digital light processing (DLP) systems, employ technology capable of updating the entire image in a single pass, known as progressive scanning. Current standards for high-definition television (HDTV) include the transmission of progressively scanned 1280 (w) x 720 (h) images, a format known as 720p, and 1920 (w) x 1080 (h) images, known as 1080p.
The technical challenge for display manufacturers is how to convert the 440 (w) x 484 (h) interlaced images of legacy NTSC (480i) into high-quality content for display on expensive, flat-panel HDTVs. There are several popular methods used to accomplish this task, including "deinterlacing" the images into progressive scan, interpolating image pixels to increase the spatial resolution, and applying predictive motion algorithms to frequency up-convert the 30 fps to as high as 60 fps. Each of these processes is afflicted by new "wagon-wheel" effects of its own, and DSP developers are racing to provide solutions in a bid to become part of a new standard.
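As a taste of the simplest of these methods, the Python sketch below implements a basic "bob" deinterlacer; it is an illustrative example under its own assumptions, not any manufacturer's algorithm. Interpolating each 242-line field to full height and emitting one frame per field performs deinterlacing and 60-fps up-conversion in a single step:

```python
import numpy as np

def bob_deinterlace(field: np.ndarray, odd_field: bool) -> np.ndarray:
    """Build a full-height progressive frame from a single field.

    Missing scan lines are averaged from their vertical neighbors;
    emitting one frame per field turns 60 fields/s of interlaced
    video into 60 progressive frames per second.
    """
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    if odd_field:                # field holds scan lines 1, 3, 5, ...
        frame[0::2] = field
        frame[1:-1:2] = (field[:-1] + field[1:]) / 2
        frame[-1] = field[-1]    # bottom line has one neighbor: copy it
    else:                        # field holds scan lines 2, 4, 6, ...
        frame[1::2] = field
        frame[2::2] = (field[:-1] + field[1:]) / 2
        frame[0] = field[0]      # top line has one neighbor: copy it
    return frame

field = np.linspace(0, 1, 242 * 4).reshape(242, 4)  # toy 242-line field
print(bob_deinterlace(field, odd_field=True).shape)  # -> (484, 4)
```

The price of this simplicity is halved vertical detail and a "twitter" on fine horizontal edges, a small example of the new artifacts mentioned above.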
Bill Weaver is an assistant professor in the Integrated Science, Business and Technology Program at La Salle University. He may be contacted at [email protected].