Color Signal

Color Signals: The use of so-called YUV and YIQ spaces (rather than the RGB space) in color image processing provides statistical as well as perceptual efficiency.

From: Handbook of Visual Communications , 1995

Digital systems

Martin Plonus , in Electronics and Communications for Scientists and Engineers (Second Edition), 2020

9.6.3 Color analog TV

In 1953 the NTSC standard, used in the USA, defined a TV picture as having 525 horizontal lines, 30 frames/s, and a 6 MHz bandwidth. In other regions of the world the standards differ: in Europe, for instance, the standard is 625 lines, 25 frames/s, and 8 MHz. Analog broadcasting in the US ended in 2009; only digital transmissions have been used since then.

For analog television to display color, we need the three primary colors, red, green, and blue (RGB), which can be combined to give any color. Streaming three additional color signals was unacceptable because it would require an additional 18 MHz of bandwidth (just as a monochrome signal requires 6 MHz of frequency space, each color signal would require the same). Furthermore, since the RGB components are correlated, transmitting them separately would be redundant and an inefficient use of bandwidth. In addition, black-and-white TVs needed a backward-compatible color signal, so that the black-and-white portion of a color broadcast could still be viewed. This was a difficult task, in addition to the even more difficult task of transmitting the new composite color signal, which contains considerably more information than a black-and-white signal, over the same 6 MHz bandwidth. Before this extraordinary engineering task could be realized, several crucial observations were made. The first was just how low the color resolution can be while still producing a very good image. The human eye has 20–30 times more rods (brightness detectors) than cones (color detectors), so we have much more resolution and sensitivity to brightness (luminance) than to color (chrominance). For example, the human eye cannot distinguish color in small objects and responds only to their changes in brightness. In practice this means that surprisingly little color needs to be added to a black-and-white signal to produce a good color TV image. It also means that a black-and-white TV will produce a good monochrome image from a color broadcast, because the small color component has little effect on the image (sometimes a mildly objectionable dot pattern would appear in areas of the screen where the colors are particularly strong).

Our eyes' tolerance of low-resolution color information is the key to how compatible color transmissions work. As pointed out before, a CRT in a black-and-white TV displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. As the electron beam passes each point on the screen, the intensity of the beam is varied, varying the luminance at that spot. A color television system is identical except that the additional chrominance information in the received signal (after the RGB signals are extracted from it) also controls the color at that spot (a color television has a color-capable cathode ray tube with three guns, one for each RGB color; hence three electron beams move simultaneously across the screen; each spot on the screen is divided into three RGB-sensitive phosphors, one red, one green, and one blue subpixel, which light up when hit by electrons from the appropriate gun). To accomplish this latter task, the primary RGB signals in color imaging systems were first transformed into a luminance signal (Y) and chrominance signals (I, Q). As noted above, the human eye responds primarily to changes in luminance, which implies that less bandwidth is required to encode chrominance information than luminance information (which requires a band of 4.2 MHz). Nevertheless, the additional color bandwidth would require a channel larger than 6 MHz, which simply was not available. However, by clever frequency interlacing, the luminance and chrominance signals were able to share the same 4.2 MHz video band. The essential aspect of color television is therefore the way color information coexists with monochrome information. How was this achieved? By using luminance-chrominance color coordinates YIQ, and by QAM-modulating the chrominance components and placing them at the high end of the 4.2 MHz luminance spectrum, as described next.
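The RGB-to-YIQ transformation can be sketched numerically. The snippet below (Python) uses the commonly quoted NTSC matrix coefficients, which are an assumption added here for illustration rather than values given in this chapter, and shows both the forward transform and the inverse that a receiver would apply.

import numpy as np

# Commonly quoted NTSC RGB -> YIQ matrix (assumed values, for illustration only).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: one chrominance axis
    [0.211, -0.523,  0.312],   # Q: the other chrominance axis
])

def rgb_to_yiq(rgb):
    """Convert an RGB triple (values in 0..1) to the YIQ representation."""
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

def yiq_to_rgb(yiq):
    """Invert the transform, as a receiver would after demodulation."""
    return np.linalg.solve(RGB_TO_YIQ, np.asarray(yiq, dtype=float))

y, i, q = rgb_to_yiq([1.0, 0.5, 0.25])        # an orange-ish pixel
print(f"Y={y:.3f}  I={i:.3f}  Q={q:.3f}")
print("round trip:", yiq_to_rgb([y, i, q]))    # recovers the original RGB values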

To encode the color signal, two color-difference signals, (R-Y) and (B-Y), were formed, where Y is the luminance signal (the lightness and darkness of colors) and R, B are the red and blue signals. Since most of the information in an image is in the luminance, the color-difference signals are normally small. Also, the third color-difference signal (G-Y) is mostly redundant, so only the two color-difference signals are required. Next, the two difference signals were combined to create two new color signals, I (in phase) and Q (in quadrature), which encode the chrominance information. If these two signals were used to amplitude-modulate a carrier frequency, each signal would generate its own two sidebands (upper and lower), and four sidebands would be too wide to fit in the existing video band. To transmit (broadcast) the color information, an efficient modulation technique is therefore needed. Quadrature amplitude modulation (QAM) was chosen; it is a technique that can transmit two analog signals, I and Q, by modulating the amplitude and phase of a single subcarrier wave (it is equivalent to having two carriers at the same frequency which differ in phase by 90°, hence the name quadrature). QAM, by combining the two carriers and sending the combined signals in a single transmission, essentially doubles the effective bandwidth of a channel, so the color signal (subcarrier plus sidebands) requires less space in the luminance bandwidth (video band). To restate, the QAM technique simultaneously amplitude-modulates a 3.58 MHz subcarrier by the in-phase sine signal I and by the quadrature cosine signal Q. Because these sideband frequencies lie within the luminance signal band, they are called "subcarrier" sidebands rather than "carrier" sidebands.
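As a rough illustration of this quadrature modulation, the sketch below (Python; the sample rate, baseband test signals, and crude moving-average lowpass filter are assumptions made for the example, not broadcast parameters) modulates two slowly varying signals onto sine and cosine versions of a 3.58 MHz subcarrier and recovers them by synchronous demodulation.

import numpy as np

FS = 20e6          # assumed sample rate for the sketch (Hz)
F_SC = 3.58e6      # chrominance subcarrier frequency (Hz)
t = np.arange(0, 200e-6, 1 / FS)

# Slowly varying I and Q baseband signals (stand-ins for real chrominance).
i_bb = 0.3 * np.cos(2 * np.pi * 50e3 * t)
q_bb = 0.2 * np.sin(2 * np.pi * 30e3 * t)

# QAM: two carriers at the same frequency, 90 degrees apart, summed together.
chroma = i_bb * np.sin(2 * np.pi * F_SC * t) + q_bb * np.cos(2 * np.pi * F_SC * t)

def synchronous_demod(sig, ref):
    """Multiply by the reference carrier, then crudely lowpass with a moving average."""
    mixed = 2 * sig * ref
    kernel = np.ones(21) / 21            # ~1 us moving average, suppresses the 2*F_SC terms
    return np.convolve(mixed, kernel, mode="same")

i_rec = synchronous_demod(chroma, np.sin(2 * np.pi * F_SC * t))
q_rec = synchronous_demod(chroma, np.cos(2 * np.pi * F_SC * t))

print("max I recovery error:", np.max(np.abs(i_rec[300:-300] - i_bb[300:-300])))
print("max Q recovery error:", np.max(np.abs(q_rec[300:-300] - q_bb[300:-300])))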

The bandwidth required for color information must be held to a minimum, because taking too much space from the luminance band to make room for the chrominance signal degrades image quality. The color bandwidth is again determined by the eye's sensitivity to brightness versus color. For colors encoded by the I signal, where I = 0.7(R-Y) - 0.3(B-Y), the eye's response is such that good picture rendition can be obtained with I band-limited to about 1.5 MHz. Similarly, for the Q = 0.5(R-Y) + 0.4(B-Y) color signal, Q is band-limited to about 0.5 MHz. Above these frequencies the eye barely resolves color, and all gray information for an image is contained in the Y signal. The color bandwidth placements in the video band are shown in Fig. 9.35. The Q signal is double-sideband about the subcarrier; the I signal has a full lower sideband, but the upper is a vestigial sideband filtered to about 0.5 MHz.

Eventually the luminance signal Y and the two modulated chrominance signals are added, and this final composite signal is then used to modulate the video RF carrier (located 1.25 MHz above the lower edge of the 6 MHz channel), which is ultimately transmitted (broadcast) over the air for reception. At the receiving end, the RGB signals are reconstructed from the Y, I, and Q signals by the television receiver and control a color CRT screen, which displays the transmitted image for viewing.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128170083000097

Variability in normal and defective colour vision

M. Rodriguez-Carmona , J.L. Barbur , in Colour Design (Second Edition), 2017

3.4.1 The CIE (x,y) standard observer

The strength of colour signals is more difficult to measure since colour contrast can involve signal changes in each class of cone photoreceptor. In addition, visual stimuli are rarely defined only by colour contrast, and this makes it difficult to establish the extent to which colour contrast components contribute to perceived object conspicuity. The CIE (x,y) chromaticity chart can be used to plot the relative chromaticities of any wavelength radiance distribution. Fundamentally, a triplet of stimulus chromaticities, (x,y,z), can be linked back to the relative amplitude of signals generated in the three classes of cone photoreceptor in the eye when the retina is exposed to light of known spectral distribution (Wyszecki and Stiles, 1982). Although a large percentage of normal trichromats differ both in the relative numbers and in the exact spectral tuning of their cone photoreceptors, the introduction of the CIE (x,y) 1931 'standard' observer and the corresponding chromaticity chart has contributed significantly to the advancement of vision science and greatly benefited the development and introduction of vision standards in many occupational environments. Since linear displacements in the CIE (x,y) chromaticity chart are often used to quantify colour differences, it is of interest to examine how such displacements relate to the corresponding photoreceptor contrasts when a coloured stimulus is viewed against a uniform surround. Fig. 3.3 shows an enlarged centre section of the CIE (x,y) 1931 chart, with the centre cross indicating the chromaticity of the 'white light' (i.e. 0.305, 0.323) used by MacAdam in his pioneering experiments on colour detection thresholds (MacAdam, 1942). The distance between the centre cross and any point on the black ellipse represents the chromatic displacement the average young, normal trichromat needs in order to detect a stimulus defined only by colour signals (see Section 3.8.1 for details of the experimental methods involved). Different displacement directions are measured with respect to the horizontal axis and correspond to different hues, whilst the actual size of the chromatic displacement away from the background chromaticity appears to correlate more with chromatic saturation. By making use of the known spectral responsivities of each class of photoreceptor in the eye (see Fig. 3.3C), it is possible to calculate the corresponding cone contrasts (see Fig. 3.3D) along any line away from the background chromaticity (as shown in Fig. 3.3A). The graph in Fig. 3.3E shows an almost linear relationship between the linear distance measured away from the background chromaticity along the 345 degrees direction and the corresponding photoreceptor contrasts. The angle of each direction of interest is measured anticlockwise with respect to the horizontal axis. These observations suggest that chromatic displacement measured in this way may be a good indicator of colour signal strength, since we know that the colour saturation of the stimulus appears to increase with chromatic displacement distance.
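A small sketch of how such chromatic displacements can be expressed is given below (Python). The background chromaticity is the MacAdam 'white' quoted above; the 0.005 step size is an arbitrary illustrative value, not a measured threshold.

import math

BACKGROUND_XY = (0.305, 0.323)   # MacAdam 'white light' chromaticity quoted above

def displaced_chromaticity(distance, angle_deg, origin=BACKGROUND_XY):
    """Chromaticity reached by moving `distance` along `angle_deg` (anticlockwise
    from the horizontal x axis) away from the background chromaticity."""
    theta = math.radians(angle_deg)
    return (origin[0] + distance * math.cos(theta),
            origin[1] + distance * math.sin(theta))

def chromatic_displacement(xy, origin=BACKGROUND_XY):
    """Distance and direction (degrees anticlockwise from horizontal) of a stimulus
    chromaticity measured away from the background chromaticity."""
    dx, dy = xy[0] - origin[0], xy[1] - origin[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx)) % 360

# Example: a step of 0.005 chromaticity units along the 345 degrees direction.
xy = displaced_chromaticity(0.005, 345)
print("displaced chromaticity:", xy)
print("recovered distance and direction:", chromatic_displacement(xy))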


URL:

https://www.sciencedirect.com/science/article/pii/B9780081012703000035

Color and Multispectral Image Representation and Display

H.J. Trussell , in The Essential Guide to Image Processing, 2009

8.6 SAMPLING OF COLOR SIGNALS AND SENSORS

It has been assumed in most of this chapter that the color signals of interest can be sampled sufficiently well to permit accurate computation using discrete arithmetic. It is appropriate to consider this assumption quantitatively. From the previous sections, it is seen that there are three basic types of color signals to consider: reflectances, illuminants, and sensors. Reflectances usually characterize everyday objects, but occasionally man-made items with special properties, such as filters and gratings, are of interest. Illuminants vary a great deal, from natural daylight or moonlight to special lamps used in imaging equipment. The sensors most often used in color evaluation are those of the human eye. However, because of their use in scanners and cameras, CCDs and photomultiplier tubes are of great interest.

The most important sensor characteristics are the cone sensitivities of the eye or equivalently, the color-matching functions, e.g., Fig. 8.6. It is easily seen that the functions in Figs. 8.4, 8.6, and 8.7 are very smooth functions and have limited bandwidths. A note on bandwidth is appropriate here. The functions represent continuous functions with finite support. Because of the finite support constraint, they cannot be bandlimited. However, they are clearly smooth and have very low power outside of a very small frequency band. Using 2 nm representations of the functions, the power spectra of these signals are shown in Fig. 8.8. The spectra represent the Welch estimate where the data is first windowed, then the magnitude of the DFT is computed [2]. It is seen that 10 nm sampling produces very small aliasing error.

FIGURE 8.8. Power spectrum of CIE XYZ color-matching functions.
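The general procedure described above (window the finely sampled function, estimate its power spectrum, and check how much power lies beyond the Nyquist frequency implied by a coarser sampling interval) can be sketched as follows. The curve is a synthetic smooth stand-in rather than real CIE data, and the use of scipy.signal.welch is an assumption about tooling, not the author's code.

import numpy as np
from scipy.signal import welch

# Synthetic smooth 'color-matching-like' curve sampled every 2 nm over 400-700 nm.
wavelengths = np.arange(400, 701, 2)                          # nm
curve = np.exp(-0.5 * ((wavelengths - 550) / 40.0) ** 2)      # stand-in, not real CIE data

# Welch estimate: the data is windowed, then the magnitude of the DFT is averaged.
freqs, psd = welch(curve, fs=1 / 2.0, nperseg=64)             # fs in cycles per nm

# Fraction of estimated power above the Nyquist frequency implied by 10 nm sampling.
above = psd[freqs > 1 / (2 * 10.0)].sum() / psd.sum()
print(f"fraction of power above 0.05 cycles/nm: {above:.2e}")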

In the context of cameras and scanners, the actual photo-electric sensor should be considered. Fortunately, most sensors have very smooth sensitivity curves which have bandwidths comparable to those of the color-matching functions. See any handbook of CCD sensors or photomultiplier tubes. Reducing the variety of sensors to be studied can also be justified by the fact that filters can be designed to compensate for the characteristics of the sensor and bring the combination within a linear combination of the color-matching functions.

The function r(λ), which is sampled to give the vector r used in the Colorimetry section, can represent either reflectance or transmission. Desktop scanners usually work with reflective media. There are, however, several film scanners on the market which are used in this type of environment. The larger dynamic range of the photographic media implies a larger bandwidth. Fortunately, there is not a large difference over the range of everyday objects and images. Several ensembles were used for a study in an attempt to include the range of spectra encountered by image scanners and color measurement instrumentation [21]. The results showed again that 10 nm sampling was sufficient [15].

There are three major types of viewing illuminants of interest for imaging: daylight, incandescent, and fluorescent. There are many more types of illuminants used for scanners and measurement instruments. The properties of the three viewing illuminants can be used as a guideline for sampling and signal processing which involves other types. It has been shown that the illuminant is the determining factor for the choice of sampling interval in the wavelength domain [15].

Incandescent lamps and natural daylight can be modeled as filtered blackbody radiators. The wavelength spectra are relatively smooth and have relatively small bandwidths. As with previous color signals they are adequately sampled at 10 nm. Office lighting is dominated by fluorescent lamps. Typical wavelength spectra and their frequency power spectra are shown in Figs. 8.9 and 8.10.

FIGURE 8.9. Cool white fluorescent and warm white fluorescent.

FIGURE 8.10. Power spectra of cool white fluorescent and warm white fluorescent.

It is with the fluorescent lamps that the 2 nm sampling becomes suspect. The peaks that are seen in the wavelength spectra are characteristic of mercury and are delta function signals at 404.7 nm, 435.8 nm, 546.1 nm, and 578.4 nm. The fluorescent lamp can be modeled as the sum of a smoothly varying signal and a delta function series:

(8.42) \( l(\lambda) = l_d(\lambda) + \sum_{k=1}^{q} \alpha_k \, \delta(\lambda - \lambda_k), \)

where \( \alpha_k \) represents the strength of the spectral line at wavelength \( \lambda_k \). The wavelength spectrum of the phosphors is relatively smooth, as seen from Fig. 8.9.

It is clear that the fluorescent signals are not bandlimited in the sense used previously. The amount of power outside of the band is a function of the positions and strengths of the line spectra. Since the lines occur at known wavelengths, it remains only to estimate their power. This can be done by signal restoration methods which can use the information about this specific signal. Using such methods, the frequency spectrum of the lamp may be estimated by combining the frequency spectra of its components:

(8.43) \( L(\omega) = L_d(\omega) + \sum_{k=1}^{q} \alpha_k \, e^{j\omega(\lambda_0 - \lambda_k)}, \)

where \( \lambda_0 \) is an arbitrary origin in the wavelength domain. The bandlimited spectrum \( L_d(\omega) \) can be obtained from the sampled restoration and is easily represented by 2 nm sampling.
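A small numerical sketch of the model in Eqs. (8.42) and (8.43) is given below (Python). The mercury line wavelengths are those quoted in the text, while the line strengths and the smooth phosphor continuum are illustrative assumptions; the point is simply that the line terms keep contributing at frequencies where the smooth part has already decayed, so the total is not bandlimited.

import numpy as np

# Mercury line wavelengths quoted in the text (nm); the strengths are assumed.
LINES_NM = np.array([404.7, 435.8, 546.1, 578.4])
ALPHAS = np.array([0.3, 0.5, 1.0, 0.8])

wavelengths = np.arange(380.0, 731.0, 2.0)                      # 2 nm grid
smooth = np.exp(-0.5 * ((wavelengths - 580.0) / 25.0) ** 2)     # stand-in phosphor continuum
LAMBDA0 = 380.0                                                 # arbitrary wavelength origin

def spectrum_parts(omega):
    """Evaluate Eq. (8.43): smooth-part transform plus the spectral-line terms."""
    l_d = 2.0 * np.sum(smooth * np.exp(1j * omega * (LAMBDA0 - wavelengths)))
    lines = np.sum(ALPHAS * np.exp(1j * omega * (LAMBDA0 - LINES_NM)))
    return l_d, lines

for cycles_per_nm in (0.01, 0.05, 0.25):
    omega = 2.0 * np.pi * cycles_per_nm
    l_d, lines = spectrum_parts(omega)
    print(f"{cycles_per_nm:.2f} cycles/nm: |smooth part| = {abs(l_d):8.3f}   |line part| = {abs(lines):6.3f}")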


URL:

https://www.sciencedirect.com/science/article/pii/B9780123744579000081

NTSC, PAL, and SECAM

Keith Jack , in Digital Video and DSP, 2008

NTSC Overview

The first color television system was developed in the United States, and on December 17, 1953, the Federal Communications Commission (FCC) approved the transmission standard, with broadcasting approved to begin January 23, 1954. Most of the work for developing a color transmission standard that was compatible with the (then current) 525-line, 60-field-per-second, 2:1 interlaced monochrome standard was done by the National Television System Committee (NTSC).

Luminance Information

The monochrome luminance (Y) signal is derived from gamma-corrected red, green, and blue (R′G′B′) signals:

Y = 0.299R′ + 0.587G′ + 0.114B′

Technology Trade-offs

Due to the sound subcarrier at 4.5 MHz, a requirement was made that the color signal fit within the same bandwidth as the monochrome video signal (0–4.2  MHz). For economic reasons, another requirement was made that monochrome receivers must be able to display the black and white portion of a color broadcast and that color receivers must be able to display a monochrome broadcast.

Color Information

Insider Info

The eye is most sensitive to spatial and temporal variations in luminance; therefore, luminance information was still allowed the entire bandwidth available (0–4.2 MHz). Color information, to which the eye is less sensitive and which therefore requires less bandwidth, is represented as hue and saturation information.

The hue and saturation information is transmitted using a 3.58-MHz subcarrier, encoded so that the receiver can separate the hue, saturation, and luminance information and convert them back to RGB signals for display. Although this allows the transmission of color signals within the same bandwidth as monochrome signals, the problem still remains as to how to separate the color and luminance information cost-effectively, since they occupy the same portion of the frequency spectrum.

To transmit color information, U and V or I and Q "color difference" signals are used:

R′ - Y = 0.701R′ - 0.587G′ - 0.114B′
B′ - Y = -0.299R′ - 0.587G′ + 0.886B′

U = 0.492(B′ - Y)
V = 0.877(R′ - Y)

I = 0.596R′ - 0.275G′ - 0.321B′ = V cos 33° - U sin 33° = 0.736(R′ - Y) - 0.268(B′ - Y)

Q = 0.212R′ - 0.523G′ + 0.311B′ = V sin 33° + U cos 33° = 0.478(R′ - Y) + 0.413(B′ - Y)
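These relationships can be checked numerically. The sketch below (Python; the 75% yellow test value is an arbitrary example, not a figure from the text) computes Y, U, and V, forms I and Q by rotating the U/V axes through 33°, and compares them with the colour-difference weights quoted above; the two forms agree to rounding.

import math

def ntsc_components(r, g, b):
    """Compute Y, U, V, I, Q from gamma-corrected R'G'B' values in 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    # Rotate the (U, V) axes by 33 degrees to obtain the I/Q axes.
    s, c = math.sin(math.radians(33)), math.cos(math.radians(33))
    i = v * c - u * s
    q = v * s + u * c
    return y, u, v, i, q

y, u, v, i, q = ntsc_components(0.75, 0.75, 0.0)   # 75% yellow (example value)
print(f"Y={y:.3f} U={u:.3f} V={v:.3f} I={i:.3f} Q={q:.3f}")

# Cross-check against the colour-difference form quoted in the text.
r_y, b_y = 0.75 - y, 0.0 - y
print("I (alt):", round(0.736 * r_y - 0.268 * b_y, 3), " Q (alt):", round(0.478 * r_y + 0.413 * b_y, 3))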

The scaling factors to generate U and V from (B′−Y) and (R′−Y) were derived due to overmodulation considerations during transmission. If the full range of (B′−Y) and (R′−Y) were used, the modulated chrominance levels would exceed what the monochrome transmitters were capable of supporting. Experimentation determined that modulated subcarrier amplitudes of 20% of the Y signal amplitude could be permitted above white and below black. The scaling factors were then selected so that the maximum level of 75% color would be at the white level.

I and Q were initially selected since they relate more closely to the variation of color acuity than U and V do. The color response of the eye decreases as the size of viewed objects decreases. Small objects, occupying frequencies of 1.3–2.0 MHz, provide little color sensation. Medium objects, occupying the 0.6–1.3 MHz frequency range, are acceptable if reproduced along the orange-cyan axis. Larger objects, occupying the 0–0.6 MHz frequency range, require full three-color reproduction.

The I and Q bandwidths were chosen accordingly, and the preferred color reproduction axis was obtained by rotating the U and V axes by 33°. The Q component, representing the green-purple color axis, was band-limited to about 0.6   MHz. The I component, representing the orange-cyan color axis, was band-limited to about 1.3   MHz.

Another advantage of limiting the I and Q bandwidths to 1.3   MHz and 0.6   MHz, respectively, is to minimize crosstalk due to asymmetrical sidebands as a result of lowpass filtering the composite video signal to about 4.2   MHz. Q is a double sideband signal; however, I is asymmetrical, bringing up the possibility of crosstalk between I and Q. The symmetry of Q avoids crosstalk into I; since Q is bandwidth limited to 0.6   MHz, I crosstalk falls outside the Q bandwidth.

U and V, both bandwidth-limited to 1.3   MHz, are now commonly used instead of I and Q. When broadcast, UV crosstalk occurs above 0.6   MHz; however, this is not usually visible due to the limited UV bandwidths used by NTSC decoders for consumer equipment.

The UV and IQ vector diagram is shown in Figure 6.1.

Figure 6.1. UV and IQ Vector Diagram for 75% Color Bars.

Color Modulation

I and Q (or U and V) are used to modulate a 3.58   MHz color subcarrier using two balanced modulators operating in phase quadrature: one modulator is driven by the subcarrier at sine phase; the other modulator is driven by the subcarrier at cosine phase.

Hue information is conveyed by the chrominance phase relative to the subcarrier. Saturation information is conveyed by chrominance amplitude. In addition, if an object has no color (such as a white, gray, or black object), the subcarrier is suppressed.
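The phase/amplitude view of the modulated chrominance can be demonstrated in a few lines (Python; the sample rate and the constant U/V values are illustrative assumptions). The sum of the two balanced modulators is a single subcarrier whose amplitude equals the chrominance magnitude and whose phase angle, relative to the subcarrier reference, conveys the hue.

import numpy as np

F_SC = 3.58e6                       # colour subcarrier (Hz)
FS = 40e6                           # assumed sample rate for the sketch
t = np.arange(0, 5e-6, 1 / FS)

u, v = 0.3, -0.2                    # constant chrominance for one flat-colour region

# Two balanced modulators in phase quadrature, then summed.
chroma = u * np.sin(2 * np.pi * F_SC * t) + v * np.cos(2 * np.pi * F_SC * t)

# The sum is a single subcarrier: its amplitude relates to saturation,
# its phase angle to hue.
amplitude = np.hypot(u, v)
phase_deg = np.degrees(np.arctan2(v, u))
print(f"peak of modulated subcarrier: {chroma.max():.3f} (expected about {amplitude:.3f})")
print(f"chrominance phase angle: {phase_deg:.1f} degrees")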

Composite Video Generation

The modulated chrominance is added to the luminance information along with appropriate horizontal and vertical sync signals, blanking information, and color burst information, to generate the composite color video waveform shown in Figure 6.2.

Figure 6.2. (M) NTSC Composite Video Signal for 75% Color Bars.

The I and Q (or U and V) information can be transmitted without loss of identity as long as the proper color subcarrier phase relationship is maintained at the encoding and decoding process. A color burst signal, consisting of nine cycles of the subcarrier frequency at a specific phase, follows most horizontal sync pulses and provides the decoder with a reference signal so that it can recover the I and Q (or U and V) signals properly.

NTSC Standards

Figure 6.3 shows the common designations for NTSC systems. The letter M refers to the monochrome standard for line and field rates (525/59.94), a video bandwidth of 4.2   MHz, an audio carrier frequency 4.5   MHz above the video carrier frequency, and an RF channel bandwidth of 6   MHz. NTSC refers to the technique to add color information to the monochrome signal.

Figure 6.3. Common NTSC Systems.

NTSC 4.43 is commonly used for multi-standard analog VCRs. The horizontal and vertical timing is the same as (M) NTSC; color encoding uses the PAL modulation format and a 4.43361875-MHz color subcarrier frequency.

Noninterlaced NTSC is a 262-line, 60 frames-per-second version of NTSC. This format is identical to standard (M) NTSC, except that each frame consists of 262 noninterlaced lines.

Insider Info

NTSC–J, used in Japan, is the same as (M) NTSC, except there is no blanking pedestal during active video. Thus, active video has a nominal amplitude of 714   mV.


URL:

https://www.sciencedirect.com/science/article/pii/B9780750689755000066

A Simple VGA Interface

Peter Wilson , in Design Recipes for FPGAs (Second Edition), 2016

14.1 Introduction

The VGA interface is common to most modern computer displays and is based on a pixel map, color planes, and horizontal and vertical sync signals. A VGA monitor has three color signals (Red, Green, and Blue), and the intensity of each of those colors sets the final color seen on the display. For example, if Red were fully on but Blue and Green off, the color would be seen as a strong red. Each analog intensity is defined by a 2-bit digital word per color (e.g., red0 and red1) whose bits are connected to a simple digital-to-analog converter to obtain the correct output signal.

The resolution of the screen can vary from 480 × 320 up to much larger formats, but a standard default size is 640 × 480 pixels. This is 480 lines of 640 pixels each, so the aspect ratio is 640:480 (4:3), giving the classic landscape layout of a conventional monitor screen.

The VGA image is controlled by two signals: horizontal sync and vertical sync. The horizontal sync marks the start and finish of a line of pixels with a negative pulse in each case. The actual image data is sent in a 25.17 μs window within the 31.77 μs space between the sync pulses. (During the time that image data is not sent, the display is blanked and that part of the image is dark.) The vertical sync is similar, except that in this case the negative pulse marks the start and finish of each frame as a whole, and the frame (the image as a whole) occupies a 15.25 ms window within the 16.784 ms space between pulses.
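Working only from the line and frame timings quoted above, a short sketch (Python) can recover the approximate pixel clock and blanking budget. The derived numbers are approximations for illustration; the exact figures are fixed by the VGA standard.

# Derive approximate VGA timing from the figures quoted above (640x480 display).
H_ACTIVE_US = 25.17      # visible portion of a line (us)
H_TOTAL_US = 31.77       # full line period including blanking (us)
V_TOTAL_MS = 16.784      # full frame period including blanking (ms)

pixel_clock_mhz = 640 / H_ACTIVE_US                 # pixels per microsecond = MHz
total_pixels_per_line = H_TOTAL_US * pixel_clock_mhz
lines_per_frame = V_TOTAL_MS * 1000 / H_TOTAL_US
refresh_hz = 1000 / V_TOTAL_MS

print(f"pixel clock     ~ {pixel_clock_mhz:.2f} MHz")
print(f"pixels per line ~ {total_pixels_per_line:.0f} (640 visible + blanking)")
print(f"lines per frame ~ {lines_per_frame:.1f} (480 visible + blanking)")
print(f"refresh rate    ~ {refresh_hz:.2f} Hz")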

There are some constraints on the spacing of the data between pulses, which will be considered later in this chapter, but it is clear that the key to a correct VGA output is the accurate definition of timing and data in the VHDL.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080971292000143

Video Technology

Louis E. FrenzelJr., in Electronics Explained (Second Edition), 2018

Video Compression

Generating a color video signal produces a serial bitstream that varies at a rate of hundreds of millions of bits per second. Consider these calculations:

For each pixel, there are three color signals of 8  bits each making a total of 8   ×   3   =   24   bits per pixel.

For a standard DTV screen, there are 480 lines of 640   pixels for a total of 307,200   pixels per frame or screen.

For one full screen or frame, there are 307,200   ×   24   =   7,372,800   bits.

If we want to display 60 frames per second, then the transmission rate is 7,372,800 × 60 = 442,368,000 bits per second, or 442.368 Mbps. That is an extremely high data rate.

To store one second of video in this format (60 frames), you would need a memory with 55,296,000 bytes, or 55.296 MB. And that is just one second.

To store 1   min of video you would need a memory of 3,317,760,000   bytes or just over 3.3   gigabytes (GB). And that is just 1   min. Multiply by 60 to get the amount of memory for an hour.
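The arithmetic above can be reproduced in a few lines (a small Python sketch using only the figures already quoted):

# Reproduce the uncompressed-video arithmetic from the text.
BITS_PER_PIXEL = 3 * 8                      # three 8-bit colour signals
PIXELS_PER_FRAME = 640 * 480                # 307,200 pixels per frame
FRAMES_PER_SECOND = 60

bits_per_frame = PIXELS_PER_FRAME * BITS_PER_PIXEL          # 7,372,800 bits
bits_per_second = bits_per_frame * FRAMES_PER_SECOND        # 442,368,000 bits/s
bytes_per_second = bits_per_second // 8                     # 55,296,000 bytes
bytes_per_minute = bytes_per_second * 60                    # 3,317,760,000 bytes

print(f"per frame : {bits_per_frame:,} bits")
print(f"bit rate  : {bits_per_second / 1e6:.3f} Mbps")
print(f"per second: {bytes_per_second / 1e6:.3f} MB")
print(f"per minute: {bytes_per_minute / 1e9:.3f} GB")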

Imagine the speeds and memory requirements for 1080p or a 120   frames-per-second format. And we have not even factored in the stereo audio digital signals that go along with this. Anyway, you are probably getting the picture here. First, data rates of 442.368   Mbps are not impossible, but they are impractical as they require too much bandwidth over the air and on a cable. And they are expensive. Furthermore, while memory devices such as CDs and DVDs are available to store huge amounts of data, they are still not capable of those figures. Therefore, when it comes to transmitting and storing digital video information, some technique must be used to reduce the speed of that digital signal and the amount of data that it produces.

This problem is handled by a digital technique known as compression. Digital compression is essentially a mathematical algorithm that takes the individual color pixel binary numbers and processes them in such a way as to reduce the total number of bits representing the color information. The whole compression process is way beyond the scope of this book, but suffice it to say that it is a technique that works well and produces a bitstream at a much lower rate. And, the compressed video will take up less storage in a computer memory chip.

The digital compression technique used in DTV is known as MPEG-2. MPEG refers to the Moving Picture Experts Group, an organization that develops video compression and other video standards. Another standard is MPEG-4 AVC. For 4K and 8K, a compression method called High Efficiency Video Coding (HEVC) is used.

What you have to think about when you consider the transmission of digital video from one place to another is that it is typically accompanied by audio. The audio is also in serial digital format. Those digital words will be transmitted along with the video digital words to create a complete TV signal.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128116418000114

Computer-Assisted Microscopy

Fatima A. Merchant , Kenneth R. Castleman , in The Essential Guide to Image Processing, 2009

27.6.4.2 Spectral Overlap

Since each of the four fluorophores is imaged in a separate channel, the problem of analyzing different color dots occurring at the same X-Y location is eliminated. However, to achieve total discrimination between different color signals, the effects of spectral overlap should be minimized. As seen in Fig. 27.16 (top panel), the RGBA image clearly shows the blue nucleus, and the red, green, and aqua dots. The individual red, green, blue, and aqua components of the image are also shown. As seen in each component image, in addition to the true color for each channel (white arrows), there is color bleed-through that occurs from neighboring spectral regions (yellow arrows). This is due to the unavoidable overlap among fluorophore emission spectra and RGB camera sensitivity spectra. The RGBA image was corrected to remove the overlap and separate the fluorophores using color compensation [34]. Figure 27.16 (bottom panel) shows the results of color compensation. The spectral bleed-through is effectively removed, and the different color dots are clearly separated in the individual color component images. Thus, using a 3-CCD color camera, along with the background subtraction and color compensation algorithms discussed above, we obtain good spectral separation with rapid image capture. Similarly, appropriate filter optics, used in conjunction with image processing, allows the capture of multicolor images. Following image capture, the cell and dot finding algorithms described above are applied to implement automatic aneuploidy screening.

FIGURE 27.16. A four-color image captured using a RGB camera and color compensation.
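The colour-compensation step cited above [34] is not reproduced here, but the underlying idea of removing linear spectral bleed-through can be sketched as inverting an assumed mixing matrix (Python; the matrix values below are purely illustrative, not calibration data from this system).

import numpy as np

# Assumed bleed-through (mixing) matrix: column j describes how fluorophore j
# spreads across the detection channels. Illustrative values only.
MIXING = np.array([
    [1.00, 0.15, 0.05, 0.02],
    [0.10, 1.00, 0.12, 0.04],
    [0.03, 0.08, 1.00, 0.20],
    [0.01, 0.02, 0.18, 1.00],
])

def compensate(measured):
    """Recover per-fluorophore signals from measured channel intensities by
    inverting the linear mixing model (measured = MIXING @ true)."""
    return np.linalg.solve(MIXING, np.asarray(measured, dtype=float))

true = np.array([0.0, 0.8, 0.0, 0.3])          # only two fluorophores actually present
measured = MIXING @ true                        # what the camera channels report
print("measured :", np.round(measured, 3))
print("unmixed  :", np.round(compensate(measured), 3))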


URL:

https://www.sciencedirect.com/science/article/pii/B9780123744579000275

Local area networking and associated cabling

BARRY J ELLIOTT , in Cable Engineering for Local Area Networks, 2000

RGB video systems

Although video can be digitised and transported on a LAN it is still seen as expensive and heavy on bandwidth. Many users still prefer to use a dedicated point-to-point or even switched analogue video system called RGB. RGB means splitting the colour signal into its three primary colours, i.e. red, green and blue. The receiver recombines the three colour signals taking its synchronisation signal usually from the green channel. On a structured cabling system three of the four pairs carry the three colour signals. RGB on structured cabling has replaced some of the more established financial information distribution systems such as Reuters and Bloomberg, which used to require their own dedicated cabling.

A problem that RGB signals have is differential delay, or asymmetric skew. Each colour component must arrive at the receiver at the same time if the original signal is to be reproduced. If they arrive separated in time, then the resulting picture will have annoying colour fringes. The latest cabling standards, such as category 5e, category 6 and category 7, all have differential delay requirements of 50 ns or less. This figure basically comes from the requirements of gigabit Ethernet, but RGB video needs a differential delay of 20 ns or better. It is possible to buy delay lines, which will slow down the fastest signal to the rate of the slowest signal, but this is an expensive option that needs setting up and subsequent recalibration. It is far better to specify a cable with a differential delay of 20 ns or less over a 100 m cabling link.
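To see why the tighter skew budget matters for RGB video, a short sketch (Python; the pixel-clock values are common display-format assumptions, not figures from the text) converts a pair-to-pair differential delay into an equivalent horizontal offset in pixels.

# How many pixels of colour fringing a given pair-to-pair skew corresponds to,
# for a few assumed pixel clocks (illustrative values).
PIXEL_CLOCKS_MHZ = {"VGA 640x480": 25.175, "XGA 1024x768": 65.0, "UXGA 1600x1200": 162.0}

for name, clk_mhz in PIXEL_CLOCKS_MHZ.items():
    pixel_time_ns = 1000.0 / clk_mhz                 # duration of one pixel in ns
    for skew_ns in (50, 20):
        offset = skew_ns / pixel_time_ns
        print(f"{name:>15}: {skew_ns} ns skew ~ {offset:.1f} pixel(s) of fringing")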


URL:

https://www.sciencedirect.com/science/article/pii/B9781855734883500119