Color Imaging Techniques
This section covers advanced RGB imaging, as well as other color image techniques such as LRGB and CMYK imaging.
Advanced RGB Techniques
The basic idea behind RGB imaging is to combine three images taken through red, green, and blue filters. This creates a full-color image. Since color filters let less light through than a clear filter would, an individual exposure through a color filter tends to be noisier. For this reason, multiple exposures are taken in each color and combined to create a less-noisy final image. See the Combining Images section for tips on how to stack these images for best results.
Using different exposure times or using different numbers of exposures for each color is a common technique. The idea behind this is achieving a more accurate representation of the color. (See the True Color Theory section for more details.) It is common, for example, to see equal-length red and green exposures and then a blue exposure which is 50% longer.
Using Different RGB Exposure Times
The simplest way to choose exposure times is to look up the quantum efficiency (or sensitivity) of the CCD chip at the primary wavelength of each color filter. Red filters typically transmit light at around 650nm, green filters at around 550nm, and blue filters at about 450nm.
Above: The sensitivity curve of a typical CCD chip.
For the CCD chip above, the quantum efficiency is 86% at 650nm, 84% at 550nm, and 66% at 450nm. This means that equal exposures will detect approximately equal amounts in red and green. However, an equal exposure will pick up about 23% less in blue due to the lower sensitivity (66% is roughly three-quarters of 86%). In reality the situation is a bit more complex, but this is a good starting point. For each image to be equivalent, the blue exposure should be about 1.3 times longer than the red and green.
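As an illustration, the exposure scaling above can be computed directly from the quantum-efficiency figures. This is a minimal sketch; the QE values are the ones quoted for the example chip, and real chips and filters will differ:

```python
# Estimate relative exposure factors from the CCD's quantum efficiency
# at each filter's primary wavelength (values from the example chip).
qe = {"red": 0.86, "green": 0.84, "blue": 0.66}

# Normalize to the most sensitive band: each filter's exposure is
# scaled up by (best QE / this QE) so all channels collect a
# comparable signal.
best = max(qe.values())
factors = {band: best / q for band, q in qe.items()}

base_exposure = 120  # seconds, for the most sensitive band
for band, f in factors.items():
    print(f"{band}: factor {f:.2f} -> {base_exposure * f:.0f} s")
```

Running this reproduces the ~1.3x blue factor mentioned above (0.86 / 0.66 ≈ 1.30), so a 120-second red exposure would pair with a roughly 156-second blue exposure.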
Experimentation can lead to a more accurate exposure calculation. It would be possible, for example, to image a pure white light source through each filter and to measure the brightness using a software function such as the Information Window in MaxIm DL. Exposure factors could then be determined from these results. However, color is fairly subjective and varies from object to object so there is probably no need to be so precise. For more details on this topic, see the True Color Theory page.
If you do not wish to take longer exposures in one color (perhaps because you are limited to a certain unguided exposure time), you can instead take more exposures in that color. Suppose you have taken two 120-second exposures in both red and green and that you need a 1.5x exposure factor for blue. Instead of taking two 180-second blue exposures, you could take three 120-second shots. This method only works, however, if you sum the images: summing increases the total signal, whereas median combining only reduces the noise and leaves the signal unchanged. Since median combining is generally preferable to summing, adjusting the exposure times is the better method.
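The difference between summing and median combining can be seen in a quick simulation. This is a sketch with hypothetical, uncalibrated count values, not real camera data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three 120 s blue frames: each has the same mean signal
# plus Gaussian noise (illustrative numbers only).
signal_per_frame = 1000.0
frames = [signal_per_frame + rng.normal(0, 50, size=(64, 64))
          for _ in range(3)]
stack = np.stack(frames)

# Summing triples the signal (like one 360 s exposure);
# median combining reduces noise but keeps the 120 s signal level.
summed = stack.sum(axis=0)
medianed = np.median(stack, axis=0)

print(summed.mean())    # around 3000 counts
print(medianed.mean())  # around 1000 counts
```

The summed stack behaves like a single longer exposure, which is why taking extra frames can substitute for a longer blue exposure only when the frames are summed.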
LRGB Imaging Techniques
The L in LRGB stands for Luminance. In an LRGB image, a luminance layer is added to a standard RGB image. The advantage of this is that the human eye gets all of its spatial information (detail) from the luminosity of an image. Color does not contribute to image detail. This means that a low-resolution color image can be combined with a high-resolution black-and-white image (the luminance image) to create a high-res final color image.
Above: A high-resolution black-and-white image and a low-res color image of M51, the Whirlpool Galaxy.
Above: By using the high-res black-and-white file as a luminance layer, a high-res final color image is created. The color file essentially "paints" the luminance image.
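The "painting" step can be sketched in a few lines. This is a simplified illustration, assuming float images already aligned and resampled to the same size; `lrgb_combine` is a hypothetical helper, and real processing tools (such as MaxIm DL) typically work in a perceptual color space rather than rescaling raw RGB:

```python
import numpy as np

def lrgb_combine(luminance, rgb):
    """Use a high-res mono frame as the luminance layer of an RGB image.

    luminance: 2-D float array in [0, 1] (the detail)
    rgb:       (H, W, 3) float array in [0, 1] (the color)
    """
    # Per-pixel brightness of the color image (avoid divide-by-zero).
    color_lum = np.maximum(rgb.mean(axis=2, keepdims=True), 1e-6)
    # Rescale each pixel's color so its brightness matches the
    # luminance layer: the color file "paints" the mono image.
    out = rgb / color_lum * luminance[..., None]
    return np.clip(out, 0.0, 1.0)
```

Because only the color *ratios* of the RGB file survive, its resolution and noise matter far less than those of the luminance frame, which is the point made above.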
One advantage of LRGB imaging is that the color exposures can be binned; binning the CCD chip increases its sensitivity, which compensates for the light lost to the color filters. Since the resolution of the color files does not matter in an LRGB image, there is no problem with having low-res color images. In fact, it is even possible to use a different CCD camera or a different telescope for the color files than was used for the luminance image. You could even combine film and CCD images in this way.
At first it might seem possible to use a single exposure in each color to create an LRGB image, since the noise in each color image should not affect the final image. However, dead pixels (dark spots), hot pixels, and cosmic rays (bright spots) in the originals will appear in the final image as multicolored specks and streaks. It is best to take at least three images in each color and median combine them to remove these artifacts from the color files first.
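A small simulation shows why median combining removes these specks. The frames below are hypothetical, with artificial hot pixels injected at different positions in each frame:

```python
import numpy as np

# Three frames of the same field; each has one hot pixel in a
# different location (simulating cosmic-ray hits or hot pixels).
frames = np.full((3, 4, 4), 100.0)
frames[0, 1, 1] = 5000.0
frames[1, 2, 3] = 5000.0
frames[2, 0, 2] = 5000.0

# Any single frame keeps its artifact, but the per-pixel median
# across the stack rejects each hot pixel, since it appears in
# only one of the three frames.
combined = np.median(frames, axis=0)
print(combined.max())  # 100.0 -- every hot pixel rejected
```

With only two frames the median cannot out-vote an artifact, which is why at least three exposures per color are recommended above.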
Just about anything can be used for the luminance image. The traditional method is simply to take multiple exposures through a clear filter. However, the use of narrowband filters is becoming popular for imaging emission nebulae. In this case, a filter such as a hydrogen-alpha filter is used to isolate the primary wavelength of light emitted by the nebula. This enhances the final image which can then be used as a luminance layer in an LRGB image.
A red-filtered image can also be used for nebulae or other objects in which this color needs to be enhanced. Blue-filtered images can be used for luminance layers in galaxy images, since galaxies often emit most of their light in the blue portion of the spectrum.
Another color imaging method which is sometimes used is CMY imaging. In this method, cyan, magenta, and yellow filters are used instead of red, green, and blue. CMY is a subtractive method, in contrast to the additive RGB method. Basically this means that if you subtract the red component from white light you get cyan; subtracting green gives magenta, and subtracting blue leaves yellow. A red filter transmits only about 1/3 of the visible spectrum, as only red light gets through. A cyan filter, however, transmits about 2/3 of the spectrum, because everything but red is coming through.
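Under the idealized relations implied above (C = G + B, M = R + B, Y = R + G), the RGB channels can be recovered from CMY exposures with simple algebra. This is a sketch only; real filters have overlapping passbands and need calibrated color conversion in software:

```python
def cmy_to_rgb(c, m, y):
    """Recover RGB from CMY-filtered exposures, assuming the
    idealized relations C = G+B, M = R+B, Y = R+G."""
    r = (m + y - c) / 2.0
    g = (c + y - m) / 2.0
    b = (c + m - y) / 2.0
    return r, g, b

# A pure red pixel (R=200, G=0, B=0) seen through CMY filters:
# cyan blocks red entirely; magenta and yellow both pass it.
c, m, y = 0.0, 200.0, 200.0
print(cmy_to_rgb(c, m, y))  # (200.0, 0.0, 0.0)
```

This algebra is also why each CMY frame collects roughly twice the light of its RGB counterpart: every CMY channel records two of the three primaries.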
This means that exposures can be half as long with CMY filters as with RGB filters, because CMY filters transmit twice as much light. However, using software to create accurate color images can be more difficult, and CMY filters are harder to come by than RGB, so this method is not often seen. The big advantage of CMY filters is actually for planetary imaging. Transmitting more light allows shorter exposures, effectively freezing the effects of seeing, and lets more images be taken before planetary rotation blurs the result (usually a problem with Jupiter). Of course, with the advent of color webcam imaging, this technique is not as advantageous as it once was.
Similar to LRGB imaging is CMYK imaging, which uses a K (black) layer in much the same way that an L layer is used in an LRGB image.