Color in astronomical images has long been the subject of much debate. The basic ideas behind “true color” imaging are outlined below, and tips are presented for capturing better color CCD images.
Color in Astroimaging
The real problem with color in astronomy comes from the fact that color is so subjective. Everyone has a different sensitivity to color. Often adults at star parties will comment that the Orion Nebula simply looks grey, while children usually clearly see blue or green shades. Many people see bluish-green tones in planetary nebulae, but not everyone does. One of the most frequent comments from first-time stargazers about the view of Saturn through a telescope is that the planet appears white. Experienced observers usually see shades of yellow and tan, but again not everyone detects the color. How can anyone decide what color something is supposed to be in a picture if no one can agree on what color it is visually?
The next problem comes from the fact that what film or a CCD chip “sees” is not what your eye sees. The first reason is simply that a CCD is far more sensitive than your eye and will pick up more light. Also, astrophotos involve long exposures which allow even more light to build up on the film or CCD. This means that there is not necessarily a correlation between an object’s visual appearance and what a CCD image of the object will look like.
The other difference between CCDs and your eye is in the color sensitivity, and this is the heart of the “true color” debate. Your eyes have a different sensitivity to each wavelength of light. The peak sensitivity of the human eye, in daylight, is about 550 nanometers (just about the same wavelength as the Sun’s peak energy output). At night, looking through a telescope with dark-adapted eyes, the peak sensitivity shifts to about 500 nm, in the blue-green part of the spectrum, which is why we so easily see the blue-green color of bright nebulae which emit strongly at a wavelength of 500.7 nm. However, bright emission nebulae, such as the Orion Nebula, primarily give off red light from excited hydrogen atoms at a wavelength of 656.3 nm. During daylight, the human eye’s sensitivity drops to zero at about 690 nm, while at night we have no sensitivity beyond about 620 nm. The hydrogen light at 656.3 nm eludes us entirely at night.
CCD cameras, on the other hand, are often most sensitive in the red part of the spectrum, and often can detect light well out into the near infrared portion of the spectrum, well beyond what the human eye sees. What we see and what a CCD sees are two very different things. But there are ways to get as close to “true color” as we can.
Color imaging, whether with film or CCDs, usually involves a system which tries to replicate the colors seen by the human eye. The obvious example of this is regular daylight film photography. Film manufacturers continually improve film by making it more closely approximate the color we perceive visually. If skin tones are rendered too green or too red, for example, the film is not accurately reproducing the colors as seen by our eyes. CCD imaging involves a similar idea, but with the added complication of not having any standard reference (since we can’t see the colors we’re trying to capture).
The idea behind tri-color imaging is to match the spectral sensitivity of the CCD to that of the eye. There are two basic types of light receptors in the human eye: rods and cones. Rods are not color sensitive and cover the entire light-sensitive area of the eye (the retina). Cones are the color detectors and cover only the central portion of the retina. This means we see color only in our central vision and not peripherally. Also, since the cones are far less sensitive than the rods in low light levels, it means we see astronomical objects best with “averted vision”, looking peripherally rather than directly at an object. It also means we see essentially in black and white, unless an object is bright enough to stimulate the cones (such as a planet or very bright nebula).
Above: Locations of rods and cones in the human eye. Rods see black and white, cones see color.
There are three different types of cones, each sensitive to a different primary color. L-cones are sensitive to red light at a peak wavelength of 564 nm; M-cones detect green light with a peak wavelength of 533 nm; and S-cones see blue light with a peak wavelength of 437 nm. In an ideal situation, a filter set could be made to selectively filter light around each of these wavelengths to the CCD. In other words, light around 564 nm would transmit through a red filter to be detected by the CCD and this image would become the red portion of an RGB color image.
Above: Spectral sensitivity of the three types of cones in the human eye.
The problem with this is that the cones vary in both sensitivity and number. For example, only about 2% of the cones in the human eye are blue-sensitive, and the eye's peak sensitivity lies in the green part of the spectrum. (We are not as insensitive to blue as the low count of blue-sensitive cones might suggest, because the brain's visual processing applies a "signal boost" to the blue channel.) CCD chips, on the other hand, typically have their highest sensitivity in red. This means that filters transmitting equally in each color would tend to yield a picture much redder, and much less green, than our eyes would see it.
The reason CCDs are designed with greater red sensitivity is that most nebulae emit primarily in the red part of the spectrum, so a red-insensitive CCD would not be very effective at capturing some of the most impressive celestial objects. So from the very start, we cannot expect CCDs to accurately replicate human color perception. It would be pointless to image with a camera that mimicked the response of the human eye because we would never be able to capture the spectacular hydrogen clouds that constitute the many star-forming regions of our galaxy.
Above: Typical spectral sensitivity of a CCD chip. Diagram does not factor in filter response, but red filters usually cut off around 700nm.
Above: Relative spectral sensitivity of the daylight-adapted human eye. Note the distinct difference between this and the CCD response curve.
Above: Spectral response of the dark-adapted human eye. Note the lack of red sensitivity.
Usually, the filters used for imaging have about equal transmission in each color. To more accurately reproduce the color balance of the eye, a longer exposure is often taken through the blue filter to compensate for the CCD's higher red sensitivity. Often, a standard white-light source is used as a reference: an image is taken through each filter, and an exposure factor is calculated from how long each exposure would need to be to produce an equal value. For example, if a 10-second exposure through a red filter has twice the value of a 10-second exposure in blue, then a 2x exposure factor is required for an equal value through the blue filter.
Below is an image taken with the CCD camera whose spectral response is plotted in the graphic above (the SBIG ST-10XME). Note that since this camera is most sensitive in the green portion of the spectrum, the image appears too green. Also, the camera is least sensitive in blue, so the dominant blue color of the Whirlpool Galaxy is lacking until the image is properly color balanced.
Above: On the left is an image taken with equal exposures in each color. On the right, the same object with a 1.3x exposure factor in red and a 1.8x exposure factor in blue to compensate for the lower sensitivities of the CCD chip in red and blue.
Compare the above images to the ones below, balanced for the day and night spectral responses of the human eye.
Above: Same image but balanced to match the spectral response of the human eye. On the left is the daylight-adapted human eye response, and on the right is the dark-adapted human eye response. In the daylight-response balanced image, there is little red sensitivity, so the red color, including the many star-forming regions, is missing. The dark-adapted eye has almost no red response and a very high green response, so the color is not especially pleasing, to say the least!
What true color really means, then, is balancing the colors so that the combination of the CCD spectral response and the filter transmission curves yields a balanced response in each color: red, green, and blue. In the end, if an aesthetically pleasing image is the desired result, it really does not matter whether the colors are "accurate" or not. The image does not need to look right; it just needs to look good!
Determining the Proper Color Balance
So, if we cannot use the human eye as a direct reference for color balance, what do we use to determine if the colors in a CCD image are correct? The usual method is to image a true white star and balance accordingly to get a white star output in the final image. Imaging a white star through each color filter and then measuring the brightness of the star in each color will give the color balance factors as a function of CCD spectral sensitivity and filter transmission (as well as the transmission characteristics of the telescope, although that is normally considered a minor factor). The only remaining major factor to take into account is the elevation of the test star above the horizon. Since the atmosphere selectively scatters blue light at low elevations (the reason sunsets are red, for example), an object’s elevation will affect its true color.
Finding a White Star
Stars of spectral class G2V are considered white stars. There is an obvious such star in the sky: the Sun. However, you need a more distant star (called a solar analog star) to image, since you want to use your normal deep-sky imaging setup for this test. Below is a table of some of the most common G2V class test stars, based on information from Al Kelly's excellent website on the color imaging subject, and from information provided by Brian Skiff.
| Right Ascension | Declination | Magnitude | Spectral Type | Name |
| --- | --- | --- | --- | --- |
| 00h 18m 40s | -08° 03′ 04″ | 6.5 | G3 | SAO 128690 |
| 00h 22m 52s | -12° 12′ 34″ | 6.4 | G2.5 | 9 Cet |
| 01h 41m 47s | +42° 36′ 48″ | 5.0 | G1.5 | SAO 37434 |
| 01h 53m 18s | +00° 22′ 25″ | 9.7 | G5 | SAO 110202 |
| 03h 19m 02s | -02° 50′ 36″ | 7.1 | G1.5 | SAO 130415 |
| 04h 26m 40s | +16° 44′ 49″ | 8.1 | G2 | SAO 93936 |
| 06h 24m 44s | -28° 46′ 48″ | 6.4 | G2 | SAO 171711 |
| 08h 54m 18s | -05° 26′ 04″ | 6.0 | G2 | SAO 136389 |
| 10h 01m 01s | +31° 55′ 25″ | 5.4 | G3 | 20 LMi |
| 11h 18m 11s | +31° 31′ 45″ | 4.9 | G2 | Xi UMa |
| 13h 38m 42s | -01° 14′ 14″ | 10.0 | G5 | SAO 139464 |
| 15h 37m 18s | -00° 09′ 50″ | 8.4 | G3 | SAO 121093 |
| 15h 44m 02s | +02° 30′ 54″ | 5.9 | G2.5 | Psi Ser |
| 15h 53m 12s | +13° 11′ 48″ | 6.1 | G1 | 39 Ser |
| 16h 07m 04s | -14° 04′ 16″ | 6.3 | G2 | SAO 159706 |
| 16h 15m 37s | -08° 22′ 10″ | 5.5 | G2 | 18 Sco |
| 19h 41m 49s | +50° 31′ 31″ | 6.0 | G1.5 | 16 Cyg A |
| 19h 41m 52s | +50° 31′ 03″ | 6.2 | G3 | 16 Cyg B |
| 20h 43m 12s | +00° 26′ 15″ | 10.0 | G2 | SAO 126133 |
| 21h 42m 27s | +00° 26′ 20″ | 9.1 | G5 | SAO 127005 |
| 23h 12m 39s | +02° 41′ 10″ | 7.7 | G1 | SAO 128034 |
Taking a Test Exposure
To test the relative color balance of your imaging system, you will need to image a solar analog star and measure the variation in brightness through each filter. This is easily done. Imager Bart Declerq recommends imaging an out-of-focus G-type star, preferably near the zenith. If no star is available that high in the sky, an atmospheric extinction correction factor can be applied using the chart shown in the next section. The star's brightness can be measured with the Information tool in MaxIm DL or a similar function in other image-processing software.
- Take one exposure through each color filter, red, green, and blue.
- Choose an exposure that yields a brightness between 10,000 and 50,000 (bright enough for a good signal but not saturated).
- Use the identical exposure time for each filter.
Measure the average brightness of the out-of-focus star in each image. The values will differ according to the characteristics of the CCD chip and filter set. For example, the measured values of the star might be as follows:
Red Value: 19,000
Green Value: 25,000
Blue Value: 14,000
The color ratios are determined as follows:
Red Correction Factor = 1/(Red Value/Maximum Value)
Green Correction Factor = 1/(Green Value/Maximum Value)
Blue Correction Factor = 1/(Blue Value/Maximum Value)
In the above example, the green value is the maximum value so the correction factors would be:
Red Factor = 1/(19,000/25,000) = 1/0.76 = 1.32
Green Factor = 1/(25,000/25,000) = 1/1 = 1.00
Blue Factor = 1/(14,000/25,000) = 1/0.56 = 1.79
These values yield the 1.3:1.0:1.8 RGB ratio used on the Whirlpool Galaxy example image above. Most cameras have their greatest sensitivity in green or red, so green or red is normally the basis for comparison, but some cameras (notably the popular ST-2000) have higher blue sensitivity and might yield a ratio more like 1.7:1.3:1.0 in RGB.
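The correction-factor arithmetic above is easy to automate. Here is a minimal Python sketch using the measured values from the worked example; substitute your own readings from the Information tool (or equivalent) in your software.

```python
# Compute RGB correction factors from equal-exposure measurements of
# a solar analog star. Values below are from the worked example.

def correction_factors(red, green, blue):
    """Return (R, G, B) correction factors, normalized so the channel
    with the highest measured value gets a factor of 1.0."""
    maximum = max(red, green, blue)
    return (maximum / red, maximum / green, maximum / blue)

r, g, b = correction_factors(red=19000, green=25000, blue=14000)
print(f"{r:.2f} : {g:.2f} : {b:.2f}")  # prints "1.32 : 1.00 : 1.79"
```

The normalization simply makes the most sensitive channel the reference, matching the 1/(value/maximum) form of the formulas above.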
Using the RGB Ratios
The RGB ratios as determined above allow you to adjust the exposure time necessary to get the best color balance possible, or to adjust the weight of each color channel in an image that was taken using standard 1:1:1 RGB ratios.
For example, if the RGB ratio is determined to be 1.3:1.0:1.8, you might use exposure times of 6.5 minutes, 5 minutes, and 9 minutes in red, green, and blue, respectively, to obtain proper color balance. These images would then be combined in the image processing software in a 1:1:1 ratio because they are already properly weighted for variations in the sensitivity of the system. Alternatively, if you already had images taken with equal exposures (say 5 minutes each), you could weight these at a 1.3:1.0:1.8 ratio in the software to compensate for the differences in sensitivity.
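The two options just described can be sketched in a few lines of Python, assuming the example 1.3:1.0:1.8 ratio and a 5-minute base exposure for the reference (green) channel:

```python
# Two ways to use a measured RGB ratio: scale exposure times before
# capture, or weight equal exposures afterward in software.

ratios = {"red": 1.3, "green": 1.0, "blue": 1.8}
base_minutes = 5.0  # exposure for the reference (green) channel

# Option 1: scale the exposure times, then combine at 1:1:1.
exposure_minutes = {color: base_minutes * factor
                    for color, factor in ratios.items()}
# red: 6.5 min, green: 5.0 min, blue: 9.0 min

# Option 2: expose 5 minutes through every filter, then weight the
# channels 1.3:1.0:1.8 when combining in software.
channel_weights = ratios
```

Either route compensates for the same sensitivity differences; scaling the exposures simply moves the correction from processing to acquisition.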
One other consideration is the effect of the atmosphere. The lower the elevation of the object, the greater the effect of atmospheric extinction. Earth’s atmosphere selectively scatters blue light more than red (which is why the sky is blue and why sunsets are red). This has the effect of reddening objects near the horizon (think of a big orange full moonrise). The chart below lists atmospheric extinction factors that can be applied to objects imaged low in the sky to attain proper color balance. The correction factors have been normalized for red = 1, meaning green and blue are added to the image as the target gets lower in the sky. An overall increase in exposure may be necessary to compensate for general light loss at low elevations (see following section). An example follows.
| Elevation | Red Correction | Green Correction | Blue Correction |
| --- | --- | --- | --- |
The atmospheric correction factors would be applied by multiplying the normal RGB factors by the above correction factors. If your normal RGB ratio is 1.3:1.0:1.8 and you imaged an object at an elevation of 30°, the RGB ratio would be multiplied by 1.0:1.08:1.15. The resulting RGB factors would be 1.3:1.08:2.07. The extra green and blue exposure time is necessary to compensate for the selective light loss of the atmosphere at those wavelengths.
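The multiplication above can be sketched directly, using the numbers from the example (a normal 1.3:1.0:1.8 ratio and the 1.00:1.08:1.15 correction for 30° elevation):

```python
# Apply an atmospheric correction factor to a measured RGB ratio.

normal_rgb = (1.3, 1.0, 1.8)
atmos_correction = (1.00, 1.08, 1.15)  # for 30 degrees elevation

corrected = tuple(n * c for n, c in zip(normal_rgb, atmos_correction))
# corrected is (1.3, 1.08, 2.07)

# Renormalizing so green = 1.0 gives the 1.2:1.0:1.9 ratio quoted
# in the figure caption below.
green = corrected[1]
normalized = tuple(round(value / green, 1) for value in corrected)
# normalized is (1.2, 1.0, 1.9)
```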
Above: NGC253 imaged at an elevation of 30° using a 1:1:1 RGB ratio.
Above: The same image balanced with an RGB ratio of 1.3:1.0:1.8 as determined by solar analog star measurements. There is a tad too much red in this image because of atmospheric extinction.
Above: The same image but with a 1.00:1.08:1.15 atmospheric correction factor applied, resulting in a final RGB ratio of 1.2:1.0:1.9 (normalized for green).
In addition to the color shift associated with atmospheric extinction, there is a general loss of light transmission through the atmosphere with decreasing altitude. Atmospheric extinction can be approximated with the following formula:
extinction = sec z
where extinction is the number of atmospheres you are looking through (relative to "1 atmosphere" at the zenith), z is the zenith angle, or angular distance from the zenith (90° minus the altitude), and sec is the secant of the angle z. The secant is simply the reciprocal of the cosine of the angle. An example will help:
Say you are imaging an object which is 60° above the horizon. This is equivalent to a zenith angle of 30°.
extinction = sec 30° = 1.15 atmospheres
Thus, 15% less light is reaching the telescope at an altitude of 60° than at zenith. At 30 degrees altitude you are looking through 2 atmospheres:
extinction = sec 60° = 2.00 atmospheres
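The sec z approximation, including both worked examples above, takes only a few lines of Python:

```python
# Approximate atmospheres of air along the line of sight using the
# plane-parallel sec(z) formula, where z = 90 deg - altitude. This
# breaks down near the horizon (see text).
import math

def airmass(altitude_deg):
    """Return approximate atmospheres traversed at a given altitude."""
    zenith_angle = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(zenith_angle)

print(round(airmass(60), 2))  # prints 1.15 -> 15% light loss vs. zenith
print(round(airmass(30), 2))  # prints 2.0  -> two atmospheres
```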
The formula begins to break down at very low altitudes. The cosine of 90° (the horizon) is zero, so by this formula the secant, and therefore the extinction, is infinite! In reality, you look through about 9 times more air at the horizon than at the zenith. The precise values are site-dependent, so only approximate figures can be given. The table below lists approximate atmospheric extinction values for a given altitude and should be accurate enough for most purposes.
Note that every 2.5 atmospheres of extinction is equivalent to losing one full magnitude of light from your target (a brightness factor of about 2.5 corresponds to one magnitude). It may be tempting to increase exposure time to compensate, and that technique will work in some cases. Unfortunately, the sky background does not fade along with the target at low altitudes, so the limiting factor for faint nebulous detail stays the same. This means you gain little in terms of faint detail by extending the exposure time at low elevations.