Introduction

The purpose of this page is to explain astrophotography to the lay-reader who might have a little photography background but has never really done it. The reason behind this page is that astrophotography is - for the most part - completely different from normal, every-day point-and-shoot picture taking, or even what professional photographers do during the day.

Astrophotography involves extremely sensitive cameras, painstaking calibrations, very long exposures, and careful re-combination and colorization. That process is outlined below, and it is generally what was used to create most of the images on this site's astrophotography pages. Some variations were used, and if they were significant, I will try to make a note of that in the subsequent sections.


The Equipment

Tracking Mount - Earth rotates. That's why the Sun rises and sets. As Earth rotates, the stars appear to move across the sky from East to West. This motion is slow enough to be unnoticeable when you just look at the sky with your eye, but astrophotography involves such long exposure times and small fields of view that it is necessary to take it into account. The cheapest correction is called a clock drive, which uses a system of motors and gears to slowly rotate your telescope at the correct rate. These are generally accurate for about 5 or 10 minutes, meaning that in any exposure longer than that you will start to see the stars turning into arcs. The other kind is active guidance, which is accomplished by having a second camera and telescope feed a live image to a computer; the computer makes sure that a bright star you've told it to watch stays in the center of the field. This is accurate at much longer (effectively unlimited) time scales because the system actively corrects for any inaccuracies in the clock drive.
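If you like to see numbers, here is a rough back-of-the-envelope sketch, in Python, of why tracking matters. The 9-micron pixels and 1000 mm focal length are just made-up example values, not any particular setup of mine:

    import math

    SIDEREAL_RATE = 360.0 * 3600.0 / 86164.1   # sky motion in arcseconds per second (~15.04"/s)

    def drift_arcsec(exposure_s, declination_deg):
        """Apparent drift of a star during an exposure with no tracking at all."""
        return SIDEREAL_RATE * exposure_s * math.cos(math.radians(declination_deg))

    def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
        """Angle of sky covered by one pixel."""
        return 206.265 * pixel_size_um / focal_length_mm

    scale = pixel_scale_arcsec(9.0, 1000.0)    # ~1.9 arcseconds per pixel
    drift = drift_arcsec(30.0, 20.0)           # ~420 arcseconds in 30 s at declination +20 deg
    print(drift / scale)                       # the star trails across hundreds of pixels

A clock drive removes almost all of that motion; what's left over from imperfect gears is what limits you to those 5- or 10-minute unguided exposures.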

Telescope - This is actually unnecessary for wide fields, like my constellations, but most objects in the sky will be way too small to be visible with a normal camera lens. That's why astronomers use telescopes, which really only act like big buckets for light. The first question people usually ask about telescopes is, "What's the magnification?" You can make any telescope go to any magnification you want by just changing the eyepiece, but bigger telescopes will actually let you see objects at higher magnification because they collect more light. For astrophotography, what matters about a telescope (besides your field of view) is how big it is, because the larger the telescope, the shorter the amount of time it will take to get a given amount of light from your object.

As an example, say I have a 5-inch telescope and a 10-inch telescope (where those measurements are the diameter of the primary mirror). The area of the telescope's mirror tells us how much light-gathering power the telescope has - how big the "bucket" is - so the 10-inch telescope will collect 10²/5² = 4 times more light than the 5-inch. This means that if I needed to take a 10-minute image to actually be able to see Pluto with the 5-inch telescope, then I would only need to take a 2.5-minute image with the 10-inch telescope. Since astronomers (well, some) have lives and don't want to be at the telescope all night every night, that is why we buy larger telescopes - so that we can image fainter objects faster.
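If you want that scaling written down, here it is as a one-line Python function, using the same example numbers from above:

    def scaled_exposure(time_small, diameter_small, diameter_large):
        """Exposure needed with the larger telescope to collect the same amount
        of light as the smaller one (ignoring everything else, like the detector)."""
        return time_small * (diameter_small / diameter_large) ** 2

    print(scaled_exposure(10.0, 5.0, 10.0))   # 2.5 minutes with the 10-inch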

Camera and Detector - Way back in the day, astronomers used photographic plates - large glass plates that were sensitive to light. Well, sorta sensitive. Analog film (like 35 mm film or photographic plates) has a pretty low sensitivity to light. In general, about 5-10% of the light that actually hits it gets recorded.

Digital cameras, however, are more sensitive. Consumer cameras these days have a quantum efficiency (how much light it records out of how much light hits it) of about 20-30%. Good digital cameras have a quantum efficiency of around 40-50%. So right away, there is a huge incentive to use digital over film. I won't get into the other aspects here.

The cameras that astronomers use are even more sensitive - they have a quantum efficiency of around 80-95%, depending upon what color of light is being recorded. This is why astronomers are willing to pay thousands of dollars for their cameras, since they're so sensitive to light.
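To put rough numbers on that incentive: all else being equal, the exposure time needed to record a given amount of light scales inversely with the quantum efficiency. A quick sketch using the approximate efficiencies quoted above (the 60-minute film exposure is just an example value):

    def scaled_by_qe(time_reference, qe_reference, qe_new):
        """Exposure needed with a detector of different quantum efficiency to
        record the same number of photons (all else being equal)."""
        return time_reference * qe_reference / qe_new

    # If film at ~7% QE needs a 60-minute exposure for some faint object...
    print(scaled_by_qe(60.0, 0.07, 0.25))   # ~17 minutes with a consumer digital camera
    print(scaled_by_qe(60.0, 0.07, 0.90))   # ~5 minutes with an astronomical CCD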


The Calibrations

Detector Noise - Any electronic detector at a temperature above absolute zero will produce electronic noise. What happens is that the molecules are physically moving around - that's basically what heat is - and they can occasionally release electrons that are then recorded by the detector. Assuming that the noise is completely random, the longer the exposure time, the more noise will be recorded, but the noise should average out to be even across the image.

Temperature - The detectors that astronomers use are cooled to low temperatures. This is to reduce the detector noise: with a lower temperature, the molecules don't move around as much, so they are less likely to spit off electrons.

Detector Imperfections & "Dark Frames" - Digital detectors are not perfect. There are some pixels that are more sensitive than others, some that are always on (hot pixels), some that are always off (dead pixels), etc. This will come through in the image, and it is bad if an astronomer is doing photometry - measuring the light output of a star very precisely. It can be corrected for by basically taking a picture of the detector itself. What we do is take a "dark frame," which is an exposure of the same length as the image of the object, but effectively with the lens cap still on. Several dark frames are taken and averaged so that the random detector noise averages out and all that remains are the imperfections in the detector. This is then subtracted from the image of the object, removing the imperfections.
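In practice this is simple image arithmetic. A minimal sketch with numpy - the file names are hypothetical (real data would usually be FITS files), and the median combine is explained further down this page:

    import numpy as np

    # Each frame is a 2-D array of pixel values; the file names here are hypothetical.
    dark_frames = [np.load(f"dark_{i}.npy") for i in range(5)]

    master_dark = np.median(dark_frames, axis=0)    # combine the darks pixel-by-pixel

    object_frame = np.load("object.npy")            # raw image of the target
    dark_subtracted = object_frame - master_dark    # hot pixels and thermal signal removed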

Lens Imperfections & "Flat Fields" - The optics are also not perfect. First off, there's vignetting, which is when the center of the image is brighter than the edges. Second, there is dust, dirt, and sometimes fingerprints on the camera and lenses. These can be removed from the final image by taking "flat fields," which are images of a perfectly evenly lit flat area (like a big white circle). This is why major observatories have a big white circle hanging inside their domes - so that they can image it to take flat fields. If you don't have a big white circle, then a substitute is to take twilight flats, which are just images of the twilight sky before any astronomical objects can actually be seen. Several flats are taken and averaged (to average out the detector noise). These flats are then divided into the image of your object (after both have been dark-subtracted), and so the imperfections in the optics are accounted for.


Actual Imaging

When to Image - The absolute best time to image an object is when it is on the meridian and there is no moon. You want no moon because the Moon is a very bright object, and it will wash out fainter objects that are nearby - or fainter objects anywhere in the sky if it is a full moon. The meridian is an imaginary line that runs from due South to due North, passing directly overhead. All objects cross the meridian as they move from East to West. The reason you want to image when the object is on (or near) the meridian is that its light will be passing through the minimum amount of Earth's atmosphere at that time. The atmosphere blurs an image because of turbulence, and it also makes an object look redder than it really is (since the atmosphere preferentially scatters blue light, which is why the daytime sky looks blue).
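A handy rule of thumb for "how much atmosphere" is the airmass, which is roughly 1 divided by the sine of the object's altitude above the horizon (1.0 for an object straight overhead). This is only an approximation - it breaks down near the horizon - but a quick sketch shows why you want the object as high as possible:

    import math

    def airmass(altitude_deg):
        """Approximate amount of atmosphere the light passes through, relative
        to looking straight up (a rough rule of thumb, not valid near the horizon)."""
        return 1.0 / math.sin(math.radians(altitude_deg))

    print(airmass(90.0))   # 1.0  - straight overhead
    print(airmass(60.0))   # ~1.15
    print(airmass(30.0))   # 2.0  - twice as much air, so more blurring and reddening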

How Long to Image - The real question should be, "How long can you image?" In other words, the longer you expose on the object, the better your results will be. Among other things, this can be understood by thinking about the detector noise I mentioned above. The longer you image, the "flatter" the detector noise will be. You can think of this like throwing rocks into a bin: if you just throw rocks in for a minute, they will clump in some places and not others; but if you throw rocks in for 10 minutes, then chances are there will be an even distribution of rocks. This is especially important if you are imaging a faint object. Because the object is faint, it will not be much brighter than the recorded noise. So, the longer you image, the more signal you will get from the object, and the more you can see it above the noise.
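You can convince yourself of this with a quick simulation. The numbers below are completely made up - a faint object adding 5 counts per minute, on top of random noise whose scatter grows only as the square root of the exposure time - but the behavior is the point: the longer the exposure, the further the object rises above the noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulated_exposure(minutes, object_rate=5.0, noise_per_min=20.0, n_pixels=100_000):
        """One fake image: every pixel gets random noise, and the faint object
        adds a steady object_rate counts per minute on top of it."""
        noise = rng.normal(0.0, noise_per_min * np.sqrt(minutes), n_pixels)
        return object_rate * minutes + noise

    for minutes in (1, 10, 100):
        image = simulated_exposure(minutes)
        # How far the object's signal sticks up above the pixel-to-pixel scatter:
        print(minutes, image.mean() / image.std())

In this toy example, each factor of 10 in exposure time buys you a bit over a factor of 3 in how far the object stands above the noise (the improvement goes as the square root of the time).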


Initial Arithmetic Processing

Taking Multiple Exposures - An alternative to taking a very long (e.g., 8-hour) single exposure of an object is to take many shorter exposures and average them together. In other words, you could take 8 1-hour images, or 32 15-minute images. You can then add them together and divide by the number of images you took, and come out with essentially the same noise characteristics as you would in a single 8-hour image. Since every pixel is represented by a number, it does not matter that the actual value of each pixel will be much less than in a single long exposure, since we can just scale it up or down. This is often preferable to taking a single long exposure in case something happens over the course of your observing (like a satellite flying through the image), or if your tracking is not 100% accurate (so star trails would start to be recorded).
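Here is that idea in a few lines of numpy - eight hypothetical 1-hour frames averaged together; scaling back up (or not) is just multiplying every pixel by a constant:

    import numpy as np

    # Hypothetical: eight calibrated 1-hour exposures of the same object.
    frames = [np.load(f"object_hour_{i}.npy") for i in range(8)]

    stacked = np.mean(frames, axis=0)          # average of the short exposures
    equivalent = stacked * len(frames)         # scaled back up to "8 hours worth" of counts

    # A frame ruined by a satellite or a tracking glitch can simply be left out
    # of the list before averaging - something you cannot do with one long exposure.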

Dealing with Calibration Images - Let's say you've already taken your dark frames and flat frames. You will want to combine each set into a single image. However, unlike the "average them" I said above, what you actually want to do is median combine them. The median of several numbers is the middle number when they are sorted from lowest to highest. In other words, of the numbers 5, 19, 2, 7, and 4, the median value is 5. This is preferable to taking the average (which would be 7.4) because any outliers (values that are significantly less than or greater than the rest, such as the "19" above) will pull the average but not the median. So if you have some event happen, like a cosmic ray hit, that turns a pixel temporarily into a "hot pixel," it will sway your average but not your median, and since that temporary hot pixel will go away in the next image, you do not want it to end up in your calibration images.
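The numbers from that example show this directly:

    import numpy as np

    values = [5, 19, 2, 7, 4]      # one pixel's value across five calibration frames
    print(np.mean(values))         # 7.4 - dragged upward by the cosmic-ray hit (19)
    print(np.median(values))       # 5.0 - ignores the outlier entirely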

Applying Calibration Images - Before your actual object images are averaged (assuming you did not do one long exposure), you will need to apply the calibration images to each object image. The formula is (object - dark) / ((flat - dark) / <flat - dark>). The brackets <> mean to take the average (though most people use the median). In other words, after you dark-subtract the flat, you divide it by its own median value so that the average value of the flat is 1.0. This is because you're dividing the dark-subtracted object image by the flat, and you don't want the overall brightness of the calibrated object image to change.
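Putting the whole calibration together, here is a sketch of that formula in numpy (hypothetical file names again, and one master dark used for both the flats and the object frames, as in the formula):

    import numpy as np

    master_dark = np.median([np.load(f"dark_{i}.npy") for i in range(5)], axis=0)
    master_flat = np.median([np.load(f"flat_{i}.npy") for i in range(5)], axis=0) - master_dark
    master_flat /= np.median(master_flat)        # normalize so the flat's typical value is 1.0

    calibrated = []
    for i in range(8):                           # each raw exposure of the object
        raw = np.load(f"object_{i}.npy")
        calibrated.append((raw - master_dark) / master_flat)

    final = np.mean(calibrated, axis=0)          # then average the calibrated frames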


Making a Color Image

About Consumer Cameras - Consumer CCD cameras work (for the most part) by having some pixels that record red light, some that record green, and some that record blue. Then, a computer within the camera figures out, from the RGB information in each little cluster of neighboring pixels, what the actual color of the light hitting that spot was.

About Astronomical Cameras - In astronomical CCDs, however, all the pixels are equally sensitive to the same range of color. This means that they only take black-and-white - or "luminosity" - images. In order to get the pretty color pictures that often grace press releases and online galleries (such as this one), different filters - colored pieces of glass or plastic - are placed in front of the camera to only let through light of that filter's color. Usually, three filters (red, green, and blue) are used. What the CCD records is still a black-and-white image, but it is a black-and-white image of only the blue light, or green light, or red light, that got through the filter.

Color Combining - Various programs can be used to take the red, green, and blue (or whatever other colors the object was imaged in) frames and combine them into a color image. My program-of-choice is Adobe's Photoshop. There are three main ways to do it. The first is to paste each of the red, green, and blue images into the respective red, green, and blue channels. The second and third both start by pasting the red, green, and blue (or other color) images into different layers, setting the layer mode to "Screen" in all but the bottom layer, and then colorizing them. One way to colorize is to place a Hue/Saturation adjustment layer over each color's image and colorize it. Another is to place a gradient overlay adjustment layer over each image, setting one end of the gradient to black and the other end to the color. I've been purposely vague here because teaching the mechanics of color combination is not the goal of this page.
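For what it's worth, the first method (pasting into channels) has a very direct equivalent outside of Photoshop. Here is a numpy sketch that simply stacks three calibrated, black-and-white filter images into the red, green, and blue channels of a color image - the file names are hypothetical, and real images would first need their brightness scaled to taste:

    import numpy as np

    red = np.load("m42_red.npy")      # black-and-white frame taken through the red filter
    green = np.load("m42_green.npy")  # ...through the green filter
    blue = np.load("m42_blue.npy")    # ...through the blue filter

    # Stack into one height x width x 3 array - i.e., a color image.
    color = np.dstack([red, green, blue])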