Processing of CCD images, T.A.H.M. Scholten

This article describes the methods used by the author for processing CCD images. These methods are limited to relatively simple algorithms that can be programmed easily in Pascal or C++. They consist of filtering, subtracting, adding and dividing images. Iterative Fourier transform techniques are not easily implemented and are therefore not considered. The unsharp masking technique is illustrated with a few examples. Results can be found on my homepage.

SOME DEFINITIONS

  1. CCD-camera:
    Camera with a CCD-sensor chip consisting of a matrix (512x512, 1024x1024 or other sizes) of small light-sensitive elements (8 to 20 micrometres square).
  2. Pixel:
    Picture element of a CCD-chip; also picture element of the obtained image.
  3. Dark field:
    Image obtained with the unexposed CCD-camera, caused by a thermally generated signal (dark current). The intensity of the dark field varies exponentially with the temperature; cooling the CCD-chip is thus an effective way to lower this unwanted contribution. The intensity of the dark field increases linearly with the exposure time.
  4. Flat field:
    Image obtained with the CCD-camera mounted on the telescope or fitted with its lens, when exposed to a uniformly illuminated sky (i.e. when clouded or during twilight). The flat field reveals local differences in sensitivity of the optical system consisting of the telescope/lens and the CCD-chip.
  5. Blooming:
    Over-exposure of CCD pixels (e.g. in case of a bright star) causes signal to spill to neighbouring, non-exposed elements resulting in a local bright blob or bright line in the image.
  6. Digitizing:
    The conversion of the analog intensity information (electrical charges) from the CCD pixels to a digital number that can be handled by the computer. A minimum of 8 bits (i.e. the values 0..255), but most often 12 bits (0..4095) or even 16 bits (0..65535) are used.
  7. Grey value:
    The digitized value of the intensity of a CCD-pixel.
  8. Convolution:
    Processing of the image by replacing the grey value of each pixel with a new value which is determined by adding and/or subtracting the grey values of neighbouring pixels in a fixed pattern. This pattern is the so-called convolution matrix. As an example:
    1 1 1
    1 5 1
    1 1 1
    With this convolution pattern the grey value of the central pixel is replaced by the sum of the grey values of all surrounding pixels and of five times the grey value of the central pixel itself. This matrix performs a smoothing, i.e. noise filtering of the image.

IMAGE ACQUISITION

Although image processing may significantly improve the visual quality of the image, it remains essential that the original image is of the highest possible quality. This quality concerns three aspects: contrast, sharpness and noise.

  1. Contrast:
    The first thing to consider is the use of a colour filter to obtain maximum contrast in the details of the object or to suppress the faint glow of the night sky (especially in light-polluted areas). Emission nebulae (H-alpha regions and planetary nebulae) and planets (mainly Mars and Jupiter) gain from using such filters. When imaging galaxies and star clusters little can be gained, due to the broad spectral emission of these objects.
  2. Focusing:
    Focusing of the scope is done (by the author) by taking CCD images at six different (reproducible) positions of the focuser. These images (or enlarged parts of them) are presented as a 3x2 array on one screen and interpreted visually. In prime focus, with stars in the image, unprocessed images will do. When using eyepiece projection (planets and moon) the changing seeing, besides focusing, affects image sharpness. In that case sharpness is judged by taking six images for each focusing step. Before presenting the images on screen, they are processed with an unsharp mask. The focusing position yielding the most images with the highest sharpness is then selected. With strong changes of the outside temperature it may become necessary to check the image sharpness after some time, since the length of the telescope tube may change and thus shift the focus position.
  3. Image noise:
    Noise in the image is caused by two sources: the dark field of the CCD-chip, which depends on the temperature of the chip, and the glow of the night sky, which depends on the transparency of the air and the degree of light pollution at the observation site. The dark signal and its noise can be strongly reduced by cooling the CCD-chip: every 7 degrees Celsius of cooling gains a factor 2. The effect of light pollution is something we cannot control (except in some situations by using filters) and results in an offset (i.e. a minimum grey value) of the image. The magnitude of the noise is in general equal to the square root of the signal strength. Thus, the lower the dark signal and the darker the night sky, the lower the noise.
  4. Windows:
    Especially in the case of eyepiece projection and Barlow projection (and thus photography of planets and details on the moon's surface) it is of great importance that the window sealing the camera and the window sealing the CCD-chip are free of scratches and dust particles: due to the large f/D-ratio these blemishes are imaged sharply on the chip and show up clearly when using unsharp masking techniques.


PRE-PROCESSING OF IMAGES

The resulting image contains three components: the dark field, the background caused by the (light-polluted) sky and the ideal image of the object. All these components contribute to the total noise of the image. In a pre-processing step the first two components are removed from the image in the following way:

  1. Subtraction of the darkfield:
    A first step of processing the image is usually done during image acquisition: the dark field image (obtained by taking an image with the camera shutter closed) is subtracted from the acquired image. The resulting image thus contains the object, superimposed on the glow of the night sky and a noise component resulting from the dark field. This noise is not compensated when subtracting the dark field because it is random; the lower the dark field signal, the lower this noise component. It is advisable to take a new dark field regularly, to avoid the noise present in a single dark field image showing up as a fixed pattern when images are added later on. Furthermore, for good compensation it is essential that the exposure time of the dark field is exactly the same as that of the images.
  2. Correction for inhomogeneous illumination or sensitivity:
    Before subtracting the contribution of the night sky it is essential to correct the image for non-uniform sensitivity across the CCD, caused by inhomogeneous illumination (vignetting) or sensitivity differences between pixels. This so-called flat field is obtained by taking a (dark-field-corrected) image of the sky during twilight or when clouded, using the same optical set-up (i.e. telescope/lens, filter and CCD-camera). The image will show a slow variation in grey value revealing sensitivity variations within the image. By dividing each image by the flat field image they are corrected for these sensitivity variations.
    To avoid the introduction of noise during this process it is essential to construct the flat field image using multiple exposures (corrected with the dark field) and smooth the result using a noise filter.
  3. Selection of best images:
    Often a large set of relatively short exposures is acquired, in order to prevent tracking problems from showing up (especially with home-built equipment!) or to avoid blooming when bright stars are within the image. In prime focus the exposure time (I use) may be several minutes. With eyepiece projection tracking is usually not the problem and seeing limits the image sharpness. The best images, showing no tracking errors or the best planetary detail, are selected for further processing.
  4. Adding images:
    By adding several images the noise in the resulting image will be reduced. However, from one image to the other slight shifts in the position may be present due to (slow) tracking variations. Before adding images this shift should be corrected. (It can be an advantage when the images are slightly shifted with respect to each other in order to avoid a 'fixed pattern' from showing up in the added result.)
    To fit the images optimally, one image is taken as a reference and the other images are (one by one) shifted in the x,y direction and subtracted from the reference. The best fit is obtained at the x,y shift for which the difference between reference and image contains noise only. After determining the x,y shifts, all images are added with their appropriate shifts. The total image intensity increases linearly with the number of images N, while the total noise increases only with the square root of N. Thus, effectively, the signal-to-noise ratio improves by the square root of N.
  5. Noise correction:
    Noise can be reduced by filtering the image using a convolution filter such as: (with N>=1 to select the weight of the central pixel)
    1 1 1
    1 N 1
    1 1 1
    Unfortunately, this type of filter also affects image sharpness. Therefore I use a (non-linear) method in which each pixel is compared with the eight surrounding pixels. If the grey value of the pixel lies within the maximum and minimum values of the eight neighbours, its value is not corrected. Its value is also not corrected if its grey value is more than a certain selectable value (the threshold) smaller than the minimum or larger than the maximum of the surrounding values, and thus may be regarded as significant. In all other cases (i.e. the value deviates from the extreme values, but not significantly) the grey value is replaced by the mean value of the surrounding pixels. The threshold is selected such that the corrected pixels are randomly distributed over the image.
  6. Subtracting the sky-background:
    The contribution of the sky is now removed by subtracting a constant value from the image, yielding a black background. To further remove noise from the nearly black background, I use another non-linear noise filter: around each pixel the number of black pixels (i.e. grey value = 0) is counted. If this number, including the central pixel, is 4 or larger (i.e. 4..9), the grey value of the central pixel is set to zero. If the number is 2 or 3, the grey value of the central pixel is set to the mean value of the surrounding pixels. In the other cases (i.e. the number of zeros is 0 or 1) the grey value remains unaffected.


IMPROVING IMAGE SHARPNESS

Using the pre-processing steps described above we obtain an image that, apart from having a somewhat higher noise level, is equal to an image obtained in the total absence of stray light and dark current.
This image can now be processed in order to increase the visibility of details present in the image. In the original image, these details may have a low contrast or are slightly blurred due to poor seeing or focus errors.
To start with, we have to distinguish between three different techniques that are all aimed at increasing the image sharpness:

  1. Image restoration (e.g. MEM: Maximum Entropy Method):
    This technique restores a blurred and noisy image by calculating the image that, given the noise of the CCD-camera and the blurring characteristics due to the optical limitations of the scope, is the most probable solution (i.e. has the highest entropy). MEM can significantly improve image sharpness, using an iterative process with several (time-consuming) Fourier transforms and the known blur function of the optical system.
  2. Image edge sharpening:
    This technique increases the sharpness of edges using convolution filters of limited size (usually 3x3 or 5x5; see figure 1). It is especially useful for globular clusters and images of the moon, for which the edge structures (the stars in the cluster and the craters on the moon) extend over a few pixels. It is essential that the original image is nearly noise-free, since noise is 'sharpened' as well and thus increases. As an example, figure 2 presents an original image of the moon and the effect of different convolution filters on this image.
  3. Unsharp masking:
    This technique enhances small structures in the image by removing or weakening the more extended intensity variations in the image.

Figure 2: Effect of 'image edge sharpening' using convolution filters. Upper-left: the original image; the other panels show the original processed with the given convolution filters.


UNSHARP MASKING

Refer to figure 3 for the following discussion of the unsharp masking technique. It shows images of Saturn during different processing steps. The graph gives the intensity profile of the horizontal line through the middle of the rings.
For simplicity we denote the original image by ORG (fig. 3a). ORG consists of a superposition (i.e. a sum) of an image with a broad, low-spatial-frequency structure, denoted by LF (fig. 3b), and a much weaker image with fine detail and a high spatial frequency, denoted by HF (fig. 3c, stretched in intensity). Thus: ORG=LF+HF. The unsharp masking technique is aimed at increasing the visibility of the fine structures, and thus of the HF contribution, which are obscured by the large intensity variations (amplitude) of the LF contribution.

Figure 3:
Unsharp masking visualized using an original image ORG of Saturn (3A). The superimposed graph shows the intensity profile over the horizontal line through the centre of Saturn.
3B is the LF-image obtained by convolution of ORG with an 11x11 Gaussian convolution filter.
3C is the HF-image obtained by simple arithmetic: HF=ORG-LF+offset.
3D and 3E present two possible sharpening algorithms: ORG+5*HF and ORG*HF.

Three aspects are important for the final result of the unsharp masking action:
- the width of the filter used to obtain the LF-image;
- the sharpening algorithm used and the amplification factor applied to the HF-structure;
- the presence of artefacts and noise.

  1. Generating the LF mask:
    When the LF-image is available we can obtain the detailed HF-structure by subtracting LF from the original image ORG. After this, we can change the ratio LF/HF at will.
    The LF-image is obtained by filtering the ORG-image: the grey value of each pixel is exchanged for the mean value of the neighbouring pixels up to a certain distance. If we simply calculate the mean value by adding the values of the surrounding pixels and dividing the result by the total number of pixels, we are applying a uniform filter (fig. 1a): all pixels have equal weight. Better results are obtained if the weight of the pixels decreases with their distance from the central pixel. Usually a Gaussian (bell-shaped) pattern is used as a weighting function (fig. 1b).
    Important when generating LF is the width B (in pixels) of the filter: if B is small, the smallest details, including noise, are enhanced, and the effect equals that of the edge sharpening method. By increasing B, ever larger structures (with dimensions comparable to B) are enhanced. With B too large, the effectiveness of the unsharp masking technique decreases.
  2. Unsharp masking algorithms:
    Depending on the details that are to be enhanced and the required end-result (denoted by NEW) the following processing algorithms are possible:
    1. NEW=HF+offset: The LF-structure is fully suppressed and only the fine detail of the HF-structure remains. Since HF=ORG-LF this may result in negative values, so a certain offset is added to the resulting image to obtain positive values (fig. 3c).
    2. NEW=LF+n*HF (or written in a more common way: ORG*n+LF*(1-n) ): The LF-structure remains and the HF-structure is enhanced by a factor n>1. This is the most common sharpening technique in which n determines the magnitude of the sharpening action. (fig. 3d).
    3. NEW=ORG/LF(=1+HF/LF): This algorithm resembles the photographic unsharp masking technique. The LF-component is suppressed and reduced to a constant grey value, while the HF-component is modulated with the LF-value: where LF is low, the HF-component is enhanced extra strongly. Because of this, the gain should be limited to avoid enhancing the noise in the weaker parts of the image.
    4. NEW=ORG*HF: This operation results in a non-linear (quadratic) enhancement of the HF-structures. This method is very useful to increase detail on low-contrast images of planets. (fig. 3e).
  3. Noise and distortions:
    When using unsharp masking algorithms, non-existent image features may be introduced and the noise level may increase significantly. These unwanted effects limit the amount of sharpening that can be applied.
    Intense stars and the nuclei of galaxies result in broad LF-features which, after applying the sharpening algorithm, become visible as dark or even black rings around stars and nuclei. This effect can be reduced by generating the LF-image from the original image after removing the most intense stars from it. This can be done by applying a 'minimum' filter, i.e. a convolution-like operation in which the central pixel is replaced by the minimum value in a 3x3, 5x5 or even bigger area around the central pixel.
    The noise level is determined by the thermally induced dark current and the background level of the night sky. The first effect can be reduced by sufficiently cooling the CCD-chip. The effect of light pollution is not controllable and results in a noise contribution that remains after this background is removed by subtracting a constant. However, by stacking and adding a large number N of images the noise can be reduced by a factor sqrt(N).

For the best result of the unsharp masking technique one should experiment with the parameters mentioned above (B, n, offset, method, noise, etc.). Comparison with images obtained by other amateurs or even professional observatories may serve as a guide when enhancing details up to an acceptable level.
