
Image Restoration

In this section we will describe:


- A general overview of image restoration
- Algorithms and tasks in the STSDAS restore package
- How to prepare images for restoration
- How to find and use PSF images

Overview of Image Restoration


Image restoration (or deconvolution) techniques are often needed to compensate for the error in HST's primary mirror. Deconvolution methods will likely continue to be important in the analysis of HST data after the installation of WF/PC-2 and COSTAR. Image restoration can remove diffraction spikes, reduce confusion, improve dynamic range, and provide modest gains in resolution. In some cases, significant super-resolution might be attained. All restoration methods are likely to produce better results given higher S/N data. There is no magic number that can be given, however, to say what S/N ratio is adequate. The quality of a restoration depends not only on the S/N ratio, but also on the nature of the object (extended or point-like), the background (strong DC or complicated background structure present vs. empty background), and the quality of the point spread function.

Algorithms and Methods in STSDAS


Several restoration algorithms are provided in the STSDAS restore package. These algorithms, as implemented in STSDAS, are described in some detail in Chapter 12 of the HST Data Handbook. Summary descriptions of the following algorithms are provided in the next subsections:

- Richardson-Lucy algorithm: the lucy task
- Maximum Entropy Method: the mem task
- s-CLEAN algorithm: the sclean task
- Wiener filter and Fourier inverse filter: the wiener task
- Adaptive filters: the hfilter and adaptive tasks

All of these tasks are described in detail in the on-line help for each task.

Richardson-Lucy Algorithm
The STSDAS lucy task generates a deconvolved image from an input image and point spread function (PSF). The output image will be the same size as the input image unless an optional larger output size is specified. The PSF image must contain the same number of dimensions as the input image, but the actual dimensions do not need to match. If a background (sky) image or mask image is given, its dimensions must match those of the input image. If a model image is specified, its dimensions must match the output image size. When the optional background image is specified, the lucy task will require six real and two complex arrays; i.e., a 512x512 image will need 10 MB of disk space.

One common question about the R-L method is when to stop the iterations. The usual rule of thumb is to run the R-L iterations until the χ² per degree of freedom (χ² divided by the number of pixels) reaches 1.0. This provides only a global measure of the quality of the fit, however, since the rate of convergence varies for objects of different brightness and different spatial scales. For stellar fields with little or no background, the R-L iterations can (and should) be run a long time--many hundreds or even thousands of iterations are appropriate. It would even be possible to run the iterations to true convergence, i.e., until the differences between successive image estimates are smaller than the numerical accuracy of the computation.
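The core R-L update can be sketched in a few lines of NumPy. This is an illustrative implementation, not the STSDAS lucy task: it assumes the PSF array has the same shape as the image with its peak at the center, uses periodic (FFT) boundaries, and starts from a flat image at the mean data level.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def richardson_lucy(image, psf, n_iter=100):
    """Basic Richardson-Lucy iteration (illustrative sketch, not the STSDAS task).

    Assumes psf has the same shape as image, with its peak at the array center.
    """
    psf = psf / psf.sum()                           # PSF must be normalized
    otf = fft2(np.fft.ifftshift(psf))               # optical transfer function
    estimate = np.full(image.shape, image.mean())   # flat starting estimate
    for _ in range(n_iter):
        blurred = np.real(ifft2(fft2(estimate) * otf))          # current model, convolved
        ratio = image / np.maximum(blurred, 1e-12)              # data / model
        correction = np.real(ifft2(fft2(ratio) * np.conj(otf))) # correlate with PSF
        estimate *= correction                                  # multiplicative update
    return estimate
```

Because the update is multiplicative against a normalized PSF, each iteration conserves the total flux of the data, which is one reason the method is photometrically well-behaved.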

Maximum Entropy Method


The Maximum Entropy Method (MEM) is very versatile, and can be given constraints on the solution that make it as powerful as, if not more powerful than, the R-L method. Owing to the flexibility of the method, MEM is somewhat more difficult to use and is generally more computationally intensive than R-L. Many people may be familiar with the MEMSYS 5 package from Maximum Entropy Data Consultants, Ltd., and the front-end program MEM developed by Nick Weir at Caltech. STSDAS now includes a non-commercial MEM implementation in the mem task, written by Nailong Wu, which gives comparable results to MEMSYS 5 and has a computation time similar to the R-L method. The user interface has also been simplified, with inputs very similar to the other restoration tasks in STSDAS. The task generally takes as input three images:

- The degraded HST image
- A point spread function image
- A model image

Two input parameters are specified: noise and adu. The output image will be the final converged MEM image, or an interim result that you can use as a model for the next iteration. Generally speaking, R-L is more robust than MEM--that is, even though in some cases MEM methods may give better results, the R-L method is more stable. In some cases MEM will not converge to a solution. MEM tends to give the best spatial resolution but somewhat less reliable photometry (although recent tests comparing the STSDAS mem and lucy tasks show very comparable photometric linearity). MEM implementations are often more computationally intensive.

s-CLEAN
In the standard CLEAN algorithm the image is searched for the brightest pixel, and a fraction (the loop gain) of the source is subtracted at this location by scaling the PSF accordingly. This process is repeated until the noise level of the background is reached. At the same time, a list is kept of each source removed from the data. At the end of the process, these sources are added back to form a clean map using a modified PSF, such as a narrow Gaussian profile. Keel's modification to CLEAN is to employ a noise model for the image so that, rather than simply locating the brightest pixel on each iteration, one finds the most significant pixel (taking into account read-out noise, sky background, and Poisson noise). Our experience is that the s-CLEAN method (the sclean task) requires substantial computing time and often does not have photometric linearity as good as the R-L method, most likely because iterations are stopped prematurely.
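The classical CLEAN loop described above can be sketched as follows. This is an illustrative toy version, not the sclean task (which adds Keel's noise model): it assumes a square PSF array with odd dimensions and its peak at the center, and records component fluxes in peak-amplitude units.

```python
import numpy as np

def clean(dirty, psf, gain=0.1, threshold=0.0, max_iter=1000):
    """Minimal classical CLEAN loop (illustrative sketch, not the sclean task)."""
    residual = dirty.astype(float).copy()
    psf = psf / psf.max()            # normalize PSF peak to 1
    half = psf.shape[0] // 2         # assume square, odd-sized, centered PSF
    components = []                  # list of (y, x, flux) removed from the data
    for _ in range(max_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak <= threshold:        # stop at the background noise level
            break
        flux = gain * peak           # remove only a fraction (the loop gain)
        # subtract the scaled PSF, clipped to the image boundaries
        y0, y1 = max(0, y - half), min(residual.shape[0], y + half + 1)
        x0, x1 = max(0, x - half), min(residual.shape[1], x + half + 1)
        py0, px0 = y0 - (y - half), x0 - (x - half)
        residual[y0:y1, x0:x1] -= flux * psf[py0:py0 + (y1 - y0), px0:px0 + (x1 - x0)]
        components.append((y, x, flux))
    return components, residual
```

The clean map would then be formed by convolving the component list with a narrow Gaussian and adding back the residual.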

Wiener Filter
The wiener task applies a Fourier non-iterative deconvolution filter to two-dimensional images. The filter includes the familiar inverse, Wiener, geometric mean, and parametric forms. A general description of the method is provided on-line by typing help wiener opt=sys.
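The parametric form of the filter has a compact closed form in the Fourier domain. The sketch below is not the wiener task itself: it assumes a single constant noise-to-signal power ratio k and a PSF array matching the image shape with its peak at the center.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(image, psf, k=0.001):
    """Parametric Wiener filter: W = H* / (|H|^2 + k).

    k -> 0 recovers the (unregularized) Fourier inverse filter.
    Assumes psf has the same shape as image, peak at the array center.
    """
    otf = fft2(np.fft.ifftshift(psf / psf.sum()))   # transfer function H
    filt = np.conj(otf) / (np.abs(otf)**2 + k)      # regularized inverse
    return np.real(ifft2(fft2(image) * filt))
```

The single pass (no iterations) is what makes the method fast, at the cost of the noise-dependent regularization choice k.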

Adaptive Filters
The hfilter task is a simple adaptive low-pass noise filter based on the H-transform. The H-transform is a form of image coding based on the Haar transform, which keeps a tight one-to-one relationship between pixels in the transform space and data pixels in the image space. Using the H-transform, it is possible to design filters that adapt themselves to the local signal-to-noise and spectral properties of the data. The adaptive task reduces the noise in parts of the image without reducing resolution of sharp features. The local S/N ratio as a function of decreasing resolution is evaluated as follows: at any given point, mean gradients and curvatures over different scale lengths are compared to expected noise values. The order for which the S/N ratio exceeds a specified level indicates the local resolution scale length of the signal, which determines the size of the filter applied at that point.

Image Preparation
You must prepare your images to get the best possible restoration results. Some techniques will require you to repair bad pixels (such as cosmic ray hits, data dropouts, etc.), while others will allow you to apply a mask or a weighting array. The following problems and solutions are described in this section:

- Cosmic ray removal and bad pixel masking
- Saturated pixel removal or masking
- Image boundary or edge considerations
- Combining images of different resolutions

Cosmic Ray Removal and Bad Pixel Masking

When exposures are split into several shorter exposures it becomes much easier to identify cosmic rays and other data defects, which do not occur in the same pixel on each exposure. Given the undersampling problems of the WF/PC, cosmic ray identification is a significant problem; faint stars can be difficult to distinguish from low energy cosmic ray hits. If hard cosmic ray hits are left in the data, the restored image will usually end up with a PSF-like artifact introduced at the location of the cosmic ray. Thus, it is important to first combine CR-SPLIT images using a task such as wfpc.combine to remove the cosmic rays. If only a single data frame is available, try using the cosmicrays task in the noao.imred.ccdred package. This task can repair the cosmic ray hits and also create an output list of the affected pixels, which can be converted to a mask for use in the lucy task.

The best approach is to generate a mask for each image that flags the cosmic rays so that they are ignored by the restoration algorithm. The lucy task accepts such a mask and will give flagged pixels zero weight. If the algorithm does not accept input masks, replace each bad pixel with an interpolated value. This gives the pixel a statistical significance that is not warranted, but it cannot be avoided for simple algorithms such as the Wiener filter.

If CR-SPLIT images are not registered, the cosmic ray removal routines will not perform well. Moreover, it is dangerous to resample the data in order to align the images, given the undersampling of the WF/PC--a single bad pixel in the image can end up affecting a number of adjacent pixels after rebinning, and the accuracy of rebinning is poor for under-sampled data in general. Registration within 0.2 pixel is generally acceptable.
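The idea of flagging pixels that are positive outliers against the other split exposures can be sketched as follows. This is an illustrative outline, not the wfpc.combine algorithm; the read-noise and gain values, the 5-sigma threshold, and the min-image baseline are all assumptions for the example.

```python
import numpy as np

def crsplit_mask(frames, sigma=5.0, readnoise=5.0, gain=7.0):
    """Flag cosmic rays in registered CR-SPLIT frames (illustrative sketch).

    frames: list of 2-D arrays in DN; readnoise in electrons, gain in e-/DN
    (assumed example values, not instrument calibration numbers).
    """
    stack = np.asarray(frames, dtype=float)
    baseline = np.min(stack, axis=0)            # CR hits are positive outliers
    # expected per-pixel noise in DN at the baseline signal level
    noise = np.sqrt(readnoise**2 + gain * np.maximum(baseline, 0.0)) / gain
    masks = stack > baseline + sigma * noise    # True where a frame is hit
    # average the frames, substituting the baseline at flagged pixels
    combined = np.where(masks, baseline, stack).mean(axis=0)
    return combined, masks
```

The per-frame masks can then be converted to the zero-weight mask format that the lucy task accepts.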

Saturated Pixels
Saturated pixels can be handled in several ways. If you have a shorter exposure of the same field at the same pointing (i.e., properly registered), you can replace the saturated pixels by scaling the shorter exposure to the longer exposure time. The statistics of the patched-in data points will not be the same, of course, and a new version of the STSDAS lucy task will allow the user to specify a weighting image to adjust for the different exposures.

Another approach is to mask out the saturated pixels and treat them as totally unknown. The lucy task in STSDAS will fill in the saturated pixels as best it can, given that the wings of the PSF appear in the data and are not saturated. The restoration will have some difficulty restoring the full intensity of the saturated object; a third approach is therefore to use the result from, say, 100 R-L iterations as an input model, and restart the restoration ignoring the mask used initially. We have done some experiments along these lines but have not completed this work.

Yet another approach is to estimate or fit a PSF model to the saturated region, using the wings of the PSF to determine the fit. Subtract the fit from the data, then restore the residual using your fit as a sky model (see the input parameters and help file for the lucy task). Note that this approach requires a very good model for the PSF.
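The first approach, patching from a registered short exposure, is simple to sketch. The saturation level below is an assumed example value, not an instrument constant.

```python
import numpy as np

def patch_saturated(long_img, short_img, t_long, t_short, sat_level=4095.0):
    """Replace saturated pixels in a long exposure with scaled pixels from a
    registered short exposure (illustrative sketch; sat_level in DN is assumed).
    """
    saturated = long_img >= sat_level
    patched = long_img.astype(float).copy()
    # scale the short exposure to the long exposure time
    patched[saturated] = short_img[saturated] * (t_long / t_short)
    return patched, saturated
```

The returned boolean array is exactly the sort of mask one would hand to a weighting scheme, since the patched pixels carry the (noisier) statistics of the short exposure.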

Boundary Extension

Since restoration algorithms typically use Fast Fourier Transforms (FFTs), it is assumed that the image is a periodic or cyclical function. If there is a discontinuity from one edge of the image to the opposite edge, ringing (a pattern of high frequency ripples) will be introduced into the restored image. In order to avoid these problems, the image to be restored must be embedded in a larger image and the data must be extended in such a way as to make the image periodic with a smooth transition from one edge to the next (i.e., continuous first derivative). A task is under development in STSDAS (called edge) which will extend the image boundaries appropriately. The STSDAS implementation of the R-L algorithm handles image edges slightly differently. The restored image can be chosen to be larger than the original data, and the padding region is simply given zero weight in the iteration. The restored image is allowed to have flux in the padding region, but the flux is based only on the known pixel values (e.g., a partial PSF pattern from a star just outside the image boundary).
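One simple way to build such a periodic extension is to mirror-reflect the data into the padding region and taper it smoothly toward the image mean, so that opposite edges of the enlarged array meet at the same value. This is only a sketch of the idea, not the edge task; the cosine taper toward the mean is an assumed choice.

```python
import numpy as np

def extend_edges(image, pad):
    """Embed an image in a larger array whose periodic extension is smooth
    (illustrative sketch of boundary extension, not the STSDAS edge task)."""
    ext = np.pad(image, pad, mode="reflect")      # mirror data into the border
    ny, nx = ext.shape
    # cosine ramp: 0 at the outer edge, 1 at the inner edge of the padding
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(pad) / pad))
    wy = np.ones(ny); wx = np.ones(nx)
    wy[:pad] = ramp; wy[-pad:] = ramp[::-1]
    wx[:pad] = ramp; wx[-pad:] = ramp[::-1]
    w = np.outer(wy, wx)
    # blend the reflected border toward the image mean, so all four outer
    # edges share the same value and the FFT sees no wrap-around discontinuity
    return w * ext + (1 - w) * image.mean()
```

The original data occupy the central region unchanged; only the padding is synthetic and should be given zero weight (or discarded) after restoration.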

Combining Images of Different Resolution


Images of varying resolution can be combined into a single image using an algorithm developed by Leon Lucy and Richard Hook of the ST-ECF. The task acoadd in the STSDAS contrib package is their implementation of this method.

PSF Preparation
You can use either an observed or a model PSF. Model PSFs are produced by the Telescope Instrument Modelling (TIM) and TinyTIM software. TIM and TinyTIM model PSFs are reasonably good in the optical part of the spectrum, but disagree with observed PSFs more and more as you go further into the UV. If possible, you should try using both an observed and a model PSF and compare the results, looking carefully for artifacts (especially features that resemble the PSF structure) in the restored images.

The best source for an observed PSF is the same image, if a PSF star lies not too far from the object you want to restore (i.e., for WF/PC, within ~100-200 pixels). A PSF star near the edge of the image will not be suitable for restoring an object at the center of the image, since vignetting causes serious distortions of the PSF at the field edge. Other observed PSFs are available on STEIS in the WF/PC and FOC PSF Catalogs, which are described in files available via FTP, gopher, or WWW on the machine stsci.edu (see the directories instrument_news/wfpc/wfpc_psf_lib_07_08_93 and instrument_news/foc/psf_files/psf_info.ps). The FOC PSFs are actually on-line on stsci.edu, and the WF/PC PSFs are stored in the HST archive. You may also need to search the HST archive to find exposures of stars in the same filter and position in the field (for WF/PC); unfortunately, there are no keys in the Archive database identifying fields with good PSF stars.

When using an observed PSF, it is important that it be well-exposed: a noisy PSF will add noise to your restored image. An observed PSF must also be cleaned of cosmic rays and other image flaws and have any background offsets removed. It is advisable to apodize a PSF that has been cut out from a larger image. This means that you multiply the PSF subimage by a function, such as a truncated Gaussian, which smoothly tapers the pixel values to zero at the edge but does not alter the profile of the PSF itself. Tools are being developed within STSDAS to automate the process of PSF extraction.

Either TIM or TinyTIM may be used to generate model PSFs; TinyTIM is easier to use. TinyTIM is available on all STScI science computer systems, and via anonymous FTP on stsci.edu (software/tinytim). The Image Restoration Project has contracted with David Redding at the Jet Propulsion Laboratory to develop a more sophisticated PSF modeling and prescription retrieval code. This software will be made available to the HST user community as soon as possible.

There can be advantages to using subsampled PSFs for image restoration, especially for WF/PC images. Virtually all WF/PC images are undersampled, and the quality of an image restoration can be improved if the restoration is computed on a higher density pixel grid (e.g., 2x2 or 3x3 subsampling). Subsampling is most effective for data that are most severely undersampled, such as WF images in the UV. Bear in mind that increasing the size of the image and PSF by subsampling will increase the computation time and memory requirements for deconvolution. To use a subsampled PSF you must also subsample the data (and model, if you have one) to the same pixel grid. You can use block replication to subsample the data.
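Block replication, mentioned above as the way to put the data on the subsampled grid, simply spreads each pixel's counts over a factor x factor block of subpixels so that total flux is conserved:

```python
import numpy as np

def block_replicate(image, factor):
    """Subsample an image onto a finer grid by block replication
    (illustrative sketch), conserving total flux: each pixel's counts
    are divided evenly among its factor x factor subpixels."""
    fine = np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)
    return fine / factor**2
```

For a 2x2 subsampled restoration, both the image and its (already subsampled) PSF would be placed on this finer grid before running the deconvolution.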
