Combining images and generating masks from clipping¶
Note
No attempt has been made yet to optimize memory usage in Combiner. A copy is
made, and a mask array constructed, for each input image.
The first step in combining a set of images is creating a Combiner instance:
>>> from astropy import units as u
>>> from ccdproc import CCDData, Combiner
>>> import numpy as np
>>> ccd1 = CCDData(np.random.normal(size=(10,10)),
...                unit=u.adu)
>>> ccd2 = ccd1.copy()
>>> ccd3 = ccd1.copy()
>>> combiner = Combiner([ccd1, ccd2, ccd3])
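The clipping methods described below all operate on the masked-array stack that Combiner builds from the inputs, data_arr (the same attribute used in the iterative-clipping example further down). A quick sketch, assuming the images are stacked along a new first axis:
>>> combiner.data_arr.shape   # one layer per input image
(3, 10, 10)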
The combiner task really combines two things: generation of masks for individual images via several clipping techniques, and combination of images.
Image masks/clipping¶
There are currently three methods of clipping. None affect the data directly; instead each constructs a mask that is applied when images are combined.
Masking done by clipping operations is combined with the image mask provided when the Combiner is created.
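For example, a pixel already masked in an input CCDData stays masked in the combiner. A small sketch (the combiner_with_mask name and the choice of bad pixel are illustrative only):
>>> mask = np.zeros((10, 10), dtype=bool)
>>> mask[0, 0] = True   # pretend this pixel is known to be bad
>>> ccd_masked = CCDData(np.random.normal(size=(10, 10)),
...                      unit=u.adu, mask=mask)
>>> combiner_with_mask = Combiner([ccd_masked, ccd2, ccd3])
>>> bool(combiner_with_mask.data_arr.mask[0, 0, 0])   # input mask carried over
True
Any clipping performed on this combiner then adds to that mask rather than replacing it.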
Min/max clipping¶
minmax_clipping masks all pixels above or below user-specified levels. For example, to mask all values above the value 0.1 and below the value -0.3:
>>> combiner.minmax_clipping(min_clip=-0.3, max_clip=0.1)
Either min_clip or max_clip can be omitted.
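For instance, to clip only on the high side (a small sketch, not part of the original example):
>>> combiner.minmax_clipping(max_clip=0.1)   # mask only values above 0.1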
Sigma clipping¶
For each pixel of an image in the combiner, sigma_clipping masks the pixel if it is more than a user-specified number of deviations from the central value of that pixel in the list of images.
The sigma_clipping method is very flexible: you can specify both the function for calculating the central value and the function for calculating the deviation. The default is to use the mean (ignoring any masked pixels) for the central value and the standard deviation (again ignoring any masked values) for the deviation.
You can mask pixels more than 5 standard deviations above or 2 standard deviations below the median with:
>>> combiner.sigma_clipping(low_thresh=2, high_thresh=5, func=np.ma.median)
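As a sketch of the flexibility described above, both the center and the spread can be spelled out explicitly (the dev_func keyword is assumed here to accept a masked-array function in the same way func does):
>>> combiner.sigma_clipping(low_thresh=2, high_thresh=5,
...                         func=np.ma.median, dev_func=np.ma.std)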
Note
The numpy masked median can be very slow in exactly the situation typically encountered in reducing CCD data: a cube of data in which one dimension (in this case the number of frames in the combiner) is much smaller than the number of pixels.
Extrema clipping¶
For each pixel position in the input arrays, the algorithm will mask the highest nhigh and lowest nlow pixel values. The resulting image will be a combination of Nimages - nlow - nhigh pixel values instead of the combination of Nimages worth of pixel values.
You can mask the lowest pixel value and the highest two pixel values with:
>>> combiner.clip_extrema(nlow=1, nhigh=2)
Iterative clipping¶
To clip iteratively, continuing the clipping process until no more pixels are rejected, call the clipping method in a loop:
>>> old_n_masked = 0 # dummy value to make loop execute at least once
>>> new_n_masked = combiner.data_arr.mask.sum()
>>> while (new_n_masked > old_n_masked):
...     combiner.sigma_clipping(func=np.ma.median)
...     old_n_masked = new_n_masked
...     new_n_masked = combiner.data_arr.mask.sum()
Note that the default values for the high and low thresholds for rejection are 3 standard deviations.
Image combination¶
Image combination is straightforward; to combine by taking the average, excluding any pixels masked by clipping:
>>> combined_average = combiner.average_combine()
Performing a median combination is also straightforward,
>>> combined_median = combiner.median_combine() # can be slow, see below
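Both methods return a CCDData object. As a quick sketch (using the 10 x 10 images created above), the combined image has the same shape as the inputs:
>>> combined_average.data.shape
(10, 10)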
With image scaling¶
In some circumstances it may be convenient to scale all images to some value before combining them. Do so by setting scaling:
>>> scaling_func = lambda arr: 1/np.ma.average(arr)
>>> combiner.scaling = scaling_func
>>> combined_average_scaled = combiner.average_combine()
This will normalize each image by its mean before combining (note that the underlying images are not scaled; scaling is only done as part of combining using average_combine or median_combine).
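A quick sketch confirming that statement: after the scaled combination, the data stored in the combiner still match the original (unscaled) first image (np.ma.allclose ignores any pixels masked by the clipping above):
>>> bool(np.ma.allclose(combiner.data_arr[0], ccd1.data))
True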
With image transformation¶
Note
Flux conservation: Whether flux is conserved in performing the reprojection depends on the method you use for reprojecting and the extent to which pixel area varies across the image. wcs_project rescales counts by the ratio of pixel area of the pixel indicated by the keywords CRPIX of the input and output images.
The available reprojection methods are described in the documentation for the reproject project; consult those documents for details.
You should carefully check whether the flux conservation provided by ccdproc is adequate for your needs. Suggestions for improvement are welcome!
Aligning and then combining images based on the World Coordinate System (WCS) information in the image headers is done in two steps.
First, reproject each image onto the same footprint using wcs_project. The example below assumes you have an image with WCS information and another image (or WCS) onto which you want to project your images:
>>> from ccdproc import wcs_project
>>> reprojected_image = wcs_project(input_image, target_wcs)
Repeat this for each of the images you want to combine, building up a list of reprojected images:
>>> reprojected = []
>>> for img in my_list_of_images:
...     new_image = wcs_project(img, target_wcs)
...     reprojected.append(new_image)
Then, combine the images as described above for any set of images:
>>> combiner = Combiner(reprojected)
>>> stacked_image = combiner.average_combine()