Welcome to the ccdproc documentation! Ccdproc is an affiliated package for AstroPy for basic data reduction of CCD images. The ccdproc package provides many of the tools needed for processing CCD images, built on a framework that provides error propagation and bad pixel tracking throughout the reduction process.
Documentation¶
The documentation for this package is here:
Installation¶
Requirements¶
Ccdproc has the following requirements:
- astropy v0.4 or later
- numpy
- scipy
One easy way to get these dependencies is to install a Python distribution such as Anaconda.
Installing ccdproc¶
Using pip¶
To install ccdproc with pip, simply run:
pip install --no-deps ccdproc
Note
The --no-deps flag is optional, but highly recommended if you already have Numpy installed, since otherwise pip will sometimes try to “help” you by upgrading your Numpy installation, which may not always be desired.
Building from source¶
Obtaining the source packages¶
Source packages¶
The latest stable source package for ccdproc can be downloaded here.
Development repository¶
The latest development version of ccdproc can be cloned from github using this command:
git clone git://github.com/astropy/ccdproc.git
Building and Installing¶
To build ccdproc (from the root of the source tree):
python setup.py build
To install ccdproc (from the root of the source tree):
python setup.py install
Testing a source code build of ccdproc¶
The easiest way to test that your ccdproc built correctly (without installing ccdproc) is to run this from the root of the source tree:
python setup.py test
CCD Data reduction (ccdproc)¶
Introduction¶
Note
ccdproc works only with astropy version 0.4.0 or later.
The ccdproc package provides:
- An image class, CCDData, that includes an uncertainty for the data, units, and methods for performing arithmetic with images, including the propagation of uncertainties.
- A set of functions performing common CCD data reduction steps (e.g. dark subtraction, flat field correction) with a flexible mechanism for logging reduction steps in the image metadata.
- A class for combining and/or clipping images, Combiner, and associated functions.
Getting Started¶
A CCDData object can be created from a numpy array (masked or not) or from a FITS file:
>>> import numpy as np
>>> from astropy import units as u
>>> import ccdproc
>>> image_1 = ccdproc.CCDData(np.ones((10, 10)), unit="adu")
An example of reading from a FITS file is image_2 = ccdproc.CCDData.read('my_image.fits', unit="electron") (the electron unit is defined as part of ccdproc).
The metadata of a CCDData object may be any dictionary-like object, including a FITS header. When a CCDData object is initialized from a FITS file, its metadata is a FITS header.
The data is accessible either by indexing directly or through the data attribute:
>>> sub_image = image_1[:, 1:-3] # a CCDData object
>>> sub_data = image_1.data[:, 1:-3] # a numpy array
See the documentation for CCDData for a complete list of attributes.
Most operations are performed by functions in ccdproc:
>>> dark = ccdproc.CCDData(np.random.normal(size=(10, 10)), unit="adu")
>>> dark_sub = ccdproc.subtract_dark(image_1, dark,
... dark_exposure=30*u.second,
... data_exposure=15*u.second,
... scale=True)
Every function returns a copy of the data with the operation performed; the original image is not modified. To keep the result, assign it to a (new or existing) name:
>>> image_2 = ccdproc.subtract_dark(image_1, dark, dark_exposure=30*u.second, data_exposure=15*u.second, scale=True)
See the documentation for subtract_dark for more compact ways of providing exposure times.
Every function in ccdproc supports logging through the addition of information to the image metadata.
Logging can be simple – add a string to the metadata:
>>> image_2_gained = ccdproc.gain_correct(image_2, 1.5 * u.photon/u.adu, add_keyword='gain_corrected')
Logging can be more complicated – add several keyword/value pairs by passing a dictionary to add_keyword:
>>> my_log = {'gain_correct': 'Gain value was 1.5',
... 'calstat': 'G'}
>>> image_2_gained = ccdproc.gain_correct(image_2,
... 1.5 * u.photon/u.adu,
... add_keyword=my_log)
The Keyword class provides a compromise between the simple and complicated cases for providing a single key/value pair:
>>> key = ccdproc.Keyword('gain_corrected', value='Yes')
>>> image_2_gained = ccdproc.gain_correct(image_2,
... 1.5 * u.photon/u.adu,
... add_keyword=key)
Keyword also provides a convenient way to get a value from image metadata and specify its unit:
>>> image_2.header['gain'] = 1.5
>>> gain = ccdproc.Keyword('gain', unit=u.photon/u.adu)
>>> image_2_var = ccdproc.create_deviation(image_2,
... gain=gain.value_from(image_2.header),
... readnoise=3.0 * u.photon)
You might wonder why there is a gain_correct at all, since the implemented gain correction simply multiplies by a constant. There are two things you get with gain_correct that you do not get with plain multiplication:
- Appropriate scaling of uncertainties.
- Correct propagation of units.
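As a quick illustration (a sketch reusing image_1 from above), compare the result of gain_correct with plain multiplication of the underlying array:
>>> gained = ccdproc.gain_correct(image_1, 1.5 * u.electron / u.adu)
>>> gained.unit                    # the unit is updated from adu to electron
Unit("electron")
>>> plain = image_1.data * 1.5     # a bare numpy array: no units, no uncertainty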
The same advantages apply to operations that are more complex, like flat correction, in which one image is divided by another:
>>> flat = ccdproc.CCDData(np.random.normal(1.0, scale=0.1, size=(10, 10)),
... unit='adu')
>>> image_1_flat = ccdproc.flat_correct(image_1, flat)
In addition to doing the necessary division, flat_correct propagates uncertainties (if they are set).
Using ccdproc¶
CCDData class¶
Getting started¶
Getting data in¶
The tools in ccdproc accept only CCDData objects, a subclass of NDData.
Creating a CCDData object from any array-like data is easy:
>>> import numpy as np
>>> import ccdproc
>>> ccd = ccdproc.CCDData(np.arange(10), unit="adu")
Note that behind the scenes, NDData creates references to (not copies of) your data when possible, so modifying the data in ccd will modify the underlying data.
You are required to provide a unit for your data. The most frequently used units for these objects are likely to be adu, photon and electron, which can be set either by providing the string name of the unit (as in the example above) or from unit objects:
>>> from astropy import units as u
>>> ccd_photon = ccdproc.CCDData([1, 2, 3], unit=u.photon)
>>> ccd_electron = ccdproc.CCDData([1, 2, 3], unit="electron")
If you prefer not to use the unit functionality then use the special unit u.dimensionless_unscaled when you create your CCDData images:
>>> ccd_unitless = ccdproc.CCDData(np.zeros((10, 10)),
... unit=u.dimensionless_unscaled)
A CCDData object can also be initialized from a FITS file:
>>> ccd = ccdproc.CCDData.read('my_file.fits', unit="adu")
If there is a unit in the FITS file (in the BUNIT keyword), that will be used, but a unit explicitly provided in read will override any unit in the FITS file.
There is no restriction at all on what the unit can be – any unit in astropy.units or that you create yourself will work.
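For example, a sketch using a unit created with astropy.units.def_unit (the unit name "DN" here is arbitrary):
>>> dn = u.def_unit('DN')    # define a custom unit
>>> ccd_dn = ccdproc.CCDData(np.ones((5, 5)), unit=dn)
>>> ccd_dn.unit
Unit("DN")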
In addition, the user can specify the extension in a FITS file to use:
>>> ccd = ccdproc.CCDData.read('my_file.fits', hdu=1, unit="adu")
If hdu is not specified, it will assume the data is in the primary extension. If there is no data in the primary extension, the first extension with data will be used.
Metadata¶
When initializing from a FITS file, the header property is initialized using the header of the FITS file. Metadata is optional, and can be provided by any dictionary or dict-like object:
>>> ccd_simple = ccdproc.CCDData(np.arange(10), unit="adu")
>>> my_meta = {'observer': 'Edwin Hubble', 'exposure': 30.0}
>>> ccd_simple.header = my_meta # or use ccd_simple.meta = my_meta
Whether the metadata is case sensitive or not depends on how it is initialized. A FITS header, for example, is not case sensitive, but a Python dictionary is.
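A quick illustration of the difference, using an astropy.io.fits.Header:
>>> from astropy.io import fits
>>> hdr = fits.Header()
>>> hdr['exposure'] = 30.0
>>> hdr['EXPOSURE']        # FITS headers ignore case
30.0
>>> plain = {'exposure': 30.0}
>>> 'EXPOSURE' in plain    # a plain dict does not
False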
Getting data out¶
A CCDData object behaves like a numpy array (masked, if the CCDData mask is set) in expressions, and the underlying data (ignoring any mask) is accessed through the data attribute:
>>> ccd_masked = ccdproc.CCDData([1, 2, 3], unit="adu", mask=[0, 0, 1])
>>> 2 * np.ones(3) * ccd_masked # one return value will be masked
masked_array(data = [2.0 4.0 --],
mask = [False False True],
fill_value = 1e+20)
>>> 2 * np.ones(3) * ccd_masked.data # ignores the mask
array([ 2., 4., 6.])
You can force conversion to a numpy array with:
>>> np.asarray(ccd_masked)
array([1, 2, 3])
>>> np.ma.array(ccd_masked.data, mask=ccd_masked.mask)
masked_array(data = [1 2 --],
mask = [False False True],
fill_value = 999999)
A method for converting a CCDData object to a FITS HDU list is also available. It converts the metadata to a FITS header:
>>> hdulist = ccd_masked.to_hdu()
You can also write directly to a FITS file:
>>> ccd_masked.write('my_image.fits')
Masks and flags¶
Although not required when a CCDData image is created, you can also specify a mask and/or flags.
A mask is a boolean array of the same size as the data, in which a value of True indicates that a particular pixel should be masked, i.e. not be included in arithmetic operations or aggregation.
Flags are one or more additional arrays (of any type) whose shape matches the shape of the data. For more details on setting flags see astropy.nddata.NDData.
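For example, a short sketch supplying a mask at creation time to flag a single bad pixel:
>>> bad_pixel_mask = np.zeros((10, 10), dtype=bool)
>>> bad_pixel_mask[4, 7] = True    # flag one known-bad pixel
>>> ccd_flagged = ccdproc.CCDData(np.ones((10, 10)), unit="adu",
...                               mask=bad_pixel_mask)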
WCS¶
The wcs attribute of a CCDData object can be set two ways.
- If the CCDData object is created from a FITS file that has WCS keywords in the header, the wcs attribute is set to an astropy.wcs.WCS object using the information in the FITS header.
- The WCS can also be provided when the CCDData object is constructed, with the wcs argument.
Either way, the wcs attribute is kept up to date if the CCDData image is trimmed.
Uncertainty¶
Pixel-by-pixel uncertainty can be calculated for you:
>>> data = np.random.normal(size=(10, 10), loc=1.0, scale=0.1)
>>> ccd = ccdproc.CCDData(data, unit="electron")
>>> ccd_new = ccdproc.create_deviation(ccd, readnoise=5 * u.electron)
See Gain correct and create deviation image for more details.
You can also set the uncertainty directly, either by creating a StdDevUncertainty object first:
>>> from astropy.nddata.nduncertainty import StdDevUncertainty
>>> uncertainty = 0.1 * ccd.data # can be any array whose shape matches the data
>>> my_uncertainty = StdDevUncertainty(uncertainty)
>>> ccd.uncertainty = my_uncertainty
or by providing an ndarray with the same shape as the data:
>>> ccd.uncertainty = 0.1 * ccd.data
INFO: Array provided for uncertainty; assuming it is a StdDevUncertainty. [ccdproc.ccddata]
In this case the uncertainty is assumed to be StdDevUncertainty. Using StdDevUncertainty is required to enable error propagation in CCDData.
If you want access to the underlying uncertainty, use its array attribute:
>>> ccd.uncertainty.array
array(...)
Arithmetic with images¶
Methods are provided to perform arithmetic operations with a CCDData image and a number, an astropy Quantity (a number with units) or another CCDData image.
Using these methods propagates errors correctly (if the errors are uncorrelated), takes care of any necessary unit conversions, and applies masks appropriately. Note that the metadata of the result is not set:
>>> result = ccd.multiply(0.2 * u.adu)
>>> uncertainty_ratio = result.uncertainty.array[0, 0]/ccd.uncertainty.array[0, 0]
>>> round(uncertainty_ratio, 5)
0.2
>>> result.unit
Unit("adu electron")
>>> result.header
CaseInsensitiveOrderedDict()
Note
In most cases you should use the functions described in Reduction toolbox to perform common operations like scaling by gain or doing dark or sky subtraction. Those functions try to construct a sensible header for the result and provide a mechanism for logging the action of the function in the header.
The arithmetic operators *, /, + and - are not overridden.
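Instead, use the corresponding methods (multiply, as shown above; add, subtract and divide behave analogously). A minimal sketch, with illustrative image names:
>>> ccd_a = ccdproc.CCDData(np.ones((5, 5)), unit="electron")
>>> ccd_b = ccdproc.CCDData(np.ones((5, 5)), unit="electron")
>>> total = ccd_a.add(ccd_b)                # units checked, uncertainties propagated
>>> scaled = ccd_a.multiply(2.0 * u.adu)    # multiply by a Quantity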
Combining images and generating masks from clipping¶
Note
No attempt has been made yet to optimize memory usage in Combiner. A copy is made, and a mask array constructed, for each input image.
The first step in combining a set of images is creating a Combiner instance:
>>> from astropy import units as u
>>> from ccdproc import CCDData, Combiner
>>> import numpy as np
>>> ccd1 = CCDData(np.random.normal(size=(10,10)),
... unit=u.adu)
>>> ccd2 = ccd1.copy()
>>> ccd3 = ccd1.copy()
>>> combiner = Combiner([ccd1, ccd2, ccd3])
The combiner task really combines two things: generation of masks for individual images via several clipping techniques and combination of images.
Image masks/clipping¶
There are currently two methods of clipping. Neither affects the data directly; instead each constructs a mask that is applied when images are combined.
Masking done by clipping operations is combined with the image mask provided when the Combiner is created.
Min/max clipping¶
minmax_clipping masks all pixels above or below user-specified levels. For example, to mask all values above 0.1 and below -0.3:
>>> combiner.minmax_clipping(min_clip=-0.3, max_clip=0.1)
Either min_clip or max_clip can be omitted.
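For example, to clip only on the high end:
>>> combiner.minmax_clipping(max_clip=0.1)   # no lower bound applied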
Sigma clipping¶
For each pixel of an image in the combiner, sigma_clipping masks the pixel if it is more than a user-specified number of deviations from the central value of that pixel in the list of images.
The sigma_clipping method is very flexible: you can specify both the function for calculating the central value and the function for calculating the deviation. The default is to use the mean (ignoring any masked pixels) for the central value and the standard deviation (again ignoring any masked values) for the deviation.
You can mask pixels more than 5 standard deviations above or 2 standard deviations below the median with:
>>> combiner.sigma_clipping(low_thresh=2, high_thresh=5, func=np.ma.median)
Note
Numpy's masked median can be very slow in exactly the situation typically encountered in reducing CCD data: a cube of data in which one dimension (in this case the number of frames in the combiner) is much smaller than the number of pixels.
Iterative clipping¶
To clip iteratively, continuing the clipping process until no more pixels are rejected, wrap the call to the clipping method in a loop:
>>> old_n_masked = 0 # dummy value to make loop execute at least once
>>> new_n_masked = combiner.data_arr.mask.sum()
>>> while (new_n_masked > old_n_masked):
... combiner.sigma_clipping(func=np.ma.median)
... old_n_masked = new_n_masked
... new_n_masked = combiner.data_arr.mask.sum()
Note that the default values for the high and low thresholds for rejection are 3 standard deviations.
Image combination¶
Image combination is straightforward; to combine by taking the average, excluding any pixels masked by clipping:
>>> combined_average = combiner.average_combine()
Performing a median combination is also straightforward:
>>> combined_median = combiner.median_combine() # can be slow, see below
With image scaling¶
In some circumstances it may be convenient to scale all images to some value before combining them. Do so by setting scaling:
>>> scaling_func = lambda arr: 1/np.ma.average(arr)
>>> combiner.scaling = scaling_func
>>> combined_average_scaled = combiner.average_combine()
This will normalize each image by its mean before combining (note that the underlying images are not scaled; scaling is only done as part of combining using average_combine or median_combine).
With image transformation¶
Reduction toolbox¶
Note
This is not intended to be an introduction to image reduction. While performing the steps presented here may be the correct way to reduce data in some cases, it is not correct in all cases.
Logging in ccdproc¶
All logging in ccdproc is done in the sense of recording the steps performed in image metadata. If you want to do logging in the Python sense of the word, please see those docs.
There are basically three logging options:
- Implicit logging: No setup or keywords needed; each of the functions below adds a note to the metadata when it is performed.
- Explicit logging: You can specify what information is added to the metadata using the add_keyword argument for any of the functions below.
- No logging: If you prefer no logging be done, you can “opt-out” by calling each function with add_keyword=None, as shown below.
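For example, a sketch of opting out (reusing image_2 from the Getting Started example above):
>>> silent = ccdproc.gain_correct(image_2, 1.5 * u.photon / u.adu,
...                               add_keyword=None)   # no log entry is written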
Gain correct and create deviation image¶
Uncertainty¶
An uncertainty can be calculated from your data with create_deviation:
>>> from astropy import units as u
>>> import numpy as np
>>> import ccdproc
>>> img = np.random.normal(loc=10, scale=0.5, size=(100, 232))
>>> data = ccdproc.CCDData(img, unit=u.adu)
>>> data_with_deviation = ccdproc.create_deviation(data,
... gain=1.5 * u.electron/u.adu,
... readnoise=5 * u.electron)
>>> data_with_deviation.header['exposure'] = 30.0 # for dark subtraction
The uncertainty, \(u_{ij}\), at pixel \((i,~j)\) with value \(p_{ij}\) is calculated as
\[u_{ij} = \left(g \cdot p_{ij} + \sigma_{rn}^{2}\right)^{1/2}\]
where \(g\) is the gain and \(\sigma_{rn}\) is the read noise. Gain is only necessary when the image units are different than the units of the read noise, and is used only to calculate the uncertainty. The data itself is not scaled by this function.
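As a quick numerical check of the formula above (values chosen arbitrarily):
>>> g, sigma_rn = 1.5, 5.0    # gain (electron/adu) and read noise (electron)
>>> p = 10.0                  # a pixel value in adu
>>> expected = (g * p + sigma_rn ** 2) ** 0.5   # about 6.32 electrons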
As with all of the functions in ccdproc, the input image is not modified. In the example above the new image data_with_deviation has its uncertainty set.
Gain¶
To apply a gain to an image, do:
>>> gain_corrected = ccdproc.gain_correct(data_with_deviation, 1.5*u.electron/u.adu)
The result gain_corrected has its data and uncertainty scaled by the gain and its unit updated.
There are several ways to provide the gain, among them as an astropy.units.Quantity, as in the example above, or as a ccdproc.Keyword. See the documentation for gain_correct for details.
Clean image¶
There are two ways to clean an image of cosmic rays. One is to use clipping to create a mask for a stack of images, as described in Image masks/clipping.
The other is to replace, in a single image, each pixel that is several standard deviations from a central value in a region surrounding that pixel. The methods below describe how to do that.
LACosmic¶
The lacosmic technique identifies cosmic rays by flagging pixels using a variation of Laplacian edge detection. The algorithm is an implementation of the code described in van Dokkum (2001) [1].
Use this technique with cosmicray_lacosmic (an error image may optionally be supplied as the second argument):
>>> cr_cleaned = ccdproc.cosmicray_lacosmic(gain_corrected,
...                                         thresh=5, mbox=11, rbox=11,
...                                         gbox=5)
Median¶
Another cosmic ray cleaning algorithm available in ccdproc is cosmicray_median, which is analogous to iraf.imred.crutil.crmedian. Use this technique with ccdproc.cosmicray_median:
>>> cr_cleaned = ccdproc.cosmicray_median(gain_corrected,
...                                       mbox=11, rbox=11, gbox=5)
Although ccdproc provides functions for identifying outlying pixels and for calculating the deviation of the background, you are free to provide your own error image (the error_image argument) instead.
There is one additional argument, gbox, that specifies the size of the box, centered on an outlying pixel, in which the pixel should be grown. The argument rbox specifies the size of the box used to calculate a median value if values for bad pixels should be replaced.
Subtract overscan and trim images¶
Note
- Images reduced with ccdproc do NOT have to come from FITS files. The discussion below is intended to ease the transition from the indexing conventions used in FITS and IRAF to python indexing.
- No bounds checking is done when trimming arrays, so indexes that are too large are silently set to the upper bound of the array. This is because numpy, which provides the infrastructure for the arrays in ccdproc, has this behavior.
Indexing: python and FITS¶
Overscan subtraction and image trimming are done with two separate functions. Both are straightforward to use once you are familiar with python’s rules for array indexing; both have arguments that allow you to specify the part of the image you want in the FITS standard way. The difference between python and FITS indexing is that python starts indexes at 0, FITS starts at 1, and the order of the indexes is switched (FITS follows the FORTRAN convention for array ordering, python follows the C convention).
The examples below include both python-centric versions and FITS-centric versions to help illustrate the differences between the two.
Consider an image from a FITS file in which NAXIS1=232 and NAXIS2=100, in which the last 32 columns along NAXIS1 are overscan. In FITS parlance, the overscan is described by the region [201:232, 1:100].
If that image has been read into a python array img by astropy.io.fits then the overscan is img[0:100, 200:232] (or, more compactly, img[:, 200:]), the starting value of the first index implicitly being zero and the ending value for both indices implicitly being the last index.
One aspect of python indexing may be particularly surprising to newcomers: indexing goes up to, but does not include, the end value. In img[0:100, 200:232] the end value of the first index is 99 and of the second index is 231, both what you would expect given that python indexing starts at zero, not one.
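To make the conversion concrete, a short numpy sketch using the dimensions above:
>>> img = np.zeros((100, 232))    # NAXIS2=100 rows, NAXIS1=232 columns
>>> overscan = img[:, 200:]       # the FITS region [201:232, 1:100]
>>> overscan.shape
(100, 32)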
Those transitioning from IRAF to ccdproc do not need to worry about this too much because the functions for overscan subtraction and image trimming both allow you to use the familiar BIASSEC and TRIMSEC conventions for specifying the overscan and the region to be retained in a trim.
Overscan subtraction¶
To subtract the overscan in our image from a FITS file in which NAXIS1=232 and NAXIS2=100, in which the last 32 columns along NAXIS1 are overscan, use subtract_overscan:
>>> # python-style indexing first
>>> oscan_subtracted = ccdproc.subtract_overscan(cr_cleaned,
... overscan=cr_cleaned[:, 200:],
... overscan_axis=1)
>>> # FITS/IRAF-style indexing to accomplish the same thing
>>> oscan_subtracted = ccdproc.subtract_overscan(cr_cleaned,
... fits_section='[201:232,1:100]',
... overscan_axis=1)
Note well that the argument overscan_axis always follows the python convention for axis ordering. Since the order of the indexes in the fits_section gets switched in the (internal) conversion to a python index, the overscan axis ends up being the second axis, which is numbered 1 in python zero-based numbering.
With the arguments in this example the overscan is averaged over the overscan columns (i.e. 200 through 231) and then subtracted row-by-row from the image. The median argument can be used to median combine instead.
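For example, a sketch of the same call using the median (assuming the median argument takes a boolean):
>>> oscan_subtracted_med = ccdproc.subtract_overscan(cr_cleaned,
...                                                  overscan=cr_cleaned[:, 200:],
...                                                  overscan_axis=1,
...                                                  median=True)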
This example is not very realistic: typically one wants to fit a low-order polynomial to the overscan region and subtract that fit:
>>> from astropy.modeling import models
>>> poly_model = models.Polynomial1D(1)  # first-order polynomial, i.e. a line
>>> oscan_subtracted = ccdproc.subtract_overscan(cr_cleaned,
... overscan=cr_cleaned[:, 200:],
... overscan_axis=1,
... model=poly_model)
See the documentation for astropy.modeling.polynomial for more examples of the available models and for a description of creating your own model.
Trim an image¶
The overscan-subtracted image constructed above still contains the overscan portion. We are assuming it came from a FITS file in which NAXIS1=232 and NAXIS2=100, in which the last 32 columns along NAXIS1 are overscan.
Trim it using trim_image, shown below in both python-style and FITS-style indexing:
>>> # FITS-style:
>>> trimmed = ccdproc.trim_image(oscan_subtracted,
... fits_section='[1:200, 1:100]')
>>> # python-style:
>>> trimmed = ccdproc.trim_image(oscan_subtracted[:, :200])
Note again that in python the order of indices is opposite that assumed in FITS format, that the last value in an index means “up to, but not including”, and that a missing value implies either first or last value.
Those familiar with python may wonder what the point of trim_image is; it looks like simply indexing oscan_subtracted would accomplish the same thing. The only additional thing trim_image does is to make a copy of the image before trimming it.
Note
By default, python automatically reduces array indices that extend beyond the actual length of the array to the actual length. In practice, this means you can supply an invalid shape for, e.g., trimming, and an error will not be raised. To make this concrete, ccdproc.trim_image(oscan_subtracted[:, :200000000]) will be treated as if you had put in the correct upper bound, 200.
Subtract bias and dark¶
Both of the functions below propagate the uncertainties in the science and calibration images if either or both is defined.
Assume in this section that you have created a master bias image called master_bias and a master dark image called master_dark that has been bias-subtracted so that it can be scaled by exposure time if necessary.
Subtract the bias with subtract_bias:
>>> fake_bias_data = np.random.normal(size=trimmed.shape) # just for illustration
>>> master_bias = ccdproc.CCDData(fake_bias_data,
... unit=u.electron,
... mask=np.zeros(trimmed.shape))
>>> bias_subtracted = ccdproc.subtract_bias(trimmed, master_bias)
There are several ways you can specify the exposure times of the dark and science images; see subtract_dark for a full description.
In the example below we assume there is a keyword exposure in the metadata of the trimmed image and the master dark, and that the units of the exposure are seconds (note that you can instead explicitly provide these times).
To perform the dark subtraction use subtract_dark:
>>> master_dark = master_bias.multiply(0.1) # just for illustration
>>> master_dark.header['exposure'] = 15.0
>>> dark_subtracted = ccdproc.subtract_dark(bias_subtracted, master_dark,
... exposure_time='exposure',
... exposure_unit=u.second,
... scale=True)
Note that scaling of the dark is not done by default; use scale=True to scale.
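If you prefer to provide the exposure times explicitly rather than through a metadata keyword, a sketch (using the same 30 s science and 15 s dark exposures as above):
>>> dark_subtracted_explicit = ccdproc.subtract_dark(bias_subtracted, master_dark,
...                                                  dark_exposure=15.0 * u.second,
...                                                  data_exposure=30.0 * u.second,
...                                                  scale=True)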
Correct flat¶
Given a flat frame called master_flat, use flat_correct to perform this calibration:
>>> fake_flat_data = np.random.normal(loc=1.0, scale=0.05, size=trimmed.shape)
>>> master_flat = ccdproc.CCDData(fake_flat_data, unit=u.electron)
>>> reduced_image = ccdproc.flat_correct(dark_subtracted, master_flat)
As with the additive calibrations, uncertainty is propagated in the division.
The flat is scaled by the mean of master_flat before dividing.
If desired, you can specify a minimum value the flat can have (e.g. to prevent division by zero). Any pixels in the flat whose value is less than min_value are replaced with min_value:
>>> reduced_image = ccdproc.flat_correct(dark_subtracted, master_flat,
... min_value=0.9)
[1] van Dokkum, P., 2001, “Cosmic-Ray Rejection by Laplacian Edge Detection”, The Publications of the Astronomical Society of the Pacific, Volume 113, Issue 789, pp. 1420-1427. doi:10.1086/323894
Reduction examples¶
Mostly still TBD, hopefully filled in with examples from users. There is one example ipython notebook.
ccdproc Package¶
The ccdproc package is a collection of code that will be helpful in basic CCD processing. These steps allow reduction of basic CCD data, either as stand-alone processing or as part of a pipeline.
Functions¶
- background_deviation_box(data, bbox) – Determine the background deviation with a box size of bbox.
- background_deviation_filter(data, bbox) – Determine the background deviation for each pixel from a box with size of bbox.
- cosmicray_lacosmic(ccd[, error_image, ...]) – Identify cosmic rays through the lacosmic technique.
- cosmicray_median(ccd[, error_image, thresh, ...]) – Identify cosmic rays through the median technique.
- create_deviation(ccd_data[, gain, ...]) – Create an uncertainty frame.
- flat_correct(ccd, flat[, min_value, add_keyword]) – Correct the image for flat fielding.
- gain_correct(ccd, gain[, gain_unit, add_keyword]) – Correct the gain in the image.
- rebin(ccd, newshape) – Rebin an array to have a new shape.
- sigma_func(arr) – Robust method for calculating the deviation of an array.
- subtract_bias(ccd, master[, add_keyword]) – Subtract master bias from image.
- subtract_dark(ccd, master[, dark_exposure, ...]) – Subtract dark current from an image.
- subtract_overscan(ccd[, overscan, ...]) – Subtract the overscan region from an image.
- test([package, test_path, args, plugins, ...]) – Run the tests using py.test.
- transform_image(ccd, transform_func[, ...]) – Transform the image.
- trim_image(ccd[, fits_section, add_keyword]) – Trim the image to the dimensions indicated.
Classes¶
- CCDData(*args, **kwd) – A class describing basic CCD data.
- Combiner(ccd_list[, dtype]) – A class for combining CCDData objects.
- Keyword(name[, unit, value]) – A class for holding a keyword/value pair, optionally with a unit.
Class Inheritance Diagram¶
Full Changelog¶
0.3.3 (2015-10-24)¶
New Features¶
Other Changes and Additions¶
- Opt in to new container-based builds on travis. [#227]
- Update astropy_helpers to 1.0.5. [#245]
Bug Fixes¶
- Ensure that creating a WCS from a header that contains list-like keywords (e.g. BLANK or HISTORY) succeeds. [#229, #231]
0.3.2¶
There was no 0.3.2 release because of a packaging error.
0.3.1 (2015-05-12)¶
New Features¶
Other Changes and Additions¶
- Add extensive tests to ensure ccdproc functions do not modify the input data. [#208]
- Remove red-box warning about API stability from docs. [#210]
- Support astropy 1.0.5, which made changes to NDData. [#242]
Bug Fixes¶
- Make subtract_overscan act on a copy of the input data. [#206]
- Overscan subtraction failed on non-square images if the overscan axis was the first index, 0. [#240, #244]
0.3.0 (2015-03-17)¶
New Features¶
- When reading in a FITS file, the extension to be used can be specified. If it is not and there is no data in the primary extension, the first extension with data will be used.
- Set wcs attribute when reading from a FITS file that contains WCS keywords and write WCS keywords to header when converting to an HDU. [#195]
Other Changes and Additions¶
- Updated CCDData to use the new version of NDData in astropy v1.0. This breaks backward compatibility with earlier versions of astropy.
Bug Fixes¶
- Ensure dtype of combined images matches the dtype of the Combiner object. [#189]
0.2.2 (2014-11-05)¶
New Features¶
- Add dtype argument to ccdproc.Combiner to help control memory use. [#178]
Other Changes and Additions¶
- Added Changes to the docs [#183]
Bug Fixes¶
- Allow the unit string “adu” to be upper or lower case in a FITS header. [#182]
0.2.1 (2014-09-09)¶
New Features¶
- Add a unit directly from BUNIT if it is available in the FITS header [#169]
Other Changes and Additions¶
- Relaxed the requirements on what the metadata must be. It can be anything dict-like, e.g. an astropy.io.fits.Header, a python dict, an OrderedDict or some custom object created by the user. [#167]
Bug Fixes¶
- Fixed a new-style formatting issue in the logging. [#170]
0.2 (2014-07-28)¶
- Initial release.