Instrument Science Report NICMOS-97-028
The STScI NICMOS Pipeline: CALNICA,
Single Image Reduction (Revision A)
Howard Bushouse, Chris Skinner, John MacKenty
October 28, 1997
ABSTRACT
This ISR describes CALNICA, the single-image reduction portion of CALNIC, which produces individual calibrated images. We include a brief reminder of the NICMOS operational modes and data formats, then a description of the algorithmic steps in CALNICA,
and then the reference files used by CALNICA.
Changes in Revision A: The order of the calibration processing steps has changed for
CALNICA versions 2.0 and higher: DARKCORR now precedes NLINCORR, and BIASCORR now follows MASKCORR. The nature of several steps has also changed. BIASCORR no longer performs the subtraction of the ADC zero level (not necessary), but
instead performs the wrapped pixel correction. The NOISCALC algorithm no longer uses
the value of NREAD in its calculations, but instead there are separate NOISFILE reference files for different settings of NREAD. The NLINCORR algorithm has been modified
such that the three-piece quadratic correction has been replaced by a single linear correction. The DARKCORR step now uses separate reference files for each of the MULTIACCUM sample sequences. In versions 2.3 and higher of CALNICA the CRIDCALC
algorithm applied to MULTIACCUM data has been modified such that the final countrate
is computed using linear regression rather than using the mean of individual samples, and
uses the NOISFILE reference file. We also indicate that the BACKCALC and WARNCALC
steps have not yet been implemented.
1. Introduction
In this ISR we describe the steps in CALNICA, the portion of the NICMOS pipeline which
reduces and calibrates single images. The output of CALNICA is an image that has had all
instrumental signatures removed and in which the photometric header keywords have been
populated. NICMOS data file formats and contents used in the pipeline are described in a
general way in ISR NICMOS-002. The overall structure and functionality of the CALNIC
pipeline, which will be comprised of two parts (CALNICA, described here, and CALNICB,
described in a forthcoming ISR), is described in ISR NICMOS-008.
The remainder of this ISR briefly reviews the various NICMOS operational modes and the data
they produce, the NICMOS data file contents, and CALNICA design issues. It also describes the
individual algorithmic steps in CALNICA and the calibration reference files and tables used by the
algorithms.
2. NICMOS Operational Modes
NICMOS operates in four modes: ACCUM, MULTIACCUM, RAMP, and BRIGHTOBJECT.
In ACCUM mode, a single exposure is taken. The array is initially read out (non-destructively)
some number of times n, defined by the observer, and for each pixel the mean of the resulting values is calculated on-board. The array then integrates, for a time chosen by the observer from a
pre-defined menu. Finally, the array is again read out non-destructively n times and the average
calculated on-board. The difference between initial and final values is the signal which will be
received on the ground. This is the simplest observing mode.
MULTIACCUM mode allows only a single initial and final read. However, a number n of intermediate non-destructive reads are made during the course of the exposure. The temporal spacing
of the readouts may be linear or logarithmic, and is freely selectable by the observer. In this mode,
the initial read (known hereafter as the zeroth read) is not subtracted on-board from the subsequent reads, so this will have to be done on the ground. The total number of reads after the zeroth
read may be up to 25 (and it should be noted that the number n defined by the observer is the number after the zeroth read). Note that the final read will contain the signal accumulated from the
entire exposure time, not just the interval between the last and the penultimate reads. The result of
each of the n+1 reads will be received on the ground.
RAMP mode makes multiple non-destructive reads during the course of a single exposure much
like MULTIACCUM, but only a single image is sent to the ground. The results of each successive
read are used to iteratively calculate a mean count-rate and an associated variance for each pixel.
A variety of on-board processing can be employed to detect or rectify the effects of saturation or
Cosmic Ray hits. The data sent to the ground comprises a single image, plus an uncertainty and
the number of valid samples for each pixel. For each pixel the science data image contains the
mean number of counts per ramp integration period.
BRIGHTOBJECT mode provides a means to observe objects which ordinarily would saturate the
arrays in less than the minimum available exposure time (which is of order 0.2 seconds). In this
mode each individual pixel is successively reset, integrated for a time defined by the observer, and
read out. Since each quadrant contains 16,384 pixels, the total elapsed time to take an image in
this mode is 16,384 times the exposure time for each pixel. The result is a single image just like
that produced in ACCUM mode.
3. NICMOS Data File Contents
The data from an individual NICMOS exposure (where MULTIACCUM observations are considered a single exposure) is contained in a single FITS file. The data for an exposure consists of five
arrays, each stored as a separate image extension in the FITS file. The five data arrays represent
the science (SCI) image from the FPA, an error (ERR) array containing statistical uncertainties (in
units of 1 σ) of the science data, an array of bit-encoded data quality (DQ) flags representing
known status or problem conditions of the science data, an array containing the number of data
samples (SAMP) that were used to calculate each science image pixel, and an array containing the
effective integration time (TIME) for each science image pixel.
This set of five arrays (known as an image set or “imset”) is repeated within the file for each readout of a MULTIACCUM exposure. However, the order of the imsets in a MULTIACCUM file is
such that the result of the longest integration time occurs first in the file (i.e. the order is in the
opposite sense from which they are obtained).
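As an illustration of this layout, the five arrays of a single imset might be held in memory in a structure along the following lines. This is a minimal sketch in C (the implementation language of CALNICA); the type and field names are hypothetical and are not the actual CALNICA data structures.

    #define NX 256          /* NICMOS detectors are 256 x 256 pixels */
    #define NY 256

    /* One image set ("imset"): the five arrays stored for each readout. */
    typedef struct {
        float          sci[NY][NX];   /* science image (SCI)                  */
        float          err[NY][NX];   /* 1-sigma uncertainties (ERR)          */
        unsigned short dq[NY][NX];    /* bit-encoded data quality flags (DQ)  */
        short          samp[NY][NX];  /* number of samples per pixel (SAMP)   */
        float          time[NY][NX];  /* effective integration time (TIME)    */
    } Imset;

    /* A MULTIACCUM exposure holds n+1 imsets; in the FITS file they are
     * stored in reverse time order, so groups[0] is the final readout. */
    typedef struct {
        int    nsamp;     /* number of imsets, i.e. n+1 */
        Imset *groups;    /* groups[0] = longest integration (last readout) */
    } MultiAccumExposure;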
In RAMP mode, as the data from successive integrations are averaged to calculate the count rate, the
variance of the rate for each pixel is also calculated. The data sent to the ground include the
count rate and also this variance. The variance as calculated on-board will be read into the error
array in the Generic Conversion stage on the ground, and must eventually be converted to a standard deviation for consistency with data from other modes. It should be noted that the variance
calculated in RAMP mode will not always be consistent with the manner in which uncertainties
are calculated on the ground in other modes. For instance, in the case where the number of RAMP
integrations in an exposure is rather small, the variance calculated in this manner may not deliver
a good estimate of the statistical uncertainty of the data. In the first build of CALNICA we will
simply propagate the on-board variance as our uncertainty in RAMP mode, but in future builds
further consideration will be given to the possibility of making the uncertainties consistent with
those which we calculate in other modes.
4. CALNICA Design Goals
All processing steps of CALNICA treat the five data arrays (imset) associated with an exposure as
a single entity. Furthermore, each step that affects the science image will propagate and update, as
necessary, the error, data quality, number of samples, and integration time arrays.
To facilitate the common treatment of the five data arrays within any step of the
pipeline, the format of all input and output data files - including most calibration reference images
- will be identical. The commonality of input and output science data file formats also allows for
partial processing and the reinsertion of partially processed data into the pipeline at any stage. In
other words, the pipeline will be fully ‘re-entrant’.
MULTIACCUM exposures will generate a single raw data file containing 5(n+1) extensions, or
n+1 imsets, where n is the number of readouts requested by the observer. When this is processed
by CALNICA, there will be two output files. The first will consist of a set of 5(n+1) extensions, or
n+1 imsets, each of which may have had some processing, and can be considered as an intermediate file. The second will contain just 5 extensions, or 1 imset, like the output from all other modes,
and will represent the result after the final or nth read. An exception will be the case where the
‘Take Data Flag’ is lowered during a MULTIACCUM exposure. This may indicate, for example,
that the spacecraft has lost its lock on guide stars and is no longer pointing at the desired target. In
this case NICMOS will continue to take data, as experience with other instruments has shown that
useful data may still be acquired when a TDF exception occurs. The data will all still be processed
and written to the intermediate file. However, the final output file will be the result of the exposure
only up to the time of the last readout before the TDF exception occurred.
Processing control will be provided by a set of “switches” consisting of keywords in the science
data file header. Two keywords associated with each processing step are used to 1) determine
whether or not a given step is to be performed, and 2) record whether or not a step has been performed. This means that CALNICA will be aware of what processing has already been done to a
given set of data, and so will be able to issue warnings if the user is requesting the same processing step to be done more than once (which would often, but not always, be undesirable).
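As a sketch of how a pair of such keywords could be used (the keyword values and the completion-keyword spelling shown here are illustrative assumptions, not necessarily the exact CALNICA conventions):

    #include <stdio.h>
    #include <string.h>

    /* Decide whether a calibration step should run, given its control switch
     * (e.g. DARKCORR = "PERFORM" or "OMIT") and the keyword recording whether
     * it has already been done. A warning is issued on repeat processing. */
    static int do_step(const char *step, const char *switch_val, const char *done_val)
    {
        if (strcmp(switch_val, "PERFORM") != 0)
            return 0;                              /* step not requested */

        if (strcmp(done_val, "PERFORMED") == 0)    /* already applied earlier */
            printf("Warning: %s has already been performed on these data\n", step);

        return 1;
    }

    int main(void)
    {
        /* Example: dark subtraction requested for data already dark-subtracted. */
        if (do_step("DARKCORR", "PERFORM", "PERFORMED")) {
            /* ... call the dark subtraction module here ... */
        }
        return 0;
    }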
The CALNICA code will consist of a series of reusable modules, one for each significant step as
defined in “Algorithmic Steps in CALNICA” on page 4 (e.g. dark subtraction, flat-field correction,
photometric calibration, ...). The code will be written in ANSI C, which will allow the code to be
portable to a number of different programming environments (e.g. IRAF, IDL, MIDAS, Figaro...),
as well as rendering it more easily maintainable. Infrastructure, to tie the modules of the code
together in a pipeline, will be written for the IRAF environment, so that CALNICA will run as part
of the STSDAS package as with all the other HST instruments. We anticipate that the NICMOS
Investigation Definition Team (IDT) will probably write an equivalent IDL infrastructure, so that
the same code will run as a pipeline in IDL. We are developing all the C code modules in collaboration with the IDT, so that although the pipelines will be run from different environments by
STScI and the IDT, both groups should be using identical algorithms and code. I/O to and from
the pipeline code is being minimised, and restricted as far as possible to the beginning and end of
run time, in order to maximise the ease of portability of the code to different environments.
5. Algorithmic Steps in CALNICA
Figure 1 shows the flow of CALNICA processing steps. Table 1 gives a summary list of the steps,
the name of the “switch” keyword that controls each step, and the reference file keyword (if any)
associated with each step, which contains the name of the reference file used by the step. Each
step is described in more detail below.
DATA I/O
Pipeline processing obviously must always commence by reading in the raw data file. This step
will also entail tests of the validity of the data. Some error conditions encountered here, such as if
the images are found to be of the wrong size or datatype, will cause the pipeline to terminate. Others
will generate warning messages or prompt specific non-standard behaviour. If CALNICA determines upon
reading in a data file that a given science data extension consists entirely of zeroes, appropriate
warnings will be issued, and no calibrations will be applied. Reference file data are also checked for
compatibility with the science data being processed. This typically entails checking for matching
parameters such as camera number, observing mode, filter, and readout speed. If any reference data are
missing or found to be inconsistent, processing will terminate.

Figure 1: Pipeline Processing by CALNICA. [Flow diagram: the raw (RAW) science image is carried through
the processing steps and keyword switches summarized in Table 1, using the associated reference files,
to produce the intermediate (IMA) and calibrated (CAL) output files; the SPT file feeds the WARNCALC
user-warning step.]
For MULTIACCUM observations the data from each readout are processed individually through
each calibration step up to the point where they are combined into a single image (see CRIDCALC below). For MULTIACCUM data, n+1 extensions of each type (SCI, ERR, DQ, SAMP
and TIME) are written into the intermediate calibrated output file, and the first imset encountered
in this file is the result of the last readout, whereas the single combined imset is written to the final
calibrated output file.
All input science and reference data, including all the readouts of a MULTIACCUM observation,
are read into memory at the start of processing. The science data are operated upon in-place by
each processing step and are not written to the output files until the successful completion of all
processing. Therefore errors that occur “midstream” will not leave behind partially processed
data.
ZOFFCORR
A NICMOS image is formed by taking the difference of two non-destructive detector readouts.
The first read occurs immediately after resetting each pixel in the detector. This records the starting value of each pixel and defines the beginning of an integration. The second non-destructive
read occurs after the desired integration time has elapsed and marks the end of the exposure. The
difference between the final and initial reads results in the science image.
As mentioned before, in all instrument observing modes except MULTIACCUM, the subtraction
of the two reads is performed on-board. In MULTIACCUM mode the initial (or “zeroth”) read is
returned along with all subsequent readouts and the subtraction must be performed in the ground
system pipeline. CALNICA will subtract the zeroth read image from all readouts, including itself.
The self-subtracted zeroth read image will be propagated throughout the remaining processing
steps, and included in the output products, so that a complete history of error estimates and DQ
flags is preserved. After this step has been performed, data from all observing modes “looks” the
same from the point of view of subsequent calibration steps.
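The operation itself is a straightforward image subtraction. A minimal sketch follows (the array dimensions and function name are illustrative; the real code also propagates the ERR, DQ, SAMP, and TIME arrays):

    #include <string.h>

    #define NX 256
    #define NY 256

    /* Subtract the zeroth-read SCI image from every readout of a MULTIACCUM
     * exposure, including the zeroth read itself (which becomes identically
     * zero). A copy of the zeroth read is taken first so that the in-place
     * subtraction does not corrupt the reference image. */
    static void zoff_subtract(float sci[][NY][NX], int nsamp, int zeroth)
    {
        static float zero_read[NY][NX];   /* copy of the zeroth-read image */
        int k, i, j;

        memcpy(zero_read, sci[zeroth], sizeof(zero_read));

        for (k = 0; k < nsamp; k++)
            for (j = 0; j < NY; j++)
                for (i = 0; i < NX; i++)
                    sci[k][j][i] -= zero_read[j][i];
    }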
No reference files will be used by this step.
Table 1. CALNICA Calibration Switches and Reference Files

  Switch     Processing Performed         Reference File   Reference File Contents
  Keyword                                 Keyword
  --------   --------------------------   --------------   ----------------------------
  ZOFFCORR   Subtract M-ACCUM zero read   N/A              N/A
  MASKCORR   Mask bad pixels              MASKFILE         bad pixel flag image
  BIASCORR   Wrapped pixel correction     N/A              N/A
  NOISCALC   Compute statistical errors   NOISFILE         read noise image
  DARKCORR   Dark subtraction             DARKFILE         dark current images
  NLINCORR   Linearity correction         NLINFILE         linearity coefficients image
  FLATCORR   Flat field correction        FLATFILE         flat field image
  UNITCORR   Convert to countrates        N/A              N/A
  PHOTCALC   Photometric calibration      PHOTTAB          photometric parameters table
  CRIDCALC   Identify cosmic ray hits     NOISFILE         read noise image
  BACKCALC   Predict background           BACKTAB          background model parameters
  WARNCALC   User warnings                N/A              N/A
MASKCORR
Flag values from the static bad pixel mask file are added to the DQ image extension of each science imset. This uses the MASKFILE reference file, which contains a flag array for known bad
(i.e. hot or cold) pixels. There will be one MASKFILE per detector, containing the bad pixel mask
in the DQ image extension of the file.
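Because the DQ values are bit-encoded, the natural way to combine the mask flags with any flags already present is a bitwise OR, so that no flag bit is counted twice. A minimal sketch (assuming the usual 256 x 256 arrays; not the actual CALNICA routine):

    #define NX 256
    #define NY 256

    /* Merge the static bad-pixel flags from the MASKFILE DQ image into the
     * DQ array of a science imset. */
    static void mask_flags(unsigned short dq[NY][NX],
                           const unsigned short mask_dq[NY][NX])
    {
        int i, j;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++)
                dq[j][i] |= mask_dq[j][i];   /* bitwise OR of flag bits */
    }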
BIASCORR
NICMOS uses 16-bit Analog-to-Digital Converters (ADCs), which convert the analog signal generated by the detectors into signed 16-bit integers. Because the numbers are signed and because
the full dynamic range of the converter output is utilized, raw pixel values obtained from individual detector readouts can range from -32768 to +32767 DN. In practice the detector bias level is
set so that a zero signal results in a raw value on the order of -22000 DN. In ACCUM, BRIGHTOBJECT, and RAMP modes, where the difference of initial and final readouts is computed onboard, the subtraction is also performed in 16-bit arithmetic. Therefore it is possible for the difference between the final and initial pixel values for a bright source to exceed the 16-bit dynamic
range of the calculation, in which case the final pixel value will “wrap around” the maximum
allowed value of +32767, resulting in a negative DN value. Given the level at which the NICMOS
detectors saturate and the A-D conversion factor, the maximum “real” pixel value that is expected
is on the order of +42000 DN. Such a value will be wrapped to about -23500 DN by the on-board
difference calculation.
The BIASCORR step therefore searches for pixel values in the SCI images in the range -23500 to
-32768 and, upon finding any, adds an offset of 65536 DN to these pixel values to reset them to
their original real values.
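A minimal sketch of this correction (the -23500 DN threshold is the value quoted above; the function name and array dimensions are illustrative):

    #define NX 256
    #define NY 256

    /* Undo 16-bit wrap-around in the on-board difference of final minus
     * initial reads: any SCI value that has wrapped past +32767 into the
     * range -32768 to -23500 DN is restored by adding 2^16 = 65536 DN. */
    static void fix_wrapped_pixels(float sci[NY][NX])
    {
        int i, j;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++)
                if (sci[j][i] <= -23500.0f)
                    sci[j][i] += 65536.0f;
    }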
No reference files are used by this step.
NOISCALC
Statistical errors will be computed in order to initially populate the error (ERR) image for each
imset. The error values will be computed from a noise model based on knowledge of the detector
characteristics. The noise model will be a simple treatment of photon counting statistics and
detector readout noise, i.e.:
σ = sqrt( σrd² + counts ⋅ adcgain ) / adcgain
where σrd is the read noise in units of electrons, counts is the signal in units of DN, and adcgain is
the electron-to-DN conversion factor. The value for adcgain will be read from the ADCGAIN
header keyword.
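In code, the noise model amounts to the following per-pixel calculation (a sketch only; clamping negative counts to zero before taking the Poisson term is an assumption made here, and the actual CALNICA routine also handles DQ propagation):

    #include <math.h>

    #define NX 256
    #define NY 256

    /* Populate the ERR image from the noise model
     *     sigma = sqrt( rdnoise^2 + counts * adcgain ) / adcgain   [DN]
     * where rdnoise is the NOISFILE read noise (electrons) and adcgain is
     * the electrons-per-DN conversion from the ADCGAIN header keyword. */
    static void compute_errors(const float sci[NY][NX], const float rdnoise[NY][NX],
                               float err[NY][NX], float adcgain)
    {
        int i, j;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++) {
                float counts = sci[j][i] > 0.0f ? sci[j][i] : 0.0f;
                float var_e  = rdnoise[j][i] * rdnoise[j][i] + counts * adcgain;
                err[j][i] = (float) (sqrt(var_e) / adcgain);
            }
    }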
This calculation is not needed for RAMP mode data because a variance image computed on-board
is recorded in the raw error image by the OPUS generic conversion process. In this case the
NOISCALC step simply computes the square-root of the raw error image so that the variance
values become standard deviations.
This step will use the NOISFILE reference file, which will be an image containing the read noise
(pixel-by-pixel), in units of electrons, for a given detector. The read noise is a function of both the
detector readout speed and the number of Multiple Initial and Final reads (MIFs). Therefore there
will be several NOISFILEs per detector: one for each combination of READOUT speed (FAST or SLOW) and
NREAD, the number of ACCUM mode MIFs. If there is not an available
NOISFILE with a value of NREAD that matches the science data being processed, the reference
file with the closest value of NREAD will be used by CALNICA and a warning will be issued indicating that the noise estimate may not be accurate. Data quality flags set in the NOISFILE DQ
image will be propagated into the science data DQ images.
DARKCORR
The detector dark current will be removed by subtracting a dark current reference image appropriate for the exposure time of the science data. This step will be skipped for BRIGHTOBJECT
mode observations because the very short exposure times result in insignificant dark current.
A simple scaling of a single dark reference image to match the exposure time of the science data
is, unfortunately, not possible due to the nonlinear behavior of the dark current as a function of
time. Therefore a library of dark current images will be maintained, covering a range of exposure
times and observers will be encouraged to obtain observations at these exposure times as much as
possible. When the exposure time of a science image matches that of one of the library darks, then
a simple subtraction of the reference image will be performed. When the science image exposure
time does not match any of those in the reference library, a suitable dark current image will be
constructed within CALNICA by interpolation of the reference images. The type of interpolation
that will be necessary has not yet been determined, but will most probably eventually be a spline
due to the highly nonlinear behaviour of the dark current. However, the initial release of
CALNICA will use linear interpolation, and the form of the interpolation may be altered in subsequent releases as we receive better information about the behaviour of the dark current. In the case
of observations with integrations longer than the longest library dark, a simple linear extrapolation of the temporal behaviour of the dark current will be adopted. Our current understanding of
the detectors suggests that for long integration times this should be entirely adequate.
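For the initial linear-interpolation scheme, the constructed dark is simply a weighted combination of the two bracketing library darks. A sketch under that assumption (illustrative names; extrapolation beyond the longest library dark and the quadrature combination of the ERR images are omitted):

    #define NX 256
    #define NY 256

    /* Build a dark image for exposure time texp by linear interpolation in
     * time between two library darks taken at times t1 < texp < t2. */
    static void interp_dark(const float dark1[NY][NX], float t1,
                            const float dark2[NY][NX], float t2,
                            float texp, float dark[NY][NX])
    {
        float w = (texp - t1) / (t2 - t1);   /* fractional distance from t1 to t2 */
        int i, j;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++)
                dark[j][i] = (1.0f - w) * dark1[j][i] + w * dark2[j][i];
    }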
There are several reference files (DARKFILE) per detector, which will contain dark images at the
exposure times for all MULTIACCUM standard sequences (indicated by the SAMP_SEQ keyword value), as well as one set for ACCUM mode exposure times for that detector. The pipeline
will determine whether or not a dark image exists that matches the science image exposure time
and, if not, will interpolate amongst the images to form an appropriate dark image.
Dark current uncertainties stored in the ERR image extensions of the DARKFILEs will be propagated into the science data ERR images by taking the quadratic sum of the errors at each pixel.
Data quality flags from the DARKFILE DQ images will also be propagated into the science data
DQ images.
NLINCORR
The linearization correction step corrects the integrated counts in the science image for nonlinear
response of the detectors. The observed response of the detectors can conveniently be represented
by three regimes. At low count levels the response is linear and directly proportional to the incoming photon flux. At intermediate levels the detector response deviates in a linear fashion from
the incoming flux, and at high levels - as saturation sets in - becomes highly nonlinear.
In CALNICA the boundaries of the three correction regimes will be defined by two threshold or
“node” values. The first node will denote the maximum value in DNs where no linearity correction is needed, while the second will denote the level beyond which no useful data can be
extracted. For intermediate values, the following correction is applied:

C′ = ( a1 + a2 ⋅ C ) ⋅ C

where C is the uncorrected science pixel value, C′ is the corrected value, and a1 and a2 are the linear
correction coefficients. The error estimate for this correction is

σ² = ε1 + C² ⋅ ε2 + 2 ⋅ C ⋅ ε12

where ε1 and ε2 are the variances of coefficients a1 and a2, and ε12 is the covariance of a1 and a2.
This uncertainty will be propagated (in quadrature) with the input error value for each pixel. Note
that in practice the a1 and a2 coefficients may be completely independent of one another, in which
case the ε12 covariance term will be zero.
Pixel values that are above the saturation limit (second node) receive no correction but will be
flagged in the DQ image with the saturation DQ value.
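Per pixel, the correction and its error propagation therefore look roughly like the following sketch (the SATURATED flag value and the function interface are hypothetical; the node values and coefficients come from the NLINFILE described below):

    #include <math.h>

    #define SATURATED 0x0040   /* hypothetical DQ bit for saturation */

    /* Apply the linear non-linearity correction to one pixel value c.
     *   node1, node2 : lower and upper correction thresholds (DN)
     *   a1, a2       : correction coefficients
     *   e1, e2, e12  : variances of a1 and a2, and their covariance
     * The corrected value is returned; *err and *dq are updated in place. */
    static float nlin_pixel(float c, float node1, float node2,
                            float a1, float a2,
                            float e1, float e2, float e12,
                            float *err, unsigned short *dq)
    {
        float cc, var;

        if (c <= node1)            /* low counts: response already linear */
            return c;

        if (c >= node2) {          /* beyond saturation: flag, do not correct */
            *dq |= SATURATED;
            return c;
        }

        cc  = (a1 + a2 * c) * c;                      /* corrected counts */
        var = e1 + c * c * e2 + 2.0f * c * e12;       /* sigma^2 from the expression above */
        *err = (float) sqrt((*err) * (*err) + var);   /* propagate in quadrature */
        return cc;
    }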
This step uses the NLINFILE reference file, which will consist of a set of images containing the
function coefficients and variances at each pixel location and does not, therefore, conform to the
standard NICMOS data file format. The coefficients will be stored as a set of image arrays, using
one image extension per coefficient, along with two variance and one covariance arrays. One data
quality array will be used in order to warn of poor quality fits for any pixel. Two arrays will also
be used to store the function node values. This gives a total of eight image extensions in the
NLINFILE. There will be one NLINFILE per detector.
FLATCORR
In this step the science data are corrected for variations in gain between pixels by multiplying by
an (inverse) flat field reference image. This step will be skipped for observations using a grism
because the flat field corrections are wavelength dependent and must be accomplished using other
techniques. This step uses the FLATFILE reference file, which contains the flat field image for a
given detector and filter (or polarizer) combination. There will be one FLATFILE per detector/filter combination.
Values from the FLATFILE ERR and DQ images will be propagated into the ERR and DQ images
of the science data.
UNITCORR
The conversion from raw counts to count rates will be performed by dividing the science image
data by the exposure time. The science data ERR images will also be divided by the exposure
time. The exposure time used will be the value from the SAMPTIME keyword in the SCI extension header of each imset. No reference file will be needed. For RAMP mode the SAMPTIME
value is the time per ramp integration, rather than the total elapsed time of the exposure.
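A sketch of the conversion (illustrative only; the SAMP, TIME, and DQ arrays are unaffected):

    #define NX 256
    #define NY 256

    /* Convert SCI and ERR from counts to count rates by dividing by the
     * exposure time taken from the SAMPTIME keyword of the imset. */
    static void to_countrate(float sci[NY][NX], float err[NY][NX], float samptime)
    {
        int i, j;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++) {
                sci[j][i] /= samptime;
                err[j][i] /= samptime;
            }
    }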
PHOTCALC
This step provides photometric calibration information by populating the photometry keywords
PHOTMODE, PHOTFLAM, PHOTFNU, PHOTZPT, PHOTPLAM, and PHOTBW with values appropriate to the camera and filter combination used for the observation. The photometry parameters
will be read from the PHOTTAB reference file, which will be a FITS binary table containing the
above six parameters for all observation modes.
The PHOTMODE parameter specifies the observing mode to which the photometric information
applies and is a string composed of 1) the instrument name (NICMOS), 2) camera number (1-3), 3)
filter, polarizer, or grism name (e.g. F110W), and 4) the “DN” specifier, which indicates the conversion from units of electrons to DNs. The PHOTMODE string will be constructed by CALNICA
using the values of the CAMERA and FILTER header keywords. The PHOTTAB table will be
searched for a matching PHOTMODE row entry and, upon finding one, CALNICA will read the
values of the five remaining parameters from that table row. The PHOTFLAM and PHOTFNU
parameters give the inverse sensitivity in units of ergs/cm²/Å/DN and Jy⋅sec/DN, respectively.
Therefore data that are in units of count rates (DN/sec) can be converted to absolute fluxes by
multiplying by either the PHOTFLAM or PHOTFNU values. The PHOTZPT parameter specifies the
zeropoint (in magnitudes) of the ST magnitude system, PHOTPLAM gives the pivot wavelength
(in Ångstroms) of the photometric bandpass, and PHOTBW gives the rms width (in Ångstroms) of
the bandpass.
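As a usage example, a calibrated count rate can be converted to a flux density and an ST magnitude with these keywords as follows (the numerical values below are placeholders, not real NICMOS sensitivities):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double countrate = 1250.0;    /* DN/sec for some source (made up)      */
        double photflam  = 3.0e-18;   /* ergs/cm^2/A/DN (placeholder value)    */
        double photzpt   = -21.10;    /* zeropoint of the ST magnitude system  */

        double flam  = countrate * photflam;           /* ergs/cm^2/s/A        */
        double stmag = -2.5 * log10(flam) + photzpt;   /* ST magnitude         */

        printf("F_lambda = %.3e ergs/cm^2/s/A, STMAG = %.2f\n", flam, stmag);
        return 0;
    }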
CRIDCALC
This step will identify and flag pixels suspected to be affected by cosmic ray hits and, for MULTIACCUM data, combine the data from the individual readouts into a single image. The
algorithm is not yet defined for single (ACCUM/BRIGHTOBJECT/RAMP) images, but for
MULTIACCUM data CRs can be recognised by seeking large changes in count rate between successive readouts (as is done on-board in RAMP mode). Such an algorithm has been implemented
in the second build of CALNICA (see below).
For single-image modes, identification of a cosmic ray hit will only result in setting the cosmic
ray DQ flag value for that pixel in the output calibrated (“cal”) image. The science (SCI) image
pixel values will not be modified. In MULTIACCUM mode, identification of a cosmic ray hit will
similarly result in the setting of a DQ flag in the appropriate imset of the intermediate (“ima”) output file, but no SCI pixel values will be modified in the intermediate file. In the process of creating
the final calibrated (“cal”) image for MULTIACCUM mode, however, only good (unflagged) pixels will be used. The samples (SAMP) and integration time (TIME) arrays in the MULTIACCUM
calibrated (“cal”) file will reflect the actual number of samples, and their total integration times,
used to compute the SCI image. The DQ image in the “cal” file will also indicate where good data
was used, as well as the error conditions associated with pixels where no good samples were
found.
The algorithm currently in use for MULTIACCUM data works on individual pixels. Any pixel
that is marked by a DQ flag will not be used in any of the computations. The process will be as follows. First, the accumulated counts from successive readouts will be differenced, resulting in a list
of nsamp-1 values representing the number of counts detected within individual readout periods.
Because the length of time between readouts usually varies within a MULTIACCUM sequence,
the number of counts in each sample period will then be normalized by the exposure time for each
sample. The list of normalized samples will then be searched for anomalous values which might
be due to CR hits, saturation, or noise. This will be done by computing the weighted mean of the
list of samples and looking for outliers from the mean. The weighting used in the computation of
the mean will be taken from the error estimate for each sample, where the error will be computed
using the same technique as in the NOISCALC step, i.e. the combination of read noise and Poisson noise in the signal for each sample. If the difference between the largest outlier and the mean
is more than five times the error estimate for the sample, the sample will be rejected. This process
will be repeated until no new samples are rejected, resulting in a list of “good” samples for the
pixel.
Next, the list of good samples will be recombined to construct the accumulated counts as a function of increasing exposure time and a weighted linear regression will be performed to compute
the mean count rate (and uncertainty) for the pixel. The count rate and uncertainty will be written
to the SCI and ERR images, respectively, of the output “cal” file, while the number of good samples and their total exposure time will be written to the “cal” file SAMP and TIME images. If at
least one good sample was available for a given pixel, the “cal” file DQ value will be set to zero.
For those pixels that had no good samples (e.g. a permanently hot or cold pixel, which will be bad
in all samples) the “cal” file SCI, ERR, SAMP, and TIME values will be set to zero and the combined DQ values from all samples will be written to the “cal” file DQ image, so that a user can
identify the reason for the bad pixel.
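For a single pixel, and assuming the iterative rejection described above has already produced the list of good samples, the weighted linear regression amounts to a standard weighted least-squares fit of accumulated counts against exposure time. A sketch (the interface is illustrative; the per-sample sigmas come from the NOISCALC-style noise model):

    #include <math.h>
    #include <stdio.h>

    /* Weighted least-squares fit of accumulated counts versus exposure time,
     * y = a + b*t, for one pixel. The slope b is the mean count rate and
     * sqrt(S/Delta) its 1-sigma uncertainty. The arrays hold only the good
     * (unrejected) samples for the pixel. Returns 0 on success. */
    static int fit_countrate(const double *times, const double *counts,
                             const double *sigmas, int n,
                             double *rate, double *rate_err)
    {
        double S = 0.0, Sx = 0.0, Sy = 0.0, Sxx = 0.0, Sxy = 0.0, delta, w;
        int i;

        if (n < 2)
            return -1;                    /* not enough samples for a fit */

        for (i = 0; i < n; i++) {
            w    = 1.0 / (sigmas[i] * sigmas[i]);
            S   += w;
            Sx  += w * times[i];
            Sy  += w * counts[i];
            Sxx += w * times[i] * times[i];
            Sxy += w * times[i] * counts[i];
        }

        delta     = S * Sxx - Sx * Sx;
        *rate     = (S * Sxy - Sx * Sy) / delta;   /* counts per second     */
        *rate_err = sqrt(S / delta);               /* uncertainty of slope  */
        return 0;
    }

    int main(void)
    {
        /* Illustrative samples: roughly 100 DN/sec with small deviations. */
        double t[] = { 1.0, 2.0, 4.0, 8.0, 16.0 };
        double y[] = { 98.0, 203.0, 401.0, 797.0, 1602.0 };
        double s[] = { 15.0, 16.0, 18.0, 22.0, 30.0 };
        double rate, err;

        if (fit_countrate(t, y, s, 5, &rate, &err) == 0)
            printf("rate = %.2f +/- %.2f DN/sec\n", rate, err);
        return 0;
    }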
Finally, for those pixels that had outliers identified, a CR hit flag value (512) will be set in the
“ima” file DQ image in the imset corresponding to the sample (readout) in which the outlier was
found. Note once again, however, that the science data (i.e. the SCI and ERR images) in the “ima”
file will not be modified or corrected in any way.
For MULTIACCUM observations, this step will use the NOISFILE reference file for the read
noise contribution in error calculations.
BACKCALC
This step will compute a predicted background (sky plus thermal) signal level for the observation
based on models of the zodiacal scattered light and the telescope thermal background. The results
will be stored in header keywords. This step will use the BACKTAB reference file, which will be
a FITS binary table containing model background parameters. This step is not yet implemented in
CALNICA.
WARNCALC
In this step various (TBD) engineering keyword values will be examined and warning messages to
the observer will be generated if there are any indications that the science data may be compromised due to unusual instrumental characteristics or behavior (such as unusual excursions in some
engineering parameters during the course of an observation). No reference files will be used. This
step is not yet implemented in CALNICA.
STATISTICS
Statistics of the SCI image data will be computed for all CALNICA output files. The quantities
computed will be the mean, standard deviation, minimum, and maximum of the good (unflagged)
pixels in the whole image, as well as each quadrant. The computed values will be stored as SCI
image header keywords in the output “ima” and “cal” files.
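A sketch of such a computation for one image (or for one quadrant, by restricting the loop limits); the interface is illustrative:

    #include <math.h>

    #define NX 256
    #define NY 256

    /* Mean, standard deviation, minimum, and maximum of the good (DQ == 0)
     * pixels of a SCI image. Returns the number of good pixels. */
    static long image_stats(const float sci[NY][NX], const unsigned short dq[NY][NX],
                            double *mean, double *stdev, float *minv, float *maxv)
    {
        double sum = 0.0, sumsq = 0.0;
        long   n = 0;
        int    i, j;

        *mean = *stdev = 0.0;
        *minv =  1.0e30f;
        *maxv = -1.0e30f;

        for (j = 0; j < NY; j++)
            for (i = 0; i < NX; i++) {
                if (dq[j][i] != 0)
                    continue;                       /* skip flagged pixels */
                sum   += sci[j][i];
                sumsq += sci[j][i] * sci[j][i];
                if (sci[j][i] < *minv) *minv = sci[j][i];
                if (sci[j][i] > *maxv) *maxv = sci[j][i];
                n++;
            }

        if (n > 0) {
            *mean = sum / n;
            if (n > 1)
                *stdev = sqrt((sumsq - n * (*mean) * (*mean)) / (n - 1));
        }
        return n;
    }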