Lab 2: CCD/CMOS Detector Characterization - Cornell

Introduction to CCD/CMOS Detector Calibration
In this lab we will look at the methods used to calibrate various characteristics common to both CCD and
CMOS detectors. Some of the basic properties we will measure are the read noise, bias, dark current, gain,
and responsivity of the detector. We will discuss each of these in more detail. Please take the time to
look over the following links to familiarize yourself with the architecture of both the CCD and CMOS
detectors as well as some of their common advantages and disadvantages. Keep these in mind as you
are going through the lab to understand how these might be affecting the quantities you are measuring.
Optional Preparation Readings
Digital Imaging – Although focused on CCDs, many of the concepts found here apply to both CCD and
CMOS detectors. Reading here is not required, but feel free to glance through to get a primer on many
of the topics we will talk about.
CCD Background Info
CCD Overview – A fairly concise and comprehensive overview of CCD architecture and read out
CCD Blooming – A discussion of one of the main challenges in CCD imaging
CMOS Background Info
CMOS Overview – From the same company as the CCD Overview
Rolling Shutter – An informal discussion of rolling shutters - one of the main challenges of CMOS imaging
N.B. The shutter on our CMOS detector actually reads out from the center of the detector, not from top
to bottom like animations in the link. This is because it is actually a combination of two separate chips
that are read out together. Keep this in mind when going through the lab.
Goals
The goals of this lab are to understand and measure the characteristics of a CCD detector and how these
characteristics may affect the science data we are looking to interpret. As you should have learned from
the background readings, a CCD detector is very similar to a CMOS, and although we will focus on
characterizing properties common to both detectors, there are some differences between them that
must be calibrated differently (e.g. blooming). This lab will have a strong focus on array manipulation and
operations as well as statistical fitting and error analysis (it is always important to know the error in any
measured quantity). Although this lab will focus on calibrating a CMOS detector, you should think about
how a CCD would be calibrated similarly/differently along the way.
Logistics
All of the data you will need to carry out this lab can be found in the DATA subdirectory of the LAB2 folder
that you downloaded. Also in that directory is a README file that explains the naming conventions for
the files as well as the subdirectories included in the download. READ THIS FILE AND BE FAMILIAR
WITH THE NAMING CONVENTIONS. Having clear and explicit file names allows for easy automation and
processing of the large number of files that we will be working with by pulling all of the necessary
information of how the data was collected from the file name.
The structure of each section will be an introduction explaining the characteristic of interest followed by
a section detailing how to collect the data (all of which will be provided in this case) as well as a
guideline to the steps of how to analyze the data. This lab will require much more independent
programming than Lab 1 so please feel free to ask any questions you have along the way. As a guideline
and tool, a Python script that runs many of the algorithms we will discuss in this lab has been provided.
This is meant simply as a helpful hint file. Python handles data differently than Matlab and the script
performs functions that are not asked for in this lab – so do not attempt to simply copy functions from
the Python file. They will most likely not work for our purposes. Also in each section are questions that
have been colored and italicized that should be answered. Like with Lab 1, a separate lab write-up
document has been provided (in the REPORT subdirectory) with a copy of all the questions. The
questions are included in the lab simply so you can think about them along the way.
Please make sure that all plots and images have appropriate titles, legends, colorbars, axis labels, etc. as
needed. Any unclear or unlabeled plots will not receive full credit. The submission procedure will be
the same as for Lab 1 and is detailed in the lab write-up document.
Brief Digression: Filters
Neutral Density Filters
Because the light source we will be using is so bright (even at the lowest levels) we need to use neutral
density filters to block a significant fraction of the light reaching the detector. Neutral density filters
reduce light equally by a fixed amount across a defined region of wavelengths (for us that region is the
optical). They are “neutral” because of this equal reduction. An ND filter is defined by the optical
density, τ, given by:
I/I₀ = 10^(−τ)
We will use a combination of τ = 0.3, 0.6, & 0.9 to reduce our light by ~98.5%!
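As a quick sanity check, optical densities of stacked ND filters add, so the combined transmission can be computed directly. A minimal Python sketch (the function name here is ours, not from the lab files):

```python
# Hypothetical helper: combined transmission of stacked ND filters.
# Optical densities add, so the transmitted fraction is 10**(-sum(tau)).

def nd_transmission(densities):
    """Fraction of light transmitted by a stack of ND filters."""
    total_tau = sum(densities)
    return 10 ** (-total_tau)

T = nd_transmission([0.3, 0.6, 0.9])   # tau_total = 1.8
print(f"transmitted: {T:.4f}, blocked: {1 - T:.1%}")
# transmitted: 0.0158, blocked: 98.4%
```

This reproduces the ~98.5% figure quoted above (10^−1.8 ≈ 1/64 transmitted).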
Bayer Filters
CMOS and CCD detectors are designed to be sensitive to light over the entire visible spectrum. How,
then, do the CMOS detectors on our camera phones measure color? The answer is through the use of
Bayer filters. These filters are grids of RGB filters (see Figure 1) that are laid over the pixels of the sensor
to create color images, through an interpolation of the pixels. You’ll notice that the Bayer filter has two
green pixels for every red and blue pixel. This is because the eye is most sensitive to green light, which is
where the Sun’s emission peaks in the visible spectrum.
Figure 1: Bayer Filter Pattern
Read Noise and Bias
Introduction
The first characteristic we will discuss is the read noise of the detector. As the name suggests, this is a
source of error in our measurements associated with how the electronics read out the signal from the
photon wells. Even in the absence of a signal, and with a sufficiently cooled detector where dark
current is negligible (see below), there will still be noise associated with the electronics of the detector.
This is the fundamental limit in the noise of the detector and companies go to great lengths to minimize
this noise, with some detectors having read noise much less than one electron – therefore allowing a
very accurate measurement of just how many electrons are in your well. The bias level is the initial
signal on the chip and is designed to keep the number of counts slightly above zero – even for a zero
second exposure.
In this section we will look at how to correct for the bias signal in our measurements and how to
measure the read noise of the detector – an important source of noise.
Collecting the Data
1) Make sure the lens cap is on the detector
2) To ensure that the temperature will remain constant – set the temperature to a value near the
ambient temperature using the “CIS Calibration” button on the top ribbon. (It looks like a
standard settings gear icon)
3) Take two “zero” second exposures by setting the exposure time field on the “CIS Snapshot
Control” tab to 0 ms.
4) Save the file using the naming convention explained above.
5) Now take another 100 “zero” second exposures.
6) Also take one exposure at a few longer exposure times (say 1s, 5s, 10s, and 25s).
Analyzing the Data
The read noise is measured through the root mean square (or a measure of the noise) in a zero second
exposure. If the chip can be assumed uniform it can be determined simply by finding the RMS value in a
uniform region on the chip. However, because we cannot truly measure a zero second exposure, we
can approximate one by subtracting two separate very short (or “zero” second) exposures, and find the
RMS value of a region of this subtracted frame. This is called the “Two Bias” method.
1) Measure the bias by finding the average of a characteristic region of the chip in one of your bias
frames. Measure the read noise by looking at the RMS value of the difference of two bias
frames in the same region. Divide this RMS value by √2 to get the value of the read noise. Why
do we divide by √2? What is the problem with this method (spatial sampling) of determining the
bias and read noise? Report your values for the bias and read noise.
We determined the bias here through spatial sampling achieving many realizations of the noise by
averaging over an area of the chip. Instead of using many pixels of a uniform region to get many
measurements of the bias and read noise (spatial sampling), we can instead use many different
exposures to get many samples of each pixel to generate a master bias map and determine the read
noise for each pixel (temporal sampling). What are the advantages/disadvantages of this method?
Which method, spatial or temporal sampling, is more appropriate for a CCD detector? Why?
We will now look at the temporal method of determining the bias and read noise.
2) Load the 100 bias matrices into Matlab and find the average and standard deviation for each
pixel. This is now a bias and read noise map. Save these maps. Do you notice any difference
about the bias and read noise in these temporally averaged images? Is spatial sampling
justified?
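Step 2 amounts to collapsing the frame stack along its time axis. A hedged NumPy sketch (simulated frames stand in for the real 100 biases, with made-up bias and noise values):

```python
import numpy as np

# Simulated stack standing in for the 100 bias frames; shape is
# (n_frames, rows, cols), as if each saved file were one slice.
rng = np.random.default_rng(1)
frames = 500.0 + rng.normal(0, 5.0, (100, 64, 64))

bias_map = frames.mean(axis=0)                  # per-pixel bias
readnoise_map = frames.std(axis=0, ddof=1)      # per-pixel read noise

print(bias_map.shape, readnoise_map.mean())     # (64, 64), close to 5
```

In Matlab the equivalent is mean and std along the third dimension of a 3-D array of frames.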
Now that we have measured the bias, we can subtract it from future measurements to remove it from
the signal, understanding that the uncertainty in that subtraction is our read noise. Finally, how should
the bias and read noise vary in time? Why?
Dark Current
Introduction
Another important source of noise that is inherent to any detector is the dark current. Even without a
signal present (i.e. without any flux on the detector) there can still be an accumulation of charge that is
the result of thermal fluctuations generating charges on the chip. This is called the dark current. Like
with the bias, the dark current can be subtracted, however, there is still an uncertainty in the signal,
which results in additional noise on the chip.
Because the dark current is statistically random, the distribution of the thermal signal is Poisson in
nature. You will recall from class that this means the uncertainty in the signal is simply the square root
of the signal itself. This uncertainty, however, is an additional source of noise in our measurement. The
dark current should, in theory, be constant, and the accumulated dark signal should therefore grow linearly in time. How should the noise in
the dark current vary in time then? Because of the Poisson nature of the dark current, it is not necessary
to take spatial or temporal averages to determine the uncertainty, as with the bias/read noise. Note,
however, that multiple exposures allow for an additional √𝑁𝑒π‘₯𝑝 reduction in the noise, so multiple
exposures are always advantageous (usually a master dark is created, like the master bias, which is the
average of several darks). Dark current is measured in [e-/s] and can be determined from the slope of a
line fit through a set of zero flux exposures. The dark current also need not be uniform across the chip,
as we will see, so measuring it accurately for each pixel is important to characterizing the CMOS
detector. In theory, because the dark current is linear, measuring it at one time should tell us the dark
current for any exposure. However, because of drift and instabilities in the electronics it is standard
practice to take a dark for each exposure.
Cooling the detector can help to mitigate the noise of the dark current by reducing the amount of
thermal fluctuations that generate charge on the chip. The CMOS detector we are using is equipped
with a simple thermoelectric cooler (TEC) that allows the detector to be cooled below the
ambient temperature. Ideally, we would decrease the temperature as much as possible – some CCDs
are equipped with TECs that allow for temperatures as low as -50C, which result in dark currents of
< 1e-/sec. How do you think the dark current will vary as a function of temperature?
In this section we will measure the dark current as a function of time and temperature and determine
the noise that it introduces into our measurements.
Collecting the Data
1) Make sure that the lens cap is on the detector
2) To ensure that the temperature will remain constant – set the temperature to a value near the
ambient temperature using the “CIS Calibration” button on the top ribbon. (It looks like a
standard settings gear icon)
3) Measure a bias frame.
4) Measure the dark current at several integration times, roughly doubling the exposure time, until
either you approach saturation or reach ~500s integrations (which would be an extremely long
integration time if any flux – even a very small one – were incident on the detector). Obtain a
minimum of 10 exposures.
5) Now keeping a constant integration time of 5s – vary the temperature of the detector in 1.5C
steps to 12C below ambient temperature. (This is the limit of the capabilities of the TEC)
Analyzing the Data
1) Load the varying exposure times for the set temperature into an array in Matlab. Subtract the
bias from each frame in the array.
2) Make a plot for a few pixels of the time varying dark current. Describe the similarities,
differences, and features of the curves you see.
3) Write a for loop to go through your array, and using the LSCOV function in Matlab, determine
the linear fit for each pixel and obtain the slope and offset, as well as the associated errors. How
could we have reduced the error in these measurements?
4) Make an image of the slope and offset of the dark current curve for the chip. Describe what you
notice about the dark current across the array. Are there any peculiarities, interesting features,
or questionable values? What is the average dark current you measure? What do you notice
about the offset? Why is this?
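Steps 3–4 can be prototyped without a pixel-by-pixel loop: NumPy's polyfit accepts a stacked y, so the per-pixel line fit vectorizes. This is an illustrative Python sketch with simulated darks (LSCOV in Matlab would additionally return per-pixel errors from the fit covariance):

```python
import numpy as np

# Simulated bias-subtracted dark stack, shape (n_times, rows, cols),
# built from a made-up dark-current map plus Gaussian noise.
rng = np.random.default_rng(2)
times = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 500], float)
true_dark = rng.uniform(0.1, 2.0, (32, 32))               # DN/s per pixel
stack = times[:, None, None] * true_dark + rng.normal(0, 3.0, (10, 32, 32))

y = stack.reshape(times.size, -1)          # (n_times, n_pixels)
coeffs = np.polyfit(times, y, 1)           # row 0: slopes, row 1: offsets
slope_map = coeffs[0].reshape(32, 32)      # dark-current map [DN/s]
offset_map = coeffs[1].reshape(32, 32)     # should sit near zero

print(np.abs(slope_map - true_dark).max())
```

With real data the slope map is your dark current (in DN/s until the gain section converts it) and the offset map should hover near zero once the bias is removed.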
We will now look at the dark current as a function of temperature, to see if your guess was correct. We
won’t worry about the error in the measurements here, since we are only curious in the functional form.
5) Load your 8 varying temperature measurements into an array in Matlab. Subtract the bias for
each frame in the array.
6) Again make a plot for a few pixels of our temperature varying dark current. Do you notice a
trend? Describe the similarities, differences, and features of the curves you see.
7) [Optional] If you do not notice a trend, try loading the CCD data supplied. This data covers a
much larger temperature range. Plot a few pixels of the temperature vs dark current. Now do
you see a trend? Describe the similarities, differences, and features of the curves you see.
From this curve, you can see why we try to cool the detector to as low a temperature as possible and
why for extremely low temperatures the dark current can be extremely small, which is necessary to not
be dominated by the dark current noise when taking longer exposures of faint objects.
Gain
Introduction
Did you notice a problem with the value you just reported for the dark current? The value you
measured was the slope of a line that was counts on the y-axis and time on the x-axis – so it had units of
[DN/s]. It was stated, however, that dark current is measured in [e-/s]. What we were missing before
was the gain of the detector – that is how many electrons it takes to register a count on the read out
electronics. The gain, therefore, has units of [e-/DN].
How is the gain chosen? There are two numbers to consider when setting the gain of a detector. The
first is the saturation of the potential well. Because the potential well has a set physical size, there is a
limit to the number of electrons it is able to hold. Larger pixels allow for larger potential wells. We will
discuss later what happens when we overfill these potential wells. The second number comes from the
conversion of the analog number of electrons to the digital voltage that is read. This is done by the
analog-to-digital converter. The resolution of the converter is defined by a bit number. That is, a 16-bit
ADC can convert a signal to 2¹⁶ = 65,536 distinct possible values. The gain is chosen such that the largest
number of electrons that a potential well can hold corresponds to the largest possible ADC value. What
would the gain be for a detector with a full well of 42840 electrons, read out with a 15 bit ADC?
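For intuition (with made-up numbers, deliberately not the ones in the question above), matching the full well to the ADC range is a one-line calculation:

```python
# Illustrative numbers only (not the ones in the question above):
# choose the gain that maps the full well onto the full ADC range.

def matched_gain(full_well_e, adc_bits):
    """Gain [e-/DN] so that a full well corresponds to the max ADC count."""
    adc_levels = 2 ** adc_bits
    return full_well_e / adc_levels

print(matched_gain(65536, 16))   # 1.0 e-/DN
```

The same arithmetic, with the full well and bit depth from the question, gives the answer you are asked to report.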
In this section we will determine the gain for our CMOS detector. We can do this through both spatial
and temporal methods, as we did with the bias and read noise. The explanation provided in the analysis
below is only an overview. For a more detailed discussion please see the Mirametrics technical note
found at: http://www.mirametrics.com/tech_note_ccdgain.htm.
Collecting the Data
1) Cool the detector to as low a temperature as possible. This is to minimize the effects of the dark
current.
2) To determine the gain we need exposures at ~10 different flux levels, but want to keep
exposure times as short as possible to mitigate the dark current. Choose a flux level such that
the longest integration time is 10 seconds and reaches, on average, 50% full well.
3) Measure a bias frame as well as two different exposures at this flux level.
4) Repeat step 3 for nine additional flux levels, roughly doubling the flux each time. This means
your final integration time should be about 10 ms to reach ~50% full well.
Analyze the Data
Before analyzing the data, we will briefly discuss how to determine the gain from the frames we just
acquired, noting some caveats along the way. Again, for a fuller description, please see the Mirametrics
technical note given above.
To understand how to measure the gain of the CMOS, we must first note an important property of the
nature of light. The arrival of photons at the detector, like the dark current, follows a Poisson
distribution. That is, the arrival rate of photons under constant illumination is not uniform and will
have an error that is the square root of the number of photons being received. This directly translates
to a noise in the signal that is simply the square root of the signal. The signal we measure is counts,
which relates to the number of electrons through the gain. In the form of an equation we have:
S_e = g·S_c
N_e = g·N_c
where “S” is the signal and “N” is the noise, and the subscript “c” corresponds to counts, and “e”
corresponds to electrons. As stated, the noise is simply the square root of the signal. Therefore:
N_c = (1/g)·√(S_e) = √(S_c/g)
Thus, if we look at the slope of a signal-variance plot (variance being the square of the noise) for several
flux levels we find:
S_c/σ_c² = S_c/N_c² = [(1/g)·S_e] / [(1/g)²·S_e] = g
Thus, the signal-variance plot is simply a straight line and the slope of our plot is simply the gain of our
detector! We would be done, therefore, if our chip was perfectly uniform and we had no additional
noise terms. Unfortunately, this is not the case. We have other sources of noise such as read noise,
dark current, and fixed pattern noise that we must account for. These sources of noise actually lead to a
non-linearity in our signal-variance plot and, therefore, an underestimation of the gain (see the technical
note). Let us briefly, though, look at these noise contributions and see how we can correct them.
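The S_c/σ_c² = g relation is easy to verify numerically before touching real data. A small Monte Carlo sketch with an assumed gain and pure photon noise (none of the extra noise terms discussed below):

```python
import numpy as np

# Monte Carlo check of S_c / sigma_c^2 = g with an assumed gain and
# pure photon noise (no read noise, dark current, or pattern noise).
rng = np.random.default_rng(3)
gain = 2.5                                    # e-/DN, assumed
recovered = []
for mean_e in (1e3, 4e3, 1.6e4, 6.4e4):
    electrons = rng.poisson(mean_e, 200_000)  # Poisson electron counts
    counts = electrons / gain                 # S_c = S_e / g
    recovered.append(counts.mean() / counts.var())
print(np.mean(recovered))   # close to 2.5
```

Adding read noise or pattern noise to the simulation reproduces exactly the biases described next.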
Bias and Read Noise
For each flux level we must subtract the bias. This is because the bias contributes to the signal and not
the noise. (Remember although we calculate read noise from bias frames it is not the noise in the bias,
but rather the noise in the electronics.) Without subtracting the bias, we would therefore preferentially
add signal at low flux levels, leading to an underestimation in gain. Why is this an underestimation?
The read noise could in theory be measured as the offset of our signal-variance plot (i.e. the noise at
zero signal). However, errors in the estimation of the slope are large, and therefore the method used
previously is preferred. Fortunately, the read noise is a constant offset and does not affect the slope of
the line.
Dark Current
As you found in the previous section, the noise introduced by the dark current depends on the
integration time and temperature of the detector. We attempt to keep exposure times to a minimum
and the detector as cool as possible to make the dark current as low as possible. If integration times are
longer, or our detector warmer, we have to subtract the dark current from our signal before continuing
with our analysis. We will always subtract the dark current from our measurements. Write an equation
for our three sources of noise so far (assuming we subtract the bias). Is our signal-variance curve still
linear?
Fixed Pattern Noise
Fixed pattern noise results from the fact that neighboring pixels will vary in sensitivity with respect to
one another. This is known as the responsivity of the chip and will be something we will look at later.
How does this variation in sensitivity affect our noise? The fact that one pixel may measure 100 counts
and a neighboring pixel may measure 110 counts for the same incident flux adds noise that is not
captured in the simple photon noise consideration. This additional variance amongst the pixels leads to
greater noise for a given signal, and therefore leads to an underestimation of the gain (see technical
note). To correct for this flat field variation, we can subtract two frames at the same exposure level to
cancel out the flat field contributions and allow for a true estimate of the gain. This is similar logic to
how we calculated the read noise, however, there will be a greater variance now due to all our noise
sources. Remember, a difference image does not remove the noise – it only zeros the mean value of the
image.
We are now ready to analyze our data and to get an estimate of the gain of our chip. We can do this
through both spatial and temporal methods. Remember, the spatial method relies on getting an
accurate measure of the noise by averaging many pixels. The temporal method relies on getting an
accurate measure of the noise for each pixel by averaging many frames.
Method 1:
1) Load 2 frames of a given flux level into Matlab and subtract the bias and provided dark current
from each of them. Now that we finally have some flux on our detector – do you notice any
immediate peculiarities in the detector? What might these be?
2) Choose a uniform region (at least 100x100 pixels) on the chip and calculate the average signal in
that region for both frames. Is your choice justified? Why or why not?
3) Using your average signals from Step 2, normalize your two frames. Do this by finding the
normalization ratio, r_norm = S₁/S₂, and multiplying Frame 2 by r_norm.
4) Subtract your normalized frames to correct for the flat pattern variations. Calculate the
variance of this new image in the same region as that used in Step 2. Remember to divide the
variance by 2 (differencing two frames doubles the variance).
5) Using your results from Step 2 and Step 4, place a point on your signal-variance plot. Repeat for
the remaining flux levels. Once finished, save your signal-variance plot.
6) Using the LSCOV function determine the gain in the region of your chip that you chose. Report
this value. What is the error in your gain? Is this gain justified for the whole chip? Why or why
not?
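Method 1, steps 1–6, can be condensed into one hedged Python sketch (simulated Poisson frames stand in for the real files, the gain and bias level are assumed, and in Matlab the final fit would use LSCOV):

```python
import numpy as np

def signal_variance_point(f1, f2, bias, dark, region):
    """One (signal, variance) point from two frames at the same flux."""
    a = f1 - bias - dark
    b = f2 - bias - dark
    s1, s2 = a[region].mean(), b[region].mean()
    diff = a - b * (s1 / s2)               # normalize, then difference
    return s1, diff[region].var() / 2.0    # differencing doubles variance

# Simulated data with an assumed gain of 2 e-/DN and flat dark/bias:
rng = np.random.default_rng(4)
gain, bias_lvl = 2.0, 100.0
bias = np.full((100, 100), bias_lvl)
dark = np.zeros((100, 100))
region = np.s_[10:90, 10:90]
pts = [signal_variance_point(
           bias_lvl + rng.poisson(m, (100, 100)) / gain,
           bias_lvl + rng.poisson(m, (100, 100)) / gain,
           bias, dark, region)
       for m in (2e3, 8e3, 3.2e4)]
sig, var = np.array(pts).T
slope = np.polyfit(var, sig, 1)[0]
print(slope)   # recovers the assumed gain, near 2
```

Note the convention: since S_c/σ_c² = g, fitting signal (y) against variance (x) yields the gain directly; the inverse fit would give 1/g.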
Method 1 used spatial averaging to obtain an estimate for the gain and assumes the same gain for the
entire array. However, we could also use an average and variance of many exposures to get an estimate
of the gain for each pixel. Because looking at the whole chip would take many gigabytes of data for
each flux level, we have taken 100 exposures of a small area (100x100 pixels) of the CMOS detector.
(Remember this is one advantage of a CMOS over a CCD.)
Method 2:
1) Load the 100 images of a given flux level into Matlab and subtract the bias and dark from each
of them.
2) Using a 100x100 pixel section, normalize your frames w.r.t. one another. (Refer to Steps 2 & 3 of
Method 1) What does this assume about the region we have provided?
3) Take the average and standard deviation of your 100 frames.
4) Using your results from Step 3, begin populating signal and variance arrays for each
pixel. Repeat for the remaining flux levels.
5) Write a for loop to use the LSCOV function to determine the gain for each pixel, using your signal
and variance arrays, in the region supplied. Make a gain and error map for your region. Is your
error greater or less using this new method? What do you think now of using a single gain for
the entire chip?
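A per-pixel version of the fit (Method 2) using the temporal mean and variance; a closed-form least-squares slope avoids an explicit loop over pixels. Simulated stacks with an assumed uniform gain stand in for the provided 100-frame files:

```python
import numpy as np

rng = np.random.default_rng(5)
true_gain = 2.0                              # e-/DN, assumed
levels = (2e3, 8e3, 3.2e4)                   # mean electrons per flux level

sig = np.empty((len(levels), 100, 100))      # per-pixel temporal mean
var = np.empty_like(sig)                     # per-pixel temporal variance
for k, mean_e in enumerate(levels):
    stack = rng.poisson(mean_e, (100, 100, 100)) / true_gain
    sig[k] = stack.mean(axis=0)
    var[k] = stack.var(axis=0, ddof=1)

# Closed-form least-squares slope of signal (y) vs variance (x), per pixel:
xm, ym = var.mean(axis=0), sig.mean(axis=0)
gain_map = ((var - xm) * (sig - ym)).sum(axis=0) / ((var - xm) ** 2).sum(axis=0)
print(gain_map.shape, gain_map.mean())       # a (100, 100) map, near 2
```

With only 100 frames the per-pixel variance estimates are themselves noisy, which is why the per-pixel gains scatter more than the Method 1 value; averaging the map recovers the assumed gain.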
N.B. Step 2 still required a spatial average across the array to correct for any variations in the source
intensity over the course of the 100 measurements, but it is not using spatial sampling to find the
signal/variance as we did in Method 1.
Make new read noise and dark current maps that are in the correct units of [e-] and [e-/s]. Make these
maps incorporating your two different gain calculations. That is, create a base map using the single gain
calculated in Method 1, and update the region where we know the gain specifically for each pixel. Do
you notice any difference between the two regions in your maps (i.e. can you measure if the gains differ
more than the noise in the gains you measured)?
Responsivity
Introduction
As we mentioned in the previous section – the pixels have a variability in their sensitivity (or
responsivity) that results in a flat field correction that must be considered. The problem with differencing
two separate exposures is that we remove all interesting signal, leaving only the noise. Instead, it
would be better to have a responsivity array to multiply an image by, normalizing out any flat field
variations and thus maintaining our signal of interest. But how is this responsivity measured?
To answer this, we must first ask what responsivity is. Ultimately, it is a measure of how many electrons
are generated for a given number of photons on our detector. Responsivity has units of [e-/ph] and is
ultimately a measure of the combined effects of the quantum efficiency (QE) of our detector (i.e. how
efficiently are photons turned into countable electrons – ideally this is one-to-one) along with other
considerations such as the transmission function of our ND filters and any lenses in the system. The
transmission of filters and QE are ultimately a function of wavelength, but because our sensor integrates
the signal over the entire optical spectral region, it is difficult to determine their wavelength
dependence. For this reason, we will assume a “grey” response for our filters and QE (i.e. independent
of wavelength). This is a good approximation for the ND filters, which are grey by design, and an OK
approximation for QE and other transmission terms, like from the lens. You can see the QE curve for our
detector in the provided documents with this lab. Ultimately, to determine the QE as a function of
wavelength would require a wavelength tunable light source to accurately vary the number of photons
at a particular wavelength to measure how our counts change in response to such variations in signal.
Can we measure the responsivity curve and get an estimate of the QE to compare to the one provided?
The answer is yes! Provided there is a calibrated light source, which we just so happen to have. So how
do we measure responsivity? Let’s look at a dimensional analysis. As we mentioned, the units of
responsivity are:
[e⁻/Λ]
where e⁻ is the number of electrons and Λ is the number of photons. If we expand this, we can express
it in units we are more familiar with:
[e⁻/Λ] = [Λ/s]⁻¹ · [e⁻/s]
The units of [Λ/s] are a measure of flux. More flux means more photons per unit time. The units of [e-/s]
we have already seen. It is the slope of the counts (up to a factor of the gain) vs integration time curve
for a given flux. For example, for the dark current we measured, the flux was 0. Therefore if we
measure the slope of the counts vs integration times for a variety of fluxes we can measure the
responsivity.
The question then becomes – how many photons are incident on our detector? We can then divide our
measured counts/sec by this photons/sec in order to obtain the responsivity of our detector. The relationship
between counts and photons is given by the camera equation:
πœ†2
𝑒 − = 𝐴Ωtg ∫ 𝑅(πœ†)𝑆(πœ†)π‘‘πœ†
πœ†1
Let’s parse this equation piece by piece to get a better understanding. First there is A, the collecting
area. For our camera, this is the area of the lens. For a telescope, it is the area of the
primary mirror. It is the amount of collecting area that will be focused onto the detector. Ω is the solid
angle of a pixel. That is, although the dish collects light from a large area, each individual pixel is only
seeing a small portion of that entire field of view. t is simply our integration time and g the gain. The
integral is to integrate the light from the spectral region over which our detector is sensitive. For CCDs
and CMOSs that is the entire visible band. For a spectrometer, the spectral region of a given pixel can be
very small and depends on the spectrometer’s spectral resolution. R is the responsivity of our detector,
while S is the source function (i.e. how does the intensity of light vary as a function of wavelength).
Let’s look at these last two terms in a bit more detail. More explicitly R can be defined as:
𝑅(πœ†) = 𝑇(πœ†)𝑄𝐸(πœ†)
T(λ) is the transmission of the “system”. This can account for the transmission of the Earth’s
atmosphere, for each lens, for each filter, etc. Transmissions of individual elements are always
multiplicative. For example, we are using 3 ND filters on our setup. The first transmits ½ of the incident
light, the second ¼ of that remainder, and the final ⅛ of what remains after the first two, giving a
total transmission of 1/64. QE(λ) is the quantum efficiency (explained above) of our detector, what
we hope to measure. Because we assume both of these to be grey we can reduce our equation to:
R = T · QE
and pull both terms out of the integral.
So what is S(λ)? S(λ) is simply the source term – it tells us what our incident flux is. This in turn tells us
an amount of incident energy, which we can relate to the number of photons incident on our detector.
A standard source function used in astronomy is blackbody emission given by Planck’s Law:
𝐡(πœ†, 𝑇) =
2β„Žπ‘ 2
πœ†5
1
β„Žπ‘
𝑒 πœ†π‘˜π‘‡
−1
[
π‘Š
]
π‘š2 π‘ π‘Ÿπœ‡π‘š
Using dimensional analysis, confirm that the units for this source function are correct. This is just an
example source function. We will talk more about our source in a moment. What we are interested in,
however, is the total number of photons. Remember that the energy of a photon is given by:
E = hc/λ   [J/Λ]
So dividing our source function by this we can get:
e⁻ = AΩtg · T · QE ∫_{λ1}^{λ2} (λ/hc) S(λ) dλ
So what is our source function? We will be using a calibrated Labsphere to make our measurements.
The Labsphere’s spectral curve has been measured, so we know exactly how much power is being output
as a function of wavelength. This curve is provided for you in the data package. The nice thing about
this Labsphere is that the wavelength dependence stays the same regardless of flux level, therefore the
integral in the above equation is constant, up to a normalization parameter (referenced to the max flux)
to account for the variations in the flux level. Therefore, we can solve for QE as:
QE = e⁻ / (AΩtg · T · ∫_{λ1}^{λ2} (λ/hc) S(λ) dλ) = [DN/s] / (AΩ · T · F · (F̄/F₀)) ;  F̄/F₀ = 68.48 [Λ/(s·e⁻)]
where 𝐹 is simply the measured flux and 𝐹̅ is the constant integral of the spectral curve normalized to
the maximum flux, πΉπ‘œ . For the transmission, in addition to the ND filter losses, assume an additional 70%
loss from the other optics, like lens transmission. Pause for a moment and take the time to make sure
you understand the camera equation. What are the units of 𝐹, 𝐹̅ , & πΉπ‘œ ? Unfortunately, our Labsphere
was calibrated for the Near-IR, so we will have to make some modifications to make it work in the visible,
which we will explain below.
We are now ready to analyze the data! Our general strategy will be to get a DN/s curve for a variety of
fluxes and to plot the slopes of these curves as a function of flux to get a responsivity curve. The slope
of this responsivity curve is the QE we are interested in measuring.
Collect the Data
1) We will want to collect data at ~10 different flux levels at ~10 integration times each. Determine
10 reasonable flux levels (i.e. a reasonable saturation time for the highest and lowest fluxes) and
the ten integration times at each flux level such that you sample up to ~90% full well in the
maximum integration time.
2) Collect a dark frame (make sure the lens cap is on!) for each exposure.
Analyze the Data
Create a modified blackbody curve [Optional – this will be provided, but could be fun to try!]
1) Load the spectral calibration curve provided in the data package. Note the provided
flux values are in [mW/(cm²·sr·μm)].
2) We want to fit a modified blackbody curve to our calibration curve. But what temperature
should we use? Fit a parabola to the region of peak intensity [.7um, 1um] – the vertex is the
peak wavelength. Use Wien’s Law to determine the temperature of the blackbody:
π‘‡πœ†π‘šπ‘Žπ‘₯ = 2.897 ∗ 10−3 [π‘š ∗ 𝐾]
3) Write a function to make a blackbody curve as a function of wavelength and temperature, and
create one that goes from 0 to 2.5 μm with the temperature found in Step 2.
4) Evaluate your blackbody function at the same wavelength values as your spectral calibration
curve. What is the main difference between these two curves? Evaluate the ratio of the spectral
curve to blackbody curve by doing an element-by-element division.
5) Create a modified blackbody curve by multiplying the ratio curve you found in Step 4 with the
blackbody curve you created in Step 3. This will scale the blackbody curve to match the
functional form and intensity of our calibrated curve. Make a plot of the spectral curve, original
blackbody, modified blackbody, and peak fitting parabola. Scale them as needed to fit
reasonably.
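One possible shape for the blackbody helper from Step 3, in Python rather than Matlab (constants are rounded SI values; the 3000 K temperature is made up, since you will determine yours from Wien's Law):

```python
import numpy as np

def planck(wl_um, T):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr um)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants, rounded
    wl = wl_um * 1e-6                          # microns -> meters
    B = 2 * h * c**2 / wl**5 / np.expm1(h * c / (wl * k * T))  # per meter
    return B * 1e-6                            # per micron

# Start just above zero to avoid dividing by zero at wl = 0.
wl = np.linspace(0.3, 2.5, 200)
curve = planck(wl, 3000.0)                     # temperature is made up
print(wl[np.argmax(curve)])   # peak near 2.897e-3/3000 m ~ 0.97 um (Wien)
```

The per-meter to per-micron conversion at the end keeps the output in the same units as the calibration curve.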
Calculate Responsivity/QE
1) Load your flux frames into an array in Matlab (a 4 dimensional array works well: one axis for
flux, one for time, and two for the spatial dimensions).
2) Subtract the dark and bias from each frame
3) Using your signal, dark current, and read noise, create an error array for your 100 frames.
Remember that error adds in quadrature.
4) Using LSCOV fit a line to the set of integration time images for each flux level (your error will
come from Step 3). Save these slopes and the error associated with them.
5) Now run the same LSCOV routine using the slopes and their associated errors from Step 4 as
your y parameter and the number of photons/sec (determined from the camera equation) as
your x parameter. Your error in x is simply photon noise (assuming we know all other terms of
the camera equation perfectly, which is not the best assumption). The slope LSCOV will give you
is a measure of the QE. Make a map of the QE. Is your QE reasonable? How large is the error in
QE? What do you think is our largest source of error?
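The final fit in Step 5 can be sketched as a weighted least squares of the DN/s slopes against photons/s, which is the role LSCOV plays in Matlab. Data here are simulated with an assumed responsivity and made-up errors:

```python
import numpy as np

# Weighted least squares of DN/s slopes against photons/s -- the role
# LSCOV plays in Matlab. Data simulated with an assumed responsivity;
# the 1-sigma errors weight the design matrix.
rng = np.random.default_rng(7)
resp = 0.45                                          # assumed slope
phot_rate = np.array([1, 2, 4, 8, 16], float) * 1e6  # photons/s
err = 0.02 * resp * phot_rate                        # made-up 1-sigma errors
dn_rate = resp * phot_rate + rng.normal(0, err)

A = np.column_stack([phot_rate, np.ones_like(phot_rate)])
w = 1.0 / err
coef, *_ = np.linalg.lstsq(A * w[:, None], dn_rate * w, rcond=None)
print(coef[0])   # close to the assumed 0.45
```

Scaling each row of the design matrix and the data vector by 1/σ is equivalent to passing a diagonal weight matrix to LSCOV; the recovered slope plays the role of the QE estimate.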