Post-SM4 Data Volume Estimates for HST

20 Mar 03
C. Biagetti, T. Brown, C. Cox, S. Friedman, T. Keyes, R. Kutina, A. Patterson, N. Reid, J.
Rhoads, A. Schultz, J. Scott, D. Soderblom, C. Townsley
Table of Contents

Introduction
Section 1   Current Data Volumes
Section 2   Post-SM4 Data Volume Estimates
Section 3   Current TDRS Scheduling Practice
Section 4   References
Appendix 1  Analysis of Engineering Overheads
Appendix 2  Data Recording Constraints and Prime/Parallel Scheduling Assumptions
Appendix 3  Cycle 11 high data volume ACS visits
Introduction
This document provides estimates for total HST downlinked science data volumes in the post-SM4 timeframe when the complement of HST science instruments (SIs) is expected to be ACS,
COS, NICMOS, STIS, and WFC3. For comparison and, in some cases as a basis of estimate for
the post-SM4 calculations, the statistics for SI usage and data volumes for Cycle 11 are given in
detail (section 1). The study predicts an average daily data volume following SMOV4 of 27 +/- 3
gbits (see the box below for the definition of units). The 6-gbit range on the daily average results
from the large uncertainty in the efficiency of scheduling parallel observations, especially in the
unprecedented presence of two large-format SIs such as ACS and WFC3. A detailed explication
of the assumptions and expectations that go into the derivation of this data volume estimate is
given in section 2. This expected increase over the current 16.8 gbits/day will stress the TDRS
scheduling process and will require an increase in the STScI workload in that area. This issue,
along with a description of the TDRS contact scheduling process, is addressed in section 3. The
downlinked volume of science data is typically 30% larger than the simple sum of the individual
SI data production. Appendix 1 provides an overview of the components that go into this so-called engineering overhead. The assumptions used in determining the amount of parallel
science, as well as an indication of some of the scheduling constraints, are addressed in
Appendix 2. Appendix 3 contains information that sheds light on the peak daily volumes that we
are currently experiencing in Cycle 11.
In the course of various derivations herein, the following definitions are used:
Mbit = 2^20 bits = 1.049 x 10^6 bits
Gbit = 2^30 bits = 1.074 x 10^9 bits
gbit = 10^9 bits
In order to maintain consistency with other studies, past and current, within the HST Project, we
express all our results in units of gbits (10^9 bits).
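As an illustration of these conventions, the following minimal Python sketch (ours, added for illustration; the original report contains no code) encodes the three units defined above and converts one Gbit to gbits. The constant names are our own.

    # Unit conventions used throughout this document (illustrative sketch only).
    MBIT = 2**20        # 1 Mbit = 1,048,576 bits
    GBIT = 2**30        # 1 Gbit = 1,073,741,824 bits
    GBIT_SI = 10**9     # 1 gbit = 1,000,000,000 bits (unit used for all results)

    def Gbits_to_gbits(x):
        """Convert a volume in Gbits (2^30 bits) to gbits (10^9 bits)."""
        return x * GBIT / GBIT_SI

    print(round(Gbits_to_gbits(1.0), 3))   # 1 Gbit = 1.074 gbits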
Section 1 Current Data Volumes
In this section, we seek to document the current Cycle 11 data volumes in the six-month time
frame following the completion of SMOV3B, i.e., from June to December 2002. The plots and
charts in this section show the recorded data volume through this representative fraction of the
cycle and are derived from the PASS MS (Mission Scheduler) product output files which list
every 'record' activity between the SIs and the SSRs (Solid State Recorders). The data volume
recorded includes the overhead of typically 30% that is occupied by fill data but which has to be
dumped through TDRS to the ground. (See App. 1 for an analysis of this overhead.) The type of
observation may be classified as main, upar, apar or intl according to the following scheme:
Main - pointed external science observation (typically GO primary science)
Upar - unattached parallel (internal observation not requiring external visibility; typically a calibration)
Apar - attached parallel or pure parallel (external observation that takes advantage of previously scheduled main science but uses a different instrument; typically GO parallel science)
Intl - interleaver (external observation fitted between previously scheduled main science; usually an Earth calibration)
Figure 1-1 plots the daily data volumes, in gigabits/day, for each science instrument over
the six-month period of June to December 2002. The dominance of ACS as the main
contributor to Cycle 11 data volume is apparent. The gap appearing around Day 332 is
the zero-gyro sunpoint safemode event of 28 Nov. 2002. The peaks in the daily volumes
are due almost exclusively to heavy scheduling of ACS main observations on those days.
The ACS proposals which are the primary contributors of these peaks are:
9425: The Great Observatories Origins Deep Survey: Imaging with ACS
9583: The Great Observatories Origins Deep Survey: Imaging with ACS
9500: The Evolution of Galaxy Structure from 10,000 Galaxies with 0.1<z<1.2
9075: Cosmological Parameters from Type Ia Supernovae at High Redshift
9700: 2002 Leonid Observations
From among these proposals, the largest contributors are the two GOODS proposals,
9425 and 9583, each of which consists of epochs of 16 closely packed ACS visits, with
each epoch recurring every six or seven weeks through May of 2003.
The STIS peak on day 242 results from the GO proposal 9078: Flares, Magnetic
Reconnections and Accretion Disk Viscosity. These were STIS MAMA TIMETAG
observations.
Appendix 3 provides further details on such peak data volume days. It contains a list of
the days in which the ACS data volume exceeded 15 gbits along with a list of the
contributing ACS proposals scheduled that day and their contributions to the total data
volume.
[Figure 1-1. MS Data Volume per Day by Instrument: daily data volume (Gbits) versus Day of Year (2002) for ACS, WFII, STIS, and NIC.]
Figure 1-2 presents the same data in stacked form, again depicting both the dominance of
ACS volumes and the large dispersion in daily amounts.
[Figure 1-2. MS Data Volume per Day by Instrument (stacked): daily data volume (Gbits) versus Day of Year (2002) for ACS, WFII, STIS, and NIC.]
Figure 1-3 represents the same data sorted by scheduling type, i.e., main science, attached
parallels, unattached parallels, and interleavers. It also contains a curve representing a
seven-day smoothed average of the daily volumes. Over the entire period, the daily
average is 16.8 gbits. The standard deviation is calculated to be 4.6 gbits.
Table 1-1 presents the overall six-month averages and confirms that the daily average is
16.8 gbits/day.
[Figure 1-3. Data Volume per Day by Scheduling Type: daily data volume (Gbits) versus Day of Year (2002) for Main, Upar, Apar, and Intl, with a seven-day smoothed average of the total.]
Type                     ACS      NIC     STIS     WFII     Total
apar                     2.543    0.574    0.901    1.185    5.203
intl                     0.016    0.000    0.000    0.340    0.356
main                     6.347    0.508    0.955    0.540    8.349
upar                     1.190    0.308    0.858    0.506    2.862
Sum of Cycle Average    10.096    1.389    2.714    2.571   16.771
Percent of total          60.2      8.3     16.2     15.3
Table 1-1 SI Data Volumes vs. Scheduling Type (gbits/day, average)
Figure 1-4 presents the average daily volumes by scheduling type, each of which is
depicted as a stacked column of individual SI contributions. The daily average of main
science slightly exceeds 8 gbits, and this is roughly equal to the combined total of
attached and unattached parallels.
[Figure 1-4. Average MS data volume per day by scheduling type (apar, intl, main, upar), shown as stacked columns of the individual SI contributions (ACS, NIC, STIS, WFII).]
Table 1-2 sorts the cycle averages per SI by main (prime), parallel, and calibration data
types (gbits/day, average). These terms represent proposal types and are used more
frequently when discussing the HST science program. With the exception of a small
amount of calibration observations, Main is equivalent to primary (or prime) science
observations (i.e., those which, when put on a schedule, dictate the telescope pointing).
Calibrations are principally unattached parallels, and parallel science is GO science
observations scheduled in parallel with other instruments as primes (attached parallels).
SI                  MAIN (PRIME)   PARALLEL   CALIBRATION   TOTAL
ACS                      6.5          2.6          1.1       10.2
NICMOS                   0.5          0.5          0.4        1.4
STIS                     0.9          0.9          0.9        2.7
WFPC2                    0.5          1.7          0.3        2.5
TOTAL                    8.4          5.7          2.7       16.8
Percent of Total        50.0         33.9         16.1
Table 1-2 SI daily data volumes by proposal type (gbits/day, average)
Section 2 Post-SM4 Data Volume Estimates
We now estimate post-SM4 daily data volumes by using a set of science and scheduling
assumptions for each SI. As in ref. 1, the WFC3 Data Volume Estimates, we assume that
WFC3 (which of course will have replaced WFPC2 in SM4) and ACS each consume 1/3
of the daily scheduled orbits for prime observing. COS, NICMOS, and STIS share the
remaining time for prime observing. Furthermore, WFC3 is assumed to be operating in
parallel an average of 7.1 orbits per day and ACS 4.6 orbits per day.
As mentioned in the introduction to this document, the estimate of the number of orbits of
parallel science that can feasibly be expected to schedule for the two large-format SIs is
the source of the largest uncertainty in the total data volume estimate. The assumptions
and approach which result in our estimate of parallel science are based on our current
cycle 11 experience in scheduling a large-format SI (ACS) both as prime and parallel.
They are also based on our understanding of the constraints imposed by the on-board
buffer-dump process for the transfer of science data from the SI buffers to Solid State
Recorder (SSR). These assumptions and constraints are addressed in detail in Appendix
2.
For health and safety reasons (mainly bright object protection), COS will be
prohibited from operating in parallel. The parallel contributions of STIS and NICMOS
are relatively minor compared to those of WFC3 and ACS and are derived by a simple
scaling of their current Cycle 11 behavior without further analysis.
These high-level scheduling assumptions are depicted graphically below. (The
scheduling assumptions for parallel science are provided in Appendix 2.)

PRIME ORBITS:
    WFC3: 5 orbits             ACS: 5 orbits              COS/NICMOS/STIS: 5 orbits

PARALLEL ORBITS (not including NICMOS & STIS):
    ACS: 2.5 orbits            WFC3 UV+IR: 2.5 orbits     WFC3/ACS => 1.5 gbits
    ACS: <1 orbit              WFC3 IR: <1 orbit          WFC3 IR: 2.5 orbits
[Some caveats with respect to scheduling parallels with COS as prime appear in section 2.5.]
WFC3 data volumes are derived by means of a bottom-up approach starting with DRM-based assumptions of exposure quantities and UV versus IR time allocations. COS also
uses a bottom-up approach that assumes typical and worst-case observing scenarios.
ACS, NICMOS, and STIS data volume estimates are based on assumed deltas to the
known Cycle 11 SI usage and data volumes.
The following subsections describe these assumptions and estimates in more detail, but
first we provide the baseline results of the analysis in Table 2-1, below.
SI            Prime    Parallel   Calibration   Total
ACS            4.85      2.75        1.10        8.70
COS            0.52      0.00        0.07        0.59
NICMOS         0.50      0.60        0.30        1.40
STIS           0.67      0.90        0.69        2.25
WFC3 UVIS      4.08      1.54        1.31        6.93
WFC3 IR        2.70      3.46        0.77        6.93
TOTAL         13.32      9.25        4.24       26.80

Table 2-1 Estimated Post-SM4 Data Volumes (gbits/day, average, incl. 30%
engineering overhead)
The total of 26.8 gbits/day includes the 30% engineering overhead and therefore
represents the total average data volume to be downlinked from the HST. The total is
dominated by the ACS and both channels of the WFC3. In these calculations, the
average number of scheduled orbits/day is taken to be 15.
2.1 Estimated science data volume for WFC3
Preliminary estimates of the WFC3 data rates were derived by Lisse et al. (Reference 1,
WFC3 ISR 2001-02, Data Volume Estimates for WFC3 Operations, by C. Lisse, R.
Henry, P. Knezek, and C. Hanley; see also Kutina's presentation at the WFC3 Pipeline CDR).
These estimates are based on 41 proposals from the WFC3 Design Reference Mission
(Knezek & Hanley, WFC3 ISR 2001-10).
The calculations are based on the following specifications and assumptions:
• each UVIS full-frame image (4140x4206 pixels) produces 0.259 Gbits
  - 4140 x 4206 x 16 [bits/px] / 2^30 [bits/Gbit] = 0.259 Gbits
• each IR full-frame image (1024x1024) produces 16 Mbits = 0.0156 Gbits
  - 1024 x 1024 [px] x 16 [bits/px] / 2^30 [bits/Gbit] = 0.016 Gbits
• on average, each IR exposure will include 10 frames = 160 Mbits = 0.156 Gbits
The total number of exposures is estimated by analyzing the requirements outlined in the
Design Reference Mission (Ref. 2), for UVIS and IR prime and parallel usage. Explicit
allowance is made for CR-split exposures at UVIS wavelengths (where appropriate), and
the Lisse et al calculations are extended to allow for realistic time overheads for data
storage. In the case of UVIS exposures, the minimum time between full-frame exposures
is set by the readout time of 100 seconds; the corresponding overhead for the near-infrared camera is less than 10 seconds. However, once the WFC3 data buffer fills, the
time between exposures is set by the SDF transfer rate. The net result is an effective
overhead of ~5.5 minutes for both IR and UVIS exposures with durations shorter than 6
minutes.
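As a cross-check of the per-frame numbers above, the short Python sketch below (ours, for illustration only) recomputes the UVIS and IR image volumes from the stated pixel formats and 16 bits per pixel; the function name is our own.

    # Per-image WFC3 data volumes from the stated detector formats (16 bits/pixel).
    BITS_PER_GBIT = 2**30

    def image_gbits(nx, ny, bits_per_px=16):
        """Data volume of one full-frame image, in Gbits (2^30 bits)."""
        return nx * ny * bits_per_px / BITS_PER_GBIT

    uvis_frame = image_gbits(4140, 4206)    # ~0.259 Gbits per UVIS full frame
    ir_frame = image_gbits(1024, 1024)      # ~0.016 Gbits per IR full frame
    ir_exposure = 10 * ir_frame             # ~0.156 Gbits for a typical 10-frame IR exposure
    print(round(uvis_frame, 3), round(ir_frame, 4), round(ir_exposure, 3))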
2.1.1 Estimated primary science data volume
As stated above, we assume that one-third of the Observatory’s primary observing time
will be allocated to WFC3.
WFC3 prime observing time is assumed to be allocated as follows:
• 60% of WFC3 primary orbits for UVIS
• 40% of WFC3 primary orbits for IR
The predicted exposure rates are 3.75 exposures/orbit for the UVIS channel and 6.2
exp/orbit for the IR.
Using 15, from above, as the average number of the Observatory’s prime observing orbits
per day, the WFC3 data volumes for primary observations are:
UVIS = 1/3 x 15 x 0.6 [orbits/day] x 3.75 [exp/orb] x 0.259 [Gbits/exp] = 2.91 Gbits/day
IR = 1/3 x 15 x 0.4 [orbits/day] x 6.2 [exp/orb] x 0.156 [Gbits/exp] = 1.94 Gbits/day
Therefore, the total data rate for primary science is 4.85 Gbits/day (not including
engineering data overhead).
2.1.2 Estimated parallel science data volume
WFC3 is assumed to operate in parallel an average of 7.1 out of its 10 non-prime orbits
each day, with the UVIS channel in operation 30% of the time and the IR for the
remainder.
Assuming 2 UVIS exposures/orbit and 2 16-frame IR exposures/orbit, we get:
UVIS: 0.3 x 7.1 [orbits/day] x 2 [exp/orb] x 0.259 [Gbits/exp] = 1.10 Gbits/day
IR:
0.7 x 7.1 [orbits/day] x 2 [exp/orb] x 16 [frames/exp] x 0.0156 [Gbits/frame] = 2.48 Gbits/day
The total data rate for parallel science is therefore 3.58 Gbits/day (without overhead).
2.1.3 Estimated calibration science data volume
Calibration exposures (during Earth occultation) are predicted to require ~3.6 UVIS
exposures/day and 2.2 16-frame IR exposures/day. Carrying out the multiplications:
UVIS: 3.6 [exp/day] x 0.259 [Gbits/exp] = 0.932 Gbits/day
IR:
2.2 [exp/day] x 16 [frames/exp] x 0.0156 [Gbits/frame] = 0.549 Gbits/day
The total data rate for calibration data is therefore 1.48 Gbits/day (without overhead).
2.1.4 Total estimated WFC3 data volume
After multiplication of all the foregoing results by 1.3 to account for the typical
engineering overhead and by 1.074 to convert Gbits (2^30 bits) to gbits (10^9 bits), the total WFC3
predicted data volumes for each channel and each observation type are given in the
following table.
WFC3        Prime         Parallel      Calibration   Total
Channel     (gbits/day)   (gbits/day)   (gbits/day)   (gbits/day)
UVIS            4.08          1.54          1.31          6.93
IR              2.70          3.46          0.77          6.93
Total           6.78          5.00          2.08         13.86
These estimates are entered in Table 2-1, above, as the WFC3 contribution to the average
daily data volume for the entire Observatory following SM4.
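The WFC3 entries in the table can be reproduced from the assumptions of sections 2.1.1-2.1.3. The Python sketch below (ours, for illustration; it is not part of the original derivation) simply repeats that arithmetic, applying the 1.3 engineering overhead and the 1.074 Gbit-to-gbit conversion.

    # Reproduce the WFC3 daily-volume table from the stated assumptions.
    UVIS_EXP, IR_FRAME = 0.259, 0.0156     # Gbits per UVIS exposure / per IR frame
    OVERHEAD, GB_TO_gb = 1.3, 1.074        # engineering overhead, 2^30 / 10^9

    prime_uvis = (15 / 3) * 0.6 * 3.75 * UVIS_EXP        # 2.91 Gbits/day (sec. 2.1.1)
    prime_ir = (15 / 3) * 0.4 * 6.2 * 10 * IR_FRAME      # ~1.94 Gbits/day
    par_uvis = 0.3 * 7.1 * 2 * UVIS_EXP                  # 1.10 Gbits/day (sec. 2.1.2)
    par_ir = 0.7 * 7.1 * 2 * 16 * IR_FRAME               # 2.48 Gbits/day
    cal_uvis = 3.6 * UVIS_EXP                            # 0.93 Gbits/day (sec. 2.1.3)
    cal_ir = 2.2 * 16 * IR_FRAME                         # 0.55 Gbits/day

    rows = [("UVIS prime", prime_uvis), ("IR prime", prime_ir),
            ("UVIS parallel", par_uvis), ("IR parallel", par_ir),
            ("UVIS calibration", cal_uvis), ("IR calibration", cal_ir)]
    for name, gbits in rows:
        # Convert to downlinked gbits/day; matches the table above to within rounding.
        print(f"{name:17s} {gbits * OVERHEAD * GB_TO_gb:5.2f} gbits/day")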
2.2 Estimated science data volume for ACS
The first two lines of table 2-2 are average daily ACS data volumes from two periods of
Cycle 11. The first starts after completion of SMOV3B and runs from August 1st to the
end of the year. The second period starting November 1st includes the GOODS program
and probably corresponds more to the steady state in which ACS gets about 65% of the
orbits or almost 10 orbits /day. (This fraction comes from the accepted proposals data.)
Since our starting assumption, above, specified 1/3 of the scheduled orbits for ACS in the
post-SM4 era, the daily amounts for mains (primes) are simply one half of the prime
science amounts for the Nov.-Dec. interval in which ACS is scheduling in 2/3 of the
orbits.
The post-SM4 calibration level is assumed to remain essentially equivalent to the current
level, i.e., 1.1 gbits/day. The ACS parallel science is assumed to schedule the equivalent
of ~ 4.6 orbits/day. This quantity of orbits and the corresponding parallel science data
volume result from the parallel scheduling assumptions depicted in the table in this
section’s introduction. (The assumptions are described in greater detail in appendix 2.)
ACS gbits/day, Cycle 11 & Post-SM4

                            Prime Science   Parallel   Calibration
'02 Aug 1st to Dec 31st          6.6           2.6          1.2
'02 Nov 1st to Dec 31st          9.7           1.5          1.1
Post-SM4 Estimate                4.9           2.75         1.1

These entries include the 30% engineering overhead.
Table 2-2
2.3 NICMOS Data Volume Estimate
NICMOS was heavily requested and allocated for Cycle 7, ~33% of available orbits.
Due to the sublimation of the solid nitrogen cryogen, a special call for proposals (7N)
was issued. The combined NICMOS allocated science orbits for Cycle 7 and 7N reached
~42%.
With installation of the NICMOS cooling system (NCS) onboard HST during the March
2002 Servicing Mission (SM3B), NICMOS has been reactivated. The number of the
allocated orbits for Cycle 11 is ~9%, substantially less than what was allocated in the
previous Cycle 7 and 7N. The low percentage of the number of allocated orbits may be
due in part to the lack of any TAC approved NICMOS GO coronagraphy for Cycle 11.
During Cycle 7 and 7N, approximately 80% of NICMOS science data were obtained
from direct imaging, approximately 2% from polarimetry, approximately 6% from
spectroscopy, and approximately 2% from coronagraphy.
For Cycle 11, approximately 85% of the data were obtained from direct imaging,
approximately 1% from polarimetry, 1% from spectroscopy, and 5% from coronagraphy
(calibration program). Cycle 11 is not complete and the relative percentages may vary
slightly upon completion of the cycle.
For the post-SM4 timeframe, we cannot predict that the number of NICMOS
observations will double, given the number of Cycle 11 and 12 proposals submitted and
approved by the TAC. It seems that, for the foreseeable future, NICMOS will not reach the
levels of usage it obtained during Cycles 7 and 7N. Therefore, for our purposes, we will
assume a post-SM4 NICMOS data volume equivalent to the current Cycle 11 data
volume = 1.4 gbits/day (including engineering overhead), broken down as follows:
Primary       0.5
Parallel      0.6
Calibration   0.3
Total         1.4 gbits/day
2.4 STIS Data Volume Estimate
We anticipate that the average data volume for STIS will drop to 2.3 gbits/day after SM4,
compared to 2.7 gbits/day in Cycle 11. (All data volumes in this section contain the 30%
engineering overhead.)
Since the start of Cycle 11, the average data volume for STIS has been 0.95 gbits/day for
science, 0.90 gbits/day for pure parallels, and 0.86 gbits/day for calibration. These rates
include overhead.
After SM4, the calibrations will likely be reduced by 20%, as STIS calibration moves
into maintenance mode. The pure parallels will likely be unchanged. The main science
will likely be reduced by 30%. Scaling the Cycle 11 rates by these reductions gives 2.3
gbits/day.
To understand the estimated reduction in science, we need to break down the STIS usage
by detector and optical element, and look to see where other instruments
may take away science through improved capabilities. SM4 brings another competing
spectrograph (COS) onto HST, which will seriously impact NUV & FUV science but not
CCD science.
We begin by breaking down the STIS usage for Cycles 8 through 11; we exclude Cycle 7
because the usage of a new HST instrument is not representative of its usage in
subsequent cycles. By detector, the breakdown in exposure time during these cycles has
been 35% far-UV, 31% near-UV, and 34% CCD. The far-UV breakdown is 11% high-resolution echelle, 40% medium-resolution echelle, 42% first-order spectroscopy, and 7%
imaging. The near-UV breakdown is 17% high-resolution echelle, 35% medium-resolution echelle, 41% first-order spectroscopy, and 7% imaging.
We assume that the overall far-UV usage will be reduced by 60%. None of the high-resolution echelle spectroscopy will go to COS, because there is no equivalent mode on
COS. Approximately 60% of the medium-resolution echelle spectroscopy will go to
COS, but not all, because STIS has better resolution and less stringent bright object
limits. About 60% of the first-order spectroscopy will go to COS, but not all, because
STIS can observe extended objects in its long slits and it has less stringent bright object
limits. All imaging will go to the ACS/SBC, given its wider field and higher sensitivity.
The overall near-UV usage will be reduced by 35%. None of the high-resolution echelle
spectroscopy will go to COS, because there is no equivalent mode on COS.
Approximately 25% of the medium-resolution echelle spectroscopy will go to COS;
compared to the far-UV, the gains by COS are not as strong, because STIS has better
wavelength coverage, resolution, and bright object limits. Approximately 50% of the
first-order spectroscopy will go to COS, but not all, because STIS has better wavelength
coverage, resolution, and bright object limits. About 75% of the imaging will go to
WFC3 and ACS/HRC, but there are still some imaging regimes where an observer wins
with the STIS near-UV detector.
Given a 60% reduction in far-UV science, a 35% reduction in near-UV science, and no
reduction in CCD science, STIS usage will be reduced by ~30% compared to cycle 11.
This translates into 2.3 gbits/day (including the 30% engineering overhead) after SM4.
Note that this is a conservative estimate; the average daily rate on STIS is unlikely to
exceed 2.3 gbits/day, but it may be somewhat lower, depending upon how
enthusiastically the community embraces the new capabilities of COS.
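The two steps in this estimate are easy to verify numerically. The Python sketch below (ours, for illustration) first forms the usage-weighted reduction from the detector mix and the assumed losses to COS and the imagers, and then scales the Cycle 11 STIS daily rates accordingly; all numbers are taken from the text above.

    # STIS post-SM4 estimate from the Cycle 8-11 usage mix and the assumed reductions.
    usage = {"FUV": 0.35, "NUV": 0.31, "CCD": 0.34}   # fraction of STIS exposure time
    loss = {"FUV": 0.60, "NUV": 0.35, "CCD": 0.00}    # assumed fractional reduction
    reduction = sum(usage[k] * loss[k] for k in usage)    # ~0.32, i.e. the ~30% quoted above

    cycle11 = {"science": 0.95, "parallel": 0.90, "calibration": 0.86}  # gbits/day, incl. overhead
    post_sm4 = (cycle11["science"] * (1 - 0.30)        # main science reduced ~30%
                + cycle11["parallel"]                  # pure parallels unchanged
                + cycle11["calibration"] * (1 - 0.20)) # calibration reduced 20%
    print(round(reduction, 2), round(post_sm4, 2))     # ~0.32 and ~2.25 (~2.3) gbits/day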
2.5 COS Data Volume Estimate
Introduction and Background
The default data-taking mode for COS science will be TIME-TAG.
All internal
calibration data (WAVECALs, FLATs, and DARKs) will always be taken in TIME-TAG
mode. Objects near the brighter end of the COS observable range may be taken in
ACCUM mode. At the equivalent highest non-loss data rates, TIME-TAG will produce
more data volume per exposure than ACCUM, so in the following we consider only
TIME-TAG limiting cases.
All estimates that follow do NOT include the TIME-TAG recording "inefficiencies"
documented elsewhere. These inefficiencies amount to approximately 2.5 Mbytes (20
Mbits) of blank or fill data per COS TIME-TAG data dump [2.5 / (9 + 2.5) ~ 22%]; COS
ACCUM dumps will have an inefficiency of [2.5 / (16 + 2.5) ~ 14%]. For COS this
inefficiency can add up to 28% (2.5/9) additional data volume per regular TIME-TAG
readout.
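The bracketed percentages follow directly from the 2.5-Mbyte fill figure together with the 9-Mbyte TIME-TAG half-buffer dump and the 16-Mbyte figure used for ACCUM dumps in the expressions above. A minimal Python sketch (ours, for illustration):

    # Fill-data fractions per COS buffer dump, assuming 2.5 Mbytes of fill per dump.
    FILL = 2.5
    timetag_fill_fraction = FILL / (9 + FILL)    # ~0.22 of a 9-Mbyte TIME-TAG dump is fill
    accum_fill_fraction = FILL / (16 + FILL)     # ~0.14 of a 16-Mbyte ACCUM dump is fill
    added_per_readout = FILL / 9                 # up to ~0.28 extra volume per TIME-TAG readout
    print(round(timetag_fill_fraction, 2), round(accum_fill_fraction, 2), round(added_per_readout, 2))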
As pointed out in section 1, the standard unit of data volume adopted in this document is
the gigabit (gbit), defined as one billion bits. All COS data volume estimates derive from
the COS onboard memory buffer size of 18 Mbytes, expressed in the usual computer
form (1 Mbyte corresponds to 1024 x 1024 bytes). Mbytes are converted to bits by
multiplying by 1024 x 1024 and by 8 [bits/byte]; the final conversion to gbits is a
division by 10^9.
Useful numbers and benchmarks
First we define some useful COS data volume benchmarks. Standard Bright Object
Protection (BOP) limits will restrict the maximum COS data-taking rate to ~30,000
counts per second for both detectors; however, the highest data-taking rate sustainable in
TIME-TAG without data loss due to onboard memory readout rate restrictions is ~21,000
counts/sec.
The COS onboard memory buffer size is 18 Mbytes or 0.151 gbits. This corresponds to a
TIME-TAG dataset of approximately 4.7 million photons. The largest possible dump is
18 Mbytes, but the largest recurring dump for COS TIME-TAG will be 9 Mbytes, or a
half-buffer. COS ACCUM images are 8 Mbytes in size, such that two ACCUM images
can be held in onboard memory prior to read out.
Approximately 192 sec are required to read out the full COS onboard memory
buffer and approximately 110 sec for 9 Mbytes. We assume the 192-second figure for
ACCUM dump times as well.
The COS DRM made a simple estimate that approximately 25% of available HST
science orbits in cycle 14 will be devoted to COS science observations (4000 x 0.25 =
1000 orbits).
Alternatively, we can consider the following usage scenario. Approximately one-third
(4000 x 0.33 = ~ 1350) of all cycle 14 orbits will be devoted to COS, STIS, and
NICMOS observations. An equal split among SIs yields approximately 450 orbits for
each. We estimate that 60% of current STIS FUV science observing fraction and 35% of
current STIS NUV science fraction will move to COS. As STIS FUV observations use
approximately 35% of available STIS time and NUV use ~31%, the corresponding
fraction of STIS total science time that we estimate may move to COS is ~ 1/3. In our
cycle 14 scenario, this would add an additional 150 orbits for a total of 600 orbits
committed by the TAC to COS. Addition of the anticipated 250 orbits of COS GTO time
results in an estimate of 850 COS science orbits.
- The DRM estimates approximately 1000 COS science orbits; the alternative estimate is
  that 850 orbits will be scheduled per cycle.
- A typical COS visit will last 6 orbits, so approximately 140-160 visits will be
  scheduled per cycle, for an average of one 6-orbit COS visit every 2 - 2.5 days.
As we describe later, present assumptions concerning calibration usage will not
significantly alter this “one visit every other day” estimate.
Visit Scenarios
We shall consider several visit scenarios: 1) a 6-orbit SAA-free non-CVZ visit with
typical visibilities of 50 min (3000 sec) per orbit for a total of 18,000 sec of observing
time; 2) a 10-orbit SAA-free non-CVZ visit (visibilities as in item 1) for a total of 30,000
sec of observing time; 3) a 6-orbit SAA-free CVZ visit of 96-minute (5760 sec) orbits for
a total of 34,560 (~35,000) sec of observing time.
Data Rate Fiducials
At 1000 counts/sec: 4 kbytes per sec or 14,400 kbytes per hour (14.1 Mbytes per hour)
At 21000 counts/sec: 84 kbytes per sec or ~300 mbytes per hour
Data Volume Scenarios
Typical Rates:
A “typical” relatively bright COS target will fill the onboard memory buffer in one orbit.
This corresponds to approximately 1570 counts/sec/resel (S/N ~40 per resel). (Most
COS observations will likely be in the S/N ~15-20 regime). Such an observation will
produce 144 Mbits (0.151 gbits) of data per 3000-sec orbit, or 0.29 gbits in a 5760-sec
non-SAA CVZ orbit. Therefore (see Table 2-3), a "typical" 6-orbit non-CVZ SAA-free
visit would yield 864 Mbits (0.91 gbits). A 6-orbit CVZ SAA-free visit yields 1640
Mbits (1.74 gbits). Similarly, a 10-orbit non-CVZ SAA-free visit would produce 1.51
gbits. For any TIME-TAG case, the worst-case limiting scenario will be a 6-orbit SAA-free CVZ visit.
For our purposes, we will assume for the COS contribution to Observatory data volumes
the 6-orbit non-CVZ SAA-free visit, scheduled every other day and yielding ~0.91 gbits.
The daily average is then half this amount: 0.45 gbits. Multiplying this by 1.3 to account
for the engineering overhead gives 0.59 gbits/day, which is entered in Table 2-1 as the
average COS daily amount for primes. (Recall that COS parallel science is prohibited.)
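The bookkeeping behind the 0.59 gbits/day entry can be summarized in a few lines of Python (ours, for illustration): one buffer fill of 0.151 gbits per 3000-sec orbit, a 6-orbit visit every other day, and the 1.3 engineering overhead.

    # "Typical" COS TIME-TAG contribution to the Observatory daily average.
    BUFFER_GBITS = 0.151              # 18-Mbyte buffer filled once per 3000-sec orbit
    visit = 6 * BUFFER_GBITS          # ~0.91 gbits per 6-orbit non-CVZ SAA-free visit
    daily = visit / 2                 # one such visit every other day -> ~0.45 gbits/day
    with_overhead = daily * 1.3       # ~0.59 gbits/day, the value entered in Table 2-1
    print(round(visit, 2), round(daily, 2), round(with_overhead, 2))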
Extreme and Limiting Cases:
The following considers the most extreme COS data volume case. The maximum read-while-acquiring no-loss data-taking rate in TIME-TAG mode is ~21,000 counts/sec. This
value is within allowed COS BOP limits for both detectors. Operation at this rate
produces approximately 62 million counts per 3000-sec orbit, or 2.02 gbits per orbit. An
SAA-free CVZ orbit would produce 3.9 gbits of data. A 6-orbit SAA-free 3000-sec/orbit
"typical" visit would produce ~12 gbits of data. If such a 6-orbit SAA-free visit were
conducted in the CVZ as the worst-case scenario, ~23 gbits of data would result. A 10-orbit SAA-free, 3000-sec/orbit visit yields ~20 gbits. Note that STIS is capable of
operation at 8/9 of these rates, hence capable of nearly these same data volume levels,
but, to our knowledge, this has never occurred in practice. Note, also, that none of the
rates in this section include the 30% engineering data overhead.
Other Instruments in Parallel with COS:
ACS and WFC3 can produce high data volumes when used in parallel with COS. In all
cases in which the COS detector is read out at repeated intervals shorter than typical ACS
or WFC3 readout times, no parallel (high data volume) camera operation can occur. If
we assume the shortest ACS readout time is 6 min, then we can ask what COS count-rate
will produce COS buffer dumps at the same or shorter intervals in order to establish the
count-rate above which all parallel operation must cease when COS is prime. Above this
maximum rate, the worst-case data volume assumptions for the telescope will be the
highest COS values and below this rate, the worst-case assumptions for the telescope will
be the sum of the COS limit plus any other allowable worst-case values for the parallel
SIs.
Filling half the COS buffer (2.35 million photons) in six minutes requires a count rate of
approximately 6500 counts/sec. Therefore, volumes of about 4.2 times those for the
“nominal” or “typical” COS data rate of 1570 counts/second in Table 2-3 are the upper
limit to COS data volumes that can be added to other SI-in-parallel data volumes (for a 1-orbit SAA-free 3000-sec orbit: 0.624 gbits per orbit; for 1 SAA-free CVZ orbit: 1.2
gbits).
So, for the limiting 6-orbit CVZ SAA-free case, the effective COS parallel limiting data
volume is 7.2 gbits plus that of the parallel SIs or ~23 gbits from COS alone with no
other SI active.
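The ~6500 counts/sec threshold and the associated per-orbit volumes follow from filling half of the COS buffer (2.35 million photons) within one ~6-minute ACS readout interval. The Python sketch below (ours, for illustration) repeats that estimate; the variable names are our own.

    # COS count rate above which buffer dumps recur faster than an ACS readout (~6 min),
    # shutting out large-format parallel readouts.
    HALF_BUFFER_PHOTONS = 2.35e6
    ACS_READOUT_SEC = 6 * 60
    limit_rate = HALF_BUFFER_PHOTONS / ACS_READOUT_SEC   # ~6500 counts/sec
    scale = limit_rate / 1570                            # ~4.2 x the "typical" rate
    per_orbit = scale * 0.151                            # ~0.62-0.63 gbits per 3000-sec orbit
    per_cvz_orbit = scale * 0.29                         # ~1.2 gbits per SAA-free CVZ orbit
    print(round(limit_rate), round(scale, 1), round(per_orbit, 2), round(per_cvz_orbit, 2))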
                                 3000-sec   5700-sec      6-orbit    6-orbit   10-orbit
                                  orbit     (CVZ) orbit   SAA-free   SAA-free  SAA-free
                                                          normal     CVZ
Nominal (1570 counts/sec)         0.151       0.29          0.91       1.74      1.51
Max COS parallel rate
  (~6500 counts/sec)              0.62        1.2           3.8        7.2       6.3
Common fast rate
  (10,000 counts/sec)             0.96        1.8           5.8       11.1       9.6
Highest no-loss rate
  (21,000 counts/sec)             2.02        3.9          12.1       23.2      20.2

Table 2-3: COS Data Volume Summary (gbits/1-visit day)
(Not including engineering overhead)
Calibration Usage:
DARK: All COS darks will be taken in TIME-TAG mode. Anticipated data volumes
from COS dark exposures are minuscule. Total FUV rates are ~12 counts/sec (~2.2 Mbits
per CVZ orbit) from the entire detector; total NUV rates are ~220 counts/sec (~40 Mbits
per CVZ orbit) from the entire detector.
WAVECAL: All COS wavecal exposures will also be taken in TIME-TAG mode. COS
wavecal exposures will be quantified shortly; however, a conservative overestimate
assuming 100 lines per spectrum, each with 10,000 counts, yields one million counts, or
roughly 32 Mbits per exposure. This estimate corresponds to a rate of approximately
5000 counts/sec in a 3-minute exposure. Wavecal exposures will be 1-3 minutes in
duration and will not threaten COS bright object limits. Under this scenario, wavecals
taken in rapid succession in a calibration exposure would produce count rates, hence data
volumes, approximately three times higher than the "typical" case in Table 2-3, or
approximately 0.45 gbits per orbit. There are approximately 60 COS grating central
wavelength settings. One six-orbit visit of continuous internal wavecals at this rate (10
exposures per orbit) would be sufficient to sample all COS central wavelength positions
and would produce a data volume corresponding to 2-3 "typical" 6-orbit visits. Such a
program would be likely to execute only once or twice per cycle. Automatic wavecals
taken with science exposures will not add significantly to routine science data volumes.
(This implies the addition of 4-6 6-orbit "typical" visits to the estimates.)
FLAT: All COS flatfield exposures will also be taken in TIME-TAG mode. The actual
data rate for COS flatfield exposures has not been finally determined. If we operate at the
"typical" rate of 1500 counts per second per resel (which yields S/N ~40 in one 50-minute
visibility), then we reach photon-statistical S/N ~100 in one six-orbit visit. Four such
visits will be required to obtain a single epoch of flat fields. The COS DRM estimates
that such a program would run twice per cycle. (This implies the addition of 8 6-orbit
"typical" visits to the estimates.)
FLUX: All COS flux-calibration standard star exposures will also be taken in TIME-TAG
mode. Assume we reach S/N ~40 per resel, which requires 1500 counts/sec for 50 min, or
15,000 counts/sec for 5 min. We have standard stars of this brightness, but none are
in the CVZ. Again, for the worst case, assume all 60 central wavelength positions will be
calibrated. At 5 min per exposure, with a 3-minute overhead to read and a 5-minute overhead to
set up the next exposure, approximately 4 exposures can be taken per visibility; hence 15
orbits, or 2.5 "typical" 6-orbit visits. The worst-case assumption is that this program runs 4 times
per cycle; more likely is twice in the first cycle and once per cycle afterwards, or subsets of
this program 4 times per cycle afterwards. (This implies the addition of 10 6-orbit
"typical" visits to the estimates.)
Summary: We have assumed 140-160 6-orbit science visits per cycle. Calibration adds
~24 more at “typical” data rates. Therefore, calibration represents ~ 1/8 of the total
average data volume.
Conclusion: the original bound of one 6-orbit visit every other day remains valid and is
not significantly perturbed by calibration requirements. All calibration proposals
estimated here will run at or below the "typical" rate of 0.9 gbits per 6-orbit non-SAA,
non-CVZ visit. The result is a simple average of 0.45 gbits/day, of which ~1/8 (~0.05
gbits) is calibration data. Multiplying these results by 1.3 for the engineering overhead
gives the COS daily values that appear in Table 2-1 (0.52 gbits/day prime and 0.07
gbits/day calibration).
Caveat: We must evaluate actual flat field and, to a lesser extent, wavecal count rates.
Flat field rates could safely be 10 times higher than estimated here.
2.6 Other Scenarios
2.6.1 Variation of Parallel Scheduling
Given the aforementioned uncertainty in the efficiency of simultaneous scheduling (as
prime and parallel) of two large-format SIs and its large effect on total data volume, this
section attempts to depict the total daily data volume as a function of varying the amount
of WFC3 and ACS parallel orbits. Figure 2-1 is a family of three curves, parametrized by
the number of daily ACS parallel orbits (5, 3, and 1), that demonstrate the change in total
data volume as the number of WFC3 parallel orbits is varied from 0 to 10. Our other
basic assumptions remain unchanged, i.e., WFC3 and ACS are each scheduled for 5
orbits as prime and the other 3 SIs schedule as primes in the remaining 5 orbits.
[Figure 2-1 plot: Average Daily Data Volume as a function of WFC3 parallel orbits (WFC3 Prime = ACS Prime = 5 orbits), with three curves for ACS Parallel = 5, 3, and 1 and WFC3 parallel orbits varied from 0 to 10.]
Figure 2-1 Daily data volume as a function of WFC3 parallel orbits. The asterisk
marks the baseline estimate of 27 gbits/day resulting from the average equivalent of
4.6 ACS parallel orbits and 7.1 WFC3 parallel orbits.
The smallest data volume is 19.7 gbits/day, which can be expected to occur with one
ACS parallel orbit and no WFC3 parallel orbits. The largest daily data volume, under
these circumstances, occurs with 5 ACS orbits in parallel and 10 WFC3 orbits in parallel
and leads to a total of 29.1 gbits/day. The routine scheduling of 10 WFC3 parallel
orbits/day is deemed optimistic, so 29 gbits/day is considered relatively rare under
normal conditions.
2.6.2 WFC3 Predominates as Prime 6 to 10 orbits/day
In this case, we look at the effect of increasing the number of WFC3 prime orbits from 5 to
10 while maintaining the assumption that ACS schedules as prime in half of the
remaining orbits. The WFC3 and ACS parallel orbits scale proportionately with the
variation in the allocation of prime orbits. The results, in gbits/day, are:
WFC3 Prime Orbits     6      7      8      9      10
Total Data Volume    27.3   27.7   28.2   28.6   29.06
As one might expect, the net effect is small because an increase in the WFC3 prime orbits
is countered by a corresponding decrease in its parallels along with a decrease in the
number of opportunities to schedule ACS, the other large-format SI, as prime. There is
also a small increase in the ACS parallel orbits, and this is partially offset by the
reduction in COS prime orbits.
2.6.3 ACS Predominates as Prime 6 to 10 orbits/day
This case is equivalent to the previous one, except that the roles of ACS and WFC3 are
swapped. Since an increase in ACS prime orbits allows more WFC3 parallel scheduling,
the resulting data volumes also increase slowly (0.21 gbits per additional ACS prime orbit),
though with a slightly lower zero point.
ACS Prime Orbits      6      7      8      9      10
Total Data Volume    27.0   27.2   27.4   27.7   27.9
2.6.4 Extreme COS Cases
In section 2.5, it was shown that COS, albeit under circumstances expected to be very
rare, can produce as much as 23 gbits in a 6-orbit CVZ, SAA-free observation at the
highest possible count rate. Applying the 30% engineering overhead gives 29.9 gbits. In
this case, there would be 9 orbits left for scheduling WFC3 and ACS. We assume that
WFC3 and ACS are scheduled, prime and parallel, in the same proportions as in the
baseline estimate. (STIS and NICMOS, being significantly lower data producers, are not
considered.) In addition to COS's 29.9 gbits, ACS and WFC3 would produce 5.7 and 9.2
gbits, respectively, for a total of 44.8 gbits in one day.
The 6-orbit, SAA-free case (non-CVZ), as explained in section 2.5, would be expected to
produce 15.7 gbits (12.1 gbits plus 30% overhead). With the same contributions from WFC3
and ACS as above, the total data volume amounts to 30.5 gbits, much closer to the
estimated post-SM4 daily average.
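Both extreme-day totals combine the COS visit volume (with the 30% overhead applied) with the assumed baseline WFC3 and ACS contributions of 9.2 and 5.7 gbits. A small Python sketch (ours, for illustration) reproduces the totals to within rounding:

    # Extreme single-day totals for a 6-orbit COS visit at the highest no-loss rate.
    WFC3_PLUS_ACS = 9.2 + 5.7                   # baseline WFC3 and ACS contributions (gbits)
    cvz_day = 23.0 * 1.3 + WFC3_PLUS_ACS        # ~29.9 + 14.9 = ~44.8 gbits
    non_cvz_day = 12.1 * 1.3 + WFC3_PLUS_ACS    # ~15.7 + 14.9 = ~30.6 gbits (quoted as 30.5)
    print(round(cvz_day, 1), round(non_cvz_day, 1))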
2.6.5 Conclusion
Aside from the highly unanticipated 6-orbit COS visit producing 23 gbits, the variation in
total data volume as a function of prime scheduling is small. As indicated in section
2.6.1, the total daily data volume is most sensitive to the efficiency in scheduling ACS
and WFC3 parallels.
Section 3 Current TDRS Scheduling Practice
This section gives an overview of the STScI level of effort for the routine scheduling of
TDRS support for the HST science program.
The processing of TDRS contacts falls into three stages: the request for contacts, the
receipt of the shortfalls, and the merge of the final Mission Schedule (MS) with the
available contacts. Each step occurs at a certain time prior to the execution of the MS
according to a schedule imposed by the Network Control Center (NCC). NCC handles
TDRS needs from the user community on a week by week basis. Requests for TDRS
contacts must be sent to NCC no later than 14 days before execution. NCC returns
adjustments to the requested contacts in the ‘shortfall’ week (14 to 7 days before
execution). In the final week beginning 7 days before execution we match our real
downlink needs to the granted contacts, returning those that are not needed, adjusting
parameters of those that we have and obtaining additional contacts if necessary.
Request
The request for TDRS contacts may be made using either a generic or actual pointing
profile. The High Gain Antennas (HGA) may be pointed within a region slightly smaller
than a hemisphere centered on the V3 axis (HGA1 is centered on +V3 and HGA2 is
centered on –V3). Using the actual pointing profile will have the advantage of leading to
a better match between what we are granted and what we need, so less work will be
required in the final step. However, use of the actual profile requires creation of the
calendar, SMS, and an initial MS prior to T-14, which leads to more work generally and
significantly complicates processing when SMS adjustments are desired late in the
process (e.g. target of opportunity observations). The numbers of contacts needed can be
determined very well from an actual schedule because the data recording activity along
the calendar timeline is known, so with a small level of oversubscription the granted
contacts should match well with what is needed. If a generic request is made (which is
the current process) a high number of contacts are requested uniformly over the SMS
week. It is made at a high enough level to cover the data volume needs of the majority of
SMSes, and includes oversubscription to account not only for the shortfalls but also for
the losses in contacts due to the differences between the actual and generic-based granted
HGA views. When we expect weeks of exceptionally high data volume, the number of
contacts requested is increased. At present we routinely use transmitter 2 alone
(through HGA2), adding contacts with transmitter 1 only when needed to handle high
data volume. A generic request for TDRS contacts may take as little time as one hour.
Shortfall Resolutions
The shortfall resolution process requires us to make adjustments to a limited number of
the requests based on information provided by NCC by FAX. While we typically request
about 190 TDRS contacts for a week, the NCC shortfall list affects only about 30 of those
events. During shortfall resolution each of the indicated events is adjusted manually in
the UPS database within the parameters allowed by NCC. At completion we resend the
adjusted event details to NCC. This process takes only a few hours.
Final TDRS Schedule and Final MS creation
Shortly after T-7 NCC will release the confirmed TDRS schedule for HST. This
information is one of the inputs to the PASS MS and CL generation system that is used
during final MS and CL processing. The PASS software matches the downlink needs of
the MS with the available TDRS contacts and the SMS (actual) pointing profile. It also
produces a file of ‘replaces’ detailing the TDRS events that need to be changed. The
final MS and CL generation process may also disclose other issues that need to be
handled but these will not be discussed here.
If there are no overflows reported by the PASS software, we send the 'replaces' to NCC,
generate a new TDRS schedule and rerun MS and CL generation with the updated TDRS
schedule. The PASS run will create another ‘replace’ file though the number of changes
expected will be few. The second run has a different input than the first (the updated
TDRS schedule) so the algorithm determining use of TDRS contacts may make
somewhat different choices, and consequently produce another set of “replaces”. A first
MS run may produce ~90 replaces and the second set of replaces should number in the
single digits. A third run should eliminate them entirely; however, that is not guaranteed.
After each new set of replaces is sent to NCC, a new TDRS schedule is generated and
another run of the PASS MS and CL generation system is made. If the number of replaces
is very small then after these are transmitted to NCC and a new schedule generated, only
the Command Loads are generated. This overall process may be completed in a few
hours, if there were no other issues to handle at the same time.
If there are overflows of the solid state recorders (SSRs), or the ending usage of the SSRs
is above 50%, then additional downlink time is needed. This is resolved by extending
existing TDRS contacts, adding new contacts on the primary transmitter, then adding
contacts on the other transmitter. Extending existing contacts has the advantage of
causing no increase in the number of transmitter turn-ons. Each contact change is made
manually on the UPS. We can estimate the total amount of additional contact time
needed so the contact extensions and additions on the primary transmitter are made at the
same time. NCC provides a TDRS unused time (TUT) report daily to all TDRS users.
From this list we determine what new TDRS contacts are possible. Some of the unused
TDRS time may already have been grabbed by users of other satellites but this becomes
clear as we attempt to add the new service times one by one. Following this we once
again make a new TDRS schedule and execute another MS run. Some of the added
contacts may not be usable due to HGA motion constraints, engineering record activities
or even low gain antenna visibilities. This process of adding new services may be
repeated if an overflow still exists. Only if we have exhausted all available contact usage
on the primary transmitter do we attempt to use the secondary transmitter. In this case
we will be placing the downlink requests on top of single service access (SSA) services
that we already have available in the TDRS schedule. These are times when we have an
uplink scheduled through the other HGA, so there is no competition with other TDRS
users and the PASS software has already confirmed that we have the HGA view to
TDRS. In selecting uplinks on which to place the new downlink services we
preferentially select the longest services in order to minimize the number of transmitter
turn-ons.
Level of Effort
When the weekly data volume is less than about 120 gbits the number of TDRS
downlinks required should be less than the number that NCC has granted to us.
Therefore the assignment of specific dump times will be handled automatically by the
PASS software and will not require any extra effort on the part of the operator. If, in
addition, the MS and CL generation is routine the process could be complete in a few
hours.
When the weekly data volume is more than 120 gbits we may need significant numbers
of additional TDRS contacts, possibly requiring the use of the secondary transmitter. If
the data volume is above 150 gbits then the need for the secondary transmitter is certain.
More than a day (two shifts) will be required to handle the overflow analysis, manual
addition of contacts, and the additional repetitions of TDRS schedule generation and
PASS software runs.
Section 4 References
1. WFC3 ISR 2001-02, Data Volume Estimates for WFC3 Operations by C. Lisse, R.
Henry, P. Knezek, C. Hanley, 27 March 2001
2. WFC3 ISR 2001-09/10, WFC3 Design Reference Mission
3. WFC3 Pipeline CDR, WFC3 Science Data Volume Estimates, R. Kutina
4. COS ISR 99-01.0, Design Reference Mission and Ground System Volume
Requirements, by C. Keyes, R. Kutina, J. Morse.
Appendix 1
Analysis of Engineering Overheads
The downlinked data volume is usually substantially larger than the simple sum of the
corresponding exposure data volume. This engineering overhead can be characterized as
a linear function of the exposure volume. The following analysis by Alan Patterson
provides a derivation of the slope and y-intercept of the linear function and demonstrates
that for the typical SMS the engineering overhead amounts to approximately 30% of the
total exposure data volume.
The Constant Component. The components of the constant are startup handshaking,
dump time pad, and ramp down time. The component due to startup handshaking occurs
as a result of the sequence of commands that need to be executed to start the Solid State
Recorder (SSR) record activity and to initiate each instrument's buffer dump. This
sequence of events requires at least 8 seconds with commands being issued on integral
second marks. The ramp down time includes 2 seconds for Tape Recorder motion
termination.
The dump time calculation includes an explicit pad of 10 seconds. There are other small
additional time pads that are instrument dependent. Therefore the fixed component of the
engineering overhead is at least 20 seconds.
The Linear Component. Packetization embeds 965 words of raw data in a 1024-word
packet along with identifying information; however, this linear overhead is already
included in the values of data volume available in the PMDB.
The linear component of the overhead includes the effect of Reed-Solomon
encoding plus other observed components. Reed-Solomon encoding (a factor of 15/14) is
required to ensure a high level of data integrity. There are small additional percentages
of overhead. These are either understood to be a result of real world inefficiencies in data
transfer when compared to theoretical designs or are observed but unexplained
inefficiencies. The additional inefficiencies and the fixed explicit pad of 10 seconds need
to be included in order to accommodate the real-world behavior of the equipment. Indeed,
periodic downward adjustments of the pads are believed to have reduced the safety
margins to about the minimum prudent level.
Because of the 1 megabit/sec transfer rate, the constant component of the overhead
is typically 20-24 Mbits per buffer dump to the SSR, where the small variation reflects
instrumental differences. The linear factor is about 1.10 after packetization, and so includes
the effect of Reed-Solomon encoding and the allowances for data transfer inefficiencies.
The table shows the observed values of the constant and linear factor for each instrument.
MS to Exposure Data Volume

                      Constant (Mbits)   Linear Factor
ACS                        21.765           1.1129
NIC                        24.481           1.0929
STIS                       20.889           1.1011
All (incl. WFPC2)          20.975           1.1148
Summary
Observed Data Volume Overheads

• From data for all exposures on a typical SMS (023437E7), the MS data volume is
  constant + (linear factor x exposure data volume)
• Constant is 20-24 Mbits per exposure (record activity to SSR) and is instrument
  dependent
• Linear factor is ~1.10 and is also instrument dependent
• WFPC2 exposures are always 44.816 Mbits, which become 68.8126 Mbits on the
  MS. No linear relationship can be determined. Resulting overhead consistent
  with other instruments.
• Typical SMSes have 800 - 1000 exposures, so the fixed overhead consumes ~18.9
  gbits
• The linear factor:
  o Reed-Solomon encoding (7%)
  o Documented additional (1.7%)
  o Contingency
  o Total effect about 11%
• Thus, for the typical SMS of ~900 exposures and 100 gbits (exposure data
  volume), the total data volume, V, can be expressed as:
  V = (1.11) x (100 gbits) + (~900 exposures) x (~21 Mbits/exposure)
    = 111 + 18.9 = ~130 gbits
  For a typical 100-gbit SMS, this represents a 30% increase.
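The linear model summarized above is easy to capture as a small helper. The Python sketch below (ours, for illustration) uses the "All instruments" coefficients from the table and the 1.049 x 10^6 bits/Mbit conversion, and returns a value close to the ~130 gbits quoted for a typical 100-gbit, 900-exposure SMS.

    # MS (downlinked) volume from exposure volume: V = linear * exposure + n_exp * constant.
    MBIT_IN_GBITS = 1.049e6 / 1e9      # one Mbit (2^20 bits) expressed in gbits

    def ms_volume_gbits(exposure_gbits, n_exposures,
                        linear=1.1148, constant_mbits=20.975):
        """Estimated MS data volume (gbits) for an SMS, using the 'All' coefficients."""
        return linear * exposure_gbits + n_exposures * constant_mbits * MBIT_IN_GBITS

    print(round(ms_volume_gbits(100, 900), 1))   # ~131 gbits, i.e. a ~30% increase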
Appendix 2
Data Recording Constraints and Prime/Parallel Scheduling
Assumptions
A2.1 Data Recording Constraints
The following caveat for scheduling large-format SIs is provided by Alan Patterson.
The standard duration for an ACS WFC readout is 349 seconds (almost 6 minutes). For a
typical orbit of, say, 52 minutes, where 6 minutes is consumed by the PCS acquisition, there
would be room for 7.9 ACS WFC readouts within visibility. An extra one could occur in
occultation, but the integral maximum would still be 8 (maybe 9) per orbit, and then only
when all readouts are jammed back to back.
In practice a limit of 5 or 6 readouts per orbit has been suggested for ACS visits. Any
new large format instrument (e.g. WFC3) with similar readout times will require a similar
block of 6 minutes per full readout. For visits of the new instrument to be successfully
scheduled in parallel, the readouts for it must be able to fit between existing readouts of
the primary science, but the primary science readouts have been placed on the timeline
without any knowledge of the need for parallels. The gaps between them will only
permit a large-format parallel readout where an exposure of the primary (ACS) is at least
twice the duration of a large-format readout, i.e., 12 minutes.
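A short Python sketch (ours, for illustration) of the packing arithmetic behind these statements, using the 349-sec readout, a 52-minute orbit, and the 6-minute guide-star acquisition quoted above:

    # How many 349-sec ACS WFC readouts fit in the visible part of a typical orbit,
    # and the minimum prime exposure length that leaves room for a large-format parallel readout.
    READOUT_SEC = 349
    visibility_sec = (52 - 6) * 60                  # 52-min orbit minus 6-min PCS acquisition
    max_readouts = visibility_sec / READOUT_SEC     # ~7.9 back-to-back readouts
    min_prime_exposure_min = 2 * READOUT_SEC / 60   # ~11.6 min, i.e. the ~12-minute rule above
    print(round(max_readouts, 1), round(min_prime_exposure_min, 1))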
A WFC3/ACS parallel scheduling experiment was performed by Wayne Baggett.
His results follow:
A near-best case visit with ACS & WFC3 parallels was investigated. It was set up as a
CVZ visit containing the following exposures (no spectral elements or apertures are
mentioned as they are essentially irrelevant, and all exposures set CR-SPLIT=NO;
ACS/HRC auto-parallels were disabled for this example):
Exp Num   Config       OpMode   Sp. Requirements
   10     ACS/WFC      ACCUM
   20     ACS/WFC      ACCUM
   30     WFC3/UVIS    ACCUM    PAR 30 WITH 20
   31     ACS/WFC      ACCUM
   40     WFC3/UVIS    ACCUM    PAR 40 WITH 31
   41     ACS/WFC      ACCUM
   50     WFC3/UVIS    ACCUM    PAR 50 WITH 41
   51     ACS/WFC      ACCUM
   60     WFC3/UVIS    ACCUM    PAR 60 WITH 51
   61     ACS/WFC      ACCUM
   70     WFC3/UVIS    ACCUM    PAR 70 WITH 61
This is a total of 11 full-frame readouts of 4k x 4k detectors, and requires a total of 103
minutes to execute. (It is possible that some further tweaking could result in all of them
being scheduled in a 96-minute visit.) Of the 103 minutes total time, 6 minutes are spent in a
GS Acq, and 58.5 minutes are actively spent in dump activities.
In summary, a near best-case scenario for a CVZ orbit would be 6 full-frame ACS WFC
exposures plus 5 full-frame WFC3 UVIS exposures.
A2.2 Prime and Parallel Scheduling Assumptions
The assumptions used in section 2 for assessing the amount of ACS and WFC3 prime and
parallel science are based in part on the statistics of the cycle 11 scheduling of ACS, the
current “large-format” SI. Figure A-1 is a histogram, provided by Alan Patterson,
showing the frequency of ACS prime visits containing varying numbers of exposures
(and therefore buffer dumps).
[Figure A-1. Distribution of long (>4 minute) ACS buffer dumps per prime visibility: percentage of visibilities versus number of long ACS buffer dumps in a visibility (1-8).]
With this data in hand, we made the following assumptions for prime and parallel
scheduling:
1. WFC3 and ACS each take 1/3 of the prime observing time, or 2/3 total (~ = ACS
cycle 11).
a. The pattern of readouts for the WFC3 primes will be like that of the ACS
primes now. (probably true for the UVIS channel, not so obvious for the
IR channel, so that is an uncertainty).
b. The distribution of long ACS buffer dumps during prime visibilities is
more or less what we assume for 2/3 of the time after SM4.
c. We use the current ACS buffer dump statistics (fig. A-1) as a guide.
i. 50% of the time there are 4 or more long, prime SI buffer dumps,
ii. 50% of the time there are 3 or fewer long, prime SI buffer dumps.
d. For the 50% of the time when there are 3 or fewer long, prime dumps,
the other wide field SI is successfully scheduled in parallel, limited to
two exposures (ACS and UVIS) or one buffer dump (IR).
e. For the other 50% of the time, parallels are successfully scheduled only
1/3 of that time, again limited to two exposures or one buffer dump (IR).
2. STIS/NICMOS/COS are primes for 1/3 of the time
a. Success in scheduling ACS/WFC3 parallels will be the same as it is now
for ACS parallels.
b. Since the ACS and WFC3 readouts are close to the same size, today's
ACS parallel data volume is a guide for the initial parallel volume for
this 1/3 of the time (divided between ACS and WFC3). Using table 2-2
(sec. 2.2) , we put ACS/WFC3 parallel data volume ~ 1.5 gbits in
parallel with STIS/NICMOS/COS. We actually assign 1.5/2 = 0.75 gbits
to ACS and allocate the equivalent of 1.25 orbits to WFC3 at that SI’s
parallel data rate of ~ 0.66 gbits/orbit (with overhead) as derived in sec.
2.1.2.
c. These parallels are assumed to be limited to two readouts in the visibility
period, and the prime observation is assumed to have two readouts or more in the
visibility period.
i. Then, there is a small possibility that the other large format
camera could be scheduled for a parallel also. Roughly
interpreting fig A-1, one could conclude that the second parallel
will never schedule if it wants more than 2 readouts, and would
schedule better with only 1 readout.
ii. A useful single buffer dump parallel orbit out of the WFC3 IR
channel is more likely than from the larger-format cameras.
iii. Therefore, assume that only 1/2 of this 1/3 gets a second parallel,
and that those are limited to the equivalent of two images of
WFC3 IR data.
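As a worked illustration of assumption 2b only, the Python sketch below (ours; the split is an assumption taken from the text, not an additional result) divides the ~1.5 gbits of large-format parallel volume assumed during COS/NICMOS/STIS prime orbits between ACS and WFC3, using the WFC3 parallel rate of ~0.66 gbits/orbit from section 2.1.2.

    # Assumption 2b: large-format parallels scheduled while COS/NICMOS/STIS are prime.
    TOTAL_PARALLEL_GBITS = 1.5                  # from Table 2-2 and the scheduling diagram
    acs_share = TOTAL_PARALLEL_GBITS / 2        # 0.75 gbits/day assigned to ACS
    wfc3_orbits = 1.25                          # equivalent WFC3 parallel orbits
    wfc3_share = wfc3_orbits * 0.66             # ~0.8 gbits/day at ~0.66 gbits/orbit (with overhead)
    print(acs_share, wfc3_share, acs_share + wfc3_share)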
These are the basic scheduling assumptions that result in the daily average of 27 gbits cited
in the Introduction and section 2. For convenience, the diagram from section 2 is repeated here.
PRIME ORBITS:
    WFC3: 5 orbits             ACS: 5 orbits              COS/NICMOS/STIS: 5 orbits

PARALLEL ORBITS (not including NICMOS & STIS):
    ACS: 2.5 orbits            WFC3 UV+IR: 2.5 orbits     WFC3/ACS => 1.5 gbits
    ACS: <1 orbit              WFC3 IR: <1 orbit          WFC3 IR: 2.5 orbits

Graphical depiction of prime and parallel scheduling assumptions
Appendix 3 Cycle 11 high data volume ACS visits
This list identifies the ACS visits scheduled on those days of Cycle 11 when the ACS
data volume exceeded 15 gbits per day, and includes the proposal titles for reference.
---------------------------------------------------------------------
Start Day, SU/Visit, Volume (gbits), Type
179, 095583R, 0.1854, upar
179, 0929025, 2.2806, main
179, 095583S, 0.1854, upar
179, 0944201, 4.0560, main
179, 0944202, 4.0560, main
179, 0944206, 4.0560, main
179, 0944207, 4.0560, main
179, 095583T, 0.1854, upar
179, 095583U, 0.1854, upar
179, 095583V, 0.1854, upar
179, 095583W, 0.1854, upar
262, 0942541, 3.0408, main
263, 09558FG, 0.2008, upar
263, 09558FH, 0.2008, upar
263, 0942542, 3.0408, main
263, 09558FI, 0.2008, upar
263, 0942543, 3.0408, main
263, 09558FJ, 0.2008, upar
263, 0942544, 3.0408, main
263, 09558FK, 0.2008, upar
263, 0942545, 3.0408, main
263, 0942546, 3.0408, main
263, 0942547, 3.0408, main
304, 09647FU, 0.2008, upar
304, 0942548, 3.0408, main
304, 09647FV, 0.2008, upar
304, 0942549, 3.0408, main
304, 09647FW, 0.2008, upar
304, 0942550, 3.0408, main
304, 09647FX, 0.2008, upar
304, 0942553, 3.0408, main
304, 0942554, 3.0408, main
304, 0942555, 3.0408, main
304, 09647FY, 0.2008, upar
315, 0950033, 2.2806, main
316, 09647IC, 0.2008, upar
316, 09647IG, 0.2008, upar
316, 0950034, 2.2806, main
316, 09647ID, 0.2008, upar
316, 0950035, 2.2806, main
316, 09647IE, 0.2008, upar
316, 09649FC, 0.6883, upar
316, 0950036, 2.2806, main
316, 0950037, 2.2806, main
316, 0950038, 2.2806, main
316, 0950039, 2.2806, main
316, 09647IF, 0.2008, upar
316, 0950040, 2.2806, main
317, 09647IH, 0.2008, upar
317, 09647IK, 0.2008, upar
317, 09647II, 0.2008, upar
317, 09647IL, 0.2008, upar
317, 0950041, 2.2806, main
317, 0945424, 0.1034, main
317, 0950042, 2.2806, main
317, 0950043, 2.2806, main
317, 0949002, 8.1120, main
318, 0950049, 2.2806, main
319, 09647IR, 0.2008, upar
319, 09647IT, 0.2008, upar
319, 0950050, 2.2806, main
319, 09647IS, 0.2008, upar
319, 0950051, 2.2806, main
319, 0949003, 4.0560, main
319, 09647IU, 0.2008, upar
319, 0950052, 2.2806, main
319, 0950053, 2.2806, main
319, 09647IV, 0.2008, upar
319, 09472BN, 0.0626, main
319, 0949001, 8.1120, main
320, 09647IW, 0.2008, upar
320, 09647IZ, 0.2008, upar
320, 09647IX, 0.2008, upar
320, 0950054, 2.2806, main
320, 09647IY, 0.2008, upar
320, 0950055, 2.2806, main
320, 0965614, 2.0280, main
320, 0950056, 2.2806, main
320, 0950057, 2.2806, main
323, 09647JL, 0.2008, upar
323, 0970014, 1.4298, main
323, 09647JM, 0.2008, upar
323, 0970005, 1.4412, main
323, 09647JN, 0.2008, upar
323, 0970007, 1.3520, main
323, 0970001, 1.3520, main
323, 0970002, 1.3520, main
323, 0970003, 1.3520, main
323, 0970009, 1.3520, main
323, 0970006, 1.3520, main
323, 0970008, 1.3520, main
323, 090757X, 3.0841, main
323, 09647JO, 0.2008, upar
323, 090757B, 1.4362, main
323, 09647JP, 0.2008, upar
323, 09472AC, 0.0626, main
324, 0965602, 0.4334, main
325, 09647JV, 0.2008, upar
325, 09647JZ, 0.2008, upar
325, 09647JW, 0.2008, upar
325, 09583B0, 3.0408, main
325, 09583B1, 3.0408, main
325, 0937911, 0.0626, main
325, 09647JX, 0.2008, upar
325, 09480JO, 1.0140, apar
325, 09647JY, 0.2008, upar
325, 09583B2, 3.0408, main
325, 09583B3, 3.0408, main
325, 09583B4, 3.0408, main
325, 09583B5, 3.0408, main
325, 09583B7, 3.0408, main
326, 09647KA, 0.2008, upar
326, 09583B9, 3.0408, main
326, 09583C0, 3.0408, main
326, 09583C1, 3.0408, main
326, 09583C2, 3.0408, main
326, 09583C3, 3.0408, main
326, 09647KB, 0.2008, upar
326, 09583C4, 3.0408, main
326, 09583B6, 3.0408, main
326, 09647KC, 0.2008, upar
326, 09583B8, 3.0408, main
353, 0942567, 3.0408, main
354, 0942568, 3.0408, main
354, 0942569, 3.0408, main
354, 09647PK, 0.3546, upar
354, 09647PM, 0.2008, upar
354, 0942570, 3.0408, main
354, 0965820, 0.6760, main
354, 0942571, 3.0408, main
354, 09647PL, 0.2008, upar
354, 0942572, 3.0408, main
355, 0942573, 3.0408, main
355, 0942574, 3.0408, main
355, 09647PP, 0.3546, upar
355, 09647PR, 0.2008, upar
355, 0942575, 3.0408, main
355, 0942576, 3.0408, main
355, 0942577, 3.0408, main
355, 09647PQ, 0.2008, upar
355, 0942578, 3.0408, main
1, 09480LM, 1.3520, apar
367, 09480LN, 1.0140, apar
367, 09480LO, 0.3380, apar
367, 09647RX, 0.3546, upar
367, 09647SA, 0.2008, upar
367, 09583C5, 3.0408, main
367, 09647RY, 0.2008, upar
367, 09583C6, 3.0408, main
367, 09583C7, 3.0408, main
367, 09583C9, 3.0408, main
367, 09583D0, 3.0408, main
367, 09647RZ, 0.2008, upar
367, 09583D1, 3.0408, main
368, 09647SC, 0.3546, upar
368, 09583D2, 3.0408, main
368, 09647SD, 0.2008, upar
368, 09583D3, 3.0408, main
368, 09583D4, 3.0408, main
368, 09583D6, 3.0408, main
368, 09583D7, 3.0408, main
368, 09647SE, 0.2008, upar
368, 09647SF, 0.2008, upar
369, 09647SG, 0.2008, upar
369, 090758K, 1.1403, main
369, 09647SH, 0.3546, upar
369, 09583D8, 3.0408, main
369, 09472EQ, 0.0626, main
369, 09647SI, 0.2008, upar
369, 09583D9, 3.0408, main
369, 09647SJ, 0.2008, upar
369, 09583E0, 3.0408, main
369, 09583C8, 3.0408, main
369, 09583F9, 3.0408, main
369, 09647SK, 0.2008, upar
369, 09647SL, 0.2008, upar
369, 09583D5, 3.0408, main
09583: The Great Observatories Origins Deep Survey: Imaging with ACS
09442: Optical Counterparts for Low-Luminosity X-ray Sources in Omega Centauri
09500: The Evolution of Galaxy Structure from 10,000 Galaxies with 0.1<z<1.2
09454: The Nature of the UV Continuum in LINERs: A Variability Test
09656: Stability of the ACS CCD: geometry, flat fielding, photometry
09290: The Morphological, Photometric, and Spectroscopic Properties of Intermediate
Redshift Cluster Galaxies:
09075: Cosmological Parameters from Type Ia Supernovae at High Redshift
09379: Near Ultraviolet Imaging of Seyfert Galaxies: Understanding the Starburst-AGN
Connection
09425: The Great Observatories Origins Deep Survey: Imaging with ACS
09558: ACS weekly Test
09647: CCD Daily Monitor Part I
09480: Cosmic Shear With ACS Pure Parallels
09490: Stellar populations in M101: X-ray binaries, globular clusters, and more
09658: ACS Earth Flats
09649: ACS internal CTE monitor
09700: 2002 Leonid Observations
09472: A Snapshot Survey for Gravitational Lenses among z >= 4.0 Quasars