Camera Control System and Data Flow
Internal review, Oct 14, 2008
Stuart Marshall
Mike Huffer*, Terry Schalk, Jon Thaler
Outline
• CCS Definition
• CCS Scope
• Control Architecture
• Data Flow Architecture
Camera Control System
• The CCS maintains the state of the camera as a whole, and is thus able to orchestrate the sequence of operations that enables data collection.
• It receives status information from the camera subsystems and is able to detect and report faults. (This does not include safety-related error detection, which is the responsibility of the individual subsystems.) (A minimal sketch follows this slide.)
• It provides the camera interface to the observatory control system (OCS), responding to commands from, and reporting status information to, the OCS.
• It is the interface between the human operators and the instrument. It can respond to reconfiguration commands, ensuring orderly transitions between different modes of camera operation.
• It provides the hardware and software necessary to receive the data streams generated by the camera and transmit them to downstream clients, such as the Data Management system (DM).
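A minimal sketch of the status-aggregation and fault-reporting role described above, assuming hypothetical subsystem and OCS interfaces (none of the names below are actual CCS APIs):

    # Hypothetical sketch: the CCS collects subsystem status and reports faults to the OCS.
    from dataclasses import dataclass

    @dataclass
    class Status:
        subsystem: str      # e.g. "FCS", "SCU", "VCS"
        healthy: bool
        detail: str = ""

    class CameraControl:
        def __init__(self, ocs_link):
            self.ocs = ocs_link          # object with a report(msg) method (assumed)
            self.latest = {}             # last status seen per subsystem

        def on_status(self, status: Status):
            """Called whenever a subsystem publishes status."""
            self.latest[status.subsystem] = status
            if not status.healthy:
                # Fault detection and reporting; safety actions stay in the subsystem itself.
                self.ocs.report(f"FAULT {status.subsystem}: {status.detail}")

        def camera_state(self) -> str:
            """The camera-wide state is derived from the aggregate of subsystem states."""
            return "FAULT" if any(not s.healthy for s in self.latest.values()) else "OK"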
Camera Systems Locations
[Site layout diagram: approximate routing of utility runs from the camera on the telescope (hard and flex sections), passing the telescope pier, platform, lift shaft, and cable wrap to the camera utilities; suite of camera maintenance rooms on the east side of the service & ops building (clean room, white room, test/staging, mirror coating, data, control, entry, office, mechanical). (Nordby)]
CCS Baseline Architecture
[Architecture diagram: camera subsystems built on a common "framework" (library of classes, documentation, Makefiles, code management) exchange telemetry and messages {cmd, telemetry, alarms} over an intelligent CCS network. Mediation, logging, alarms, and consoles sit on the same network, along with the OCS and an engineer's console. Developer and user work with the same GUI tools; user scripting is supported.]
Subsystems mapping to Managers
[Diagram: the OCS exchanges commands and responses with the MCM, which in turn drives the subsystem managers. Every arrow has an interface at each end; red arrows are a CCS group responsibility. Subsystems that produce data (e.g. SAS with its SDS manager, and similarly WFS/WDS and GSS/GAS, see next slide) also deliver data to DM. Subsystems that do not produce data, only status information (e.g. FCS, and similarly TSS, RAS, SCU, VCS, and L2U), report through their managers alone.]
Camera Control Architecture
[Block diagram: the Camera Control (CCS), in the control room, connects over the observatory buses to the OCS and over the camera buses to the camera subsystems, exchanging commands and status. Subsystems shown, grouped under Auxiliary systems, Camera Body, and Cryostat: Thermal (T5U), Thermal (T3U,T4U), Science DAQ (SDS), Science array (SAS), Vacuum (VCS), Shutter (SCU), WF DAQ (WDS), Wave Front (WFS), Filters (FCS), Guide Analysis (GAS), Guide array (GSS), Power/Signal (PSU), TCM, FP actuation (FPU), Thermal (T1U,T2U), Raft Alignment (RAS).]
CCS architecture
• The CCS architecture is described in DocuShare (Document-4083).
• In a nutshell, the CCS consists of a Master Control Module (MCM) and a collection of devices. This is very similar to the TCS SW, which should simplify the system and lessen the maintenance burden.
• Ground rules / requirements...
  * The control system is built on nodes in a tcp/ip network (ethernet?).
  * Each device (eg, shutter) must be capable of supporting an API to the CCS network. The CCS will provide:
    . Device APIs that support a limited number of language bindings.
    . MCM adaptors to the device controllers that enable device diagnostics using the developer's language binding.
    (A minimal device-side sketch follows this list.)
  * All devices must be individually addressable.
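A minimal sketch of what a device-side binding to the CCS network could look like, assuming a simple line-oriented TCP command interface; the port, message text, and class names are illustrative assumptions, not the actual CCS device API:

    # Hypothetical sketch: an individually addressable device node listening on the CCS network.
    import socketserver

    class DeviceHandler(socketserver.StreamRequestHandler):
        """Handles one CCS connection; commands are single text lines (assumed framing)."""
        def handle(self):
            for raw in self.rfile:
                cmd = raw.decode().strip()
                if cmd == "status":
                    self.wfile.write(b"ack idle\n")
                elif cmd == "open":          # e.g. a shutter-like device
                    self.wfile.write(b"ack opening\n")
                else:
                    self.wfile.write(b"nak unknown-command\n")

    if __name__ == "__main__":
        # Each device gets its own address:port on the private camera network.
        with socketserver.TCPServer(("0.0.0.0", 5000), DeviceHandler) as server:
            server.serve_forever()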
CCS architecture II
  * One device per physical and logical connection. Multiple connections means multiple devices.
  * Devices must always be listening (no dead time).
  * Each device must have a watchdog timer and a well-defined response to communications loss.
  * Every device must implement at least German's minimal state machine model (CTIO talk):
      [Offline] <-> [Active] <-> [Idle]
              <-> [Error] <->
      (not all transitions shown here)
    (A sketch of such a device state machine, with its standardized packets, follows this list.)
  * There will be a standardized packet format, accommodating:
    . Commands (ack/nak)
    . Responses
    . Asynchronous publication (status, config params, ...)
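A sketch of the minimal state machine with watchdog and ack/nak replies, assuming plain-text packets; the state names follow the slide, while method names, message text, and the timeout value are assumptions for illustration:

    # Hypothetical sketch: minimal device state machine with watchdog and ack/nak replies.
    import time

    STATES = {"Offline", "Idle", "Active", "Error"}
    ALLOWED = {                      # legal transitions (not all drawn on the slide)
        ("Offline", "Idle"), ("Idle", "Offline"),
        ("Idle", "Active"), ("Active", "Idle"),
        ("Idle", "Error"), ("Active", "Error"), ("Error", "Idle"),
    }

    class Device:
        WATCHDOG_S = 10.0            # assumed communications-loss timeout

        def __init__(self):
            self.state = "Offline"
            self.last_heard = time.monotonic()

        def handle_packet(self, packet: str) -> str:
            """Commands get ack/nak; status is published asynchronously elsewhere."""
            self.last_heard = time.monotonic()   # any traffic feeds the watchdog
            verb, _, arg = packet.partition(" ")
            if verb == "goto" and (self.state, arg) in ALLOWED:
                self.state = arg
                return f"ack {self.state}"
            return "nak bad-transition" if verb == "goto" else "nak unknown-command"

        def check_watchdog(self):
            """Well-defined response to communications loss: drop into Error."""
            if time.monotonic() - self.last_heard > self.WATCHDOG_S and self.state != "Offline":
                self.state = "Error"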
CCS architecture III
  * Devices must know the time. The MCM is the CCS time server.
  * Except for primary data flow, only the MCM has an external interface.
    . The MCM must interface with the OCS/FDB.
    . The human interface to CCS is through the MCM.
    . GUI implementations must support scripting (see the sketch after this list).
    . The CCS control API must be able to support the same GUI that the OCS/TCS operators use.
  * Long term maintenance:
    Collaborators will develop, deliver, and debug their SW. Long term (eg, for 10 years), the SW must be widely maintainable by people who did not participate in the original code writing.
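A sketch of what operator scripting through the MCM could look like; the client module, its function names, and the command strings are assumptions made purely for illustration, not an existing CCS API:

    # Hypothetical sketch: a user script driving the camera through the MCM control API.
    # (Everything imported or called here is assumed, not an existing CCS interface.)
    from ccs_mcm import connect            # assumed client library

    mcm = connect("mcm.camera.private")    # only the MCM is externally visible
    mcm.send("fcs goto-filter r")          # command a subsystem through the MCM
    mcm.send("scu expose 15.0")            # e.g. open the shutter for 15 s
    for line in mcm.telemetry("scu"):      # the same calls a GUI would script
        print(line)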
CCS architecture IV
• Implementation issues:
  * Implementation decisions must take into consideration possible incompatibilities and the desire for homogeneity: hardware platforms, operating systems, programming languages.
  * Code repository = Subversion
  * Bug tracking and wiki; code reuse
  * Test stand needs:
    . CCS needs to control various environmental quantities that OCS will do on the mountain.
    . We will need mock-up OCS, FDB, etc.
    . This year's test stand SW needs to work, but not necessarily the final implementation.
    . How many CCS's will we really need on the mountain (eg, clean room)? How capable must they be?
  * How will we implement alarms? We should have a single person responsible for this.
  * How will we implement displays? How many different types?
Data Flow: Science DAQ System (SDS)
Processes, buffers & publishes science data to both telescope and DM systems
• Science Array (21 center rafts)
  – ~3.2 Gpixels readout in 1-2 seconds
• Wave-Front-System (WFS) (4 corner rafts)
• Guider (4 corner rafts)
[Diagram: the camera connects to the SDS over a wire protocol optimized for small footprint, low latency, and robust error correction; the SDS connects to the data management & telescope systems over a commodity wire protocol (Ethernet), under the Camera Control System (CCS).]
• An equally important requirement is to support camera development & commissioning
  – This means support for test-stand activity.
• Why is this support necessary?
  – reuse
  – allows early field testing
  – preserves across development, commissioning & operation:
    • camera team's software investment
    • camera operational experience
Irreducible functions
• Convert read-out representation to physical representation
• Manage and perform electronics cross-talk correction for the transient analysis (see the sketch after this list)
• Mount the server side of the client image interface
  – Stream data to DM
• Mount CCS interface
• Manage and perform image simulation
• Support in-situ monitoring & diagnostics
• Provide image buffering, used for…
  – "3 day" store
  – flow-control (client interface)
  – simulation
  – monitoring & diagnostics
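A minimal sketch of the kind of linear cross-talk correction referred to above, assuming a per-amplifier coupling-coefficient matrix; the coefficients and array shapes are placeholders, not measured camera values:

    # Hypothetical sketch: subtract a linear combination of the other amplifiers' signals.
    import numpy as np

    def crosstalk_correct(amps: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
        """
        amps   : (n_amp, ny, nx) raw amplifier images
        coeffs : (n_amp, n_amp) coupling matrix, coeffs[i, j] = leakage of amp j into amp i
                 (diagonal is zero; values would come from calibration)
        """
        n_amp = amps.shape[0]
        corrected = amps.astype(float)        # work in float regardless of raw integer dtype
        for i in range(n_amp):
            for j in range(n_amp):
                if i != j:
                    corrected[i] -= coeffs[i, j] * amps[j]
        return corrected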
SDS Summary
• Architecture is now relatively mature
  – Functional prototypes are in-hand
  – Performance and scaling have been demonstrated to meet the performance requirements
• Focus has now shifted to the definition & development of the client interface. However:
  – A significant amount of interface work remains between camera electronics and the input side of the SDS
  – The specification (& implementation!) of necessary diagnostics is a work in progress
    • A camera-centric activity, however…
    • observatory could well add to these requirements…
• There is more than one SDS…
  – Support for "test-stands"
  – WFS & Guider activities
  – Full camera commissioning & operation
    • "Building 33"
    • Mountain-top
  – The SDS design is intended to encompass and support all these activities…
The SDS & its data interfaces
[Diagram: the camera's guider array, WFS array, and science array all feed the SDS. The SDS delivers guider data to a guider data processing system and WFS data to a WFS data processing system (both going to the telescope), and image data to an image data processing system in the data management system.]
The test-stand crate & its interfaces
[Diagram: a test-jig (sensor/raft/electronics etc.) feeds 1-8 channels into an SDS crate. The crate connects over 1-2 gbit (copper) and 1-2 x 10 gbit (fiber) links to a router/switch serving four Linux hosts used for control, analysis & long-term archival.]
Using the development board
[Photo: development board showing the fiber/transceiver, USB, general purpose I/O (LVDS), 1 GE connections, and the FPGA (FX020).]
Interface ICDs actively being developed
• We have had several workshops to develop a consistent set of expectations across the connected systems:
  – TCS in Chile in 2007
  – Electronics in Boston and at SLAC in 2007 & 2008
  – DM and OCS in Tucson in 2008
  – French group (filter subsystem)
• Coming up:
  – DM 2-day workshop (Nov)
  – TCS workshop (Nov - Dec)
COMPUTER ROOM - Mountain
[Network diagram: the science image flow leaves the SDS in the computer room over 4 x 10 Gb fibers into a Cisco Layer 3 switch with a 1.4 Tb backbone. The switch also serves: the SDS links to the staging area, clean room, etc. (image or part image); the SDS-computer room link to the telescope (second image ????); 10 GigE lines to the central mountain router and to the base; TCS and OCS; and a 48 port 100/1000 baseT blade for replicating control rooms, TCS/OCS, and all other services, videos, phones, etc.]
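A rough consistency check on the 4 x 10 Gb figure, assuming about 2 bytes per pixel for the ~3.2 Gpixel science array quoted earlier (the byte depth is an assumption here):

    \[
      3.2\times10^{9}\,\mathrm{px}\times 2\,\mathrm{B/px}\approx 6.4\,\mathrm{GB}\approx 51\,\mathrm{Gb\ per\ image},
      \qquad \frac{51\,\mathrm{Gb}}{2\,\mathrm{s\ readout}}\approx 26\,\mathrm{Gb/s}
    \]

so a single 10 Gb link could not keep pace with a readout-rate transfer, while the aggregated 4 x 10 Gb fibers can, with headroom; SDS buffering also decouples the link rate from the instantaneous readout rate.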
Backup Slides
Defining Subsystems
Master Control Module
The MCM controls all other CCS subsystems via messages passed over tcp/ip networking on a private camera net. The MCM is the only "device" visible to the OCS and represents the camera as a whole.
Power control - utility room
This subsystem enables, monitors, and sequences power sources located in the ground utility room. It includes switches, relays, and logic to provide appropriate sequencing and interlocks between power sources. In particular, we might not power the heat exchanger if the insulating vacuum system is not functioning (a minimal interlock sketch follows).
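A minimal sketch of that kind of sequencing interlock, purely illustrative; the device names and the single rule stand in for the real interlock logic:

    # Hypothetical sketch: software interlock for power sequencing in the utility room.
    def may_power_heat_exchanger(insulating_vacuum_ok: bool) -> bool:
        """Example rule from the text: no heat exchanger power without insulating vacuum."""
        return insulating_vacuum_ok

    def apply_power_sequence(relays, sensors):
        # relays: object with enable(name)/disable(name); sensors: dict of booleans (both assumed)
        if may_power_heat_exchanger(sensors["insulating_vacuum_ok"]):
            relays.enable("heat_exchanger")
        else:
            relays.disable("heat_exchanger")   # fail safe if the prerequisite is lost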
Power control - camera
This subsystem enables, monitors, and sequences power sources located in the utility trunk volume. It includes switches, relays, and logic to provide appropriate sequencing and interlocks between power sources.
Filter Changer (FCS)
The filter changer is a self-contained set of 5 filters, electromechanical motion devices, and a controller.
Defining Subsystems (cont’d)
Shutter
Assuming a two-blade shutter instrumented for position vs. time measurements.
Focal Plane Thermal Control
Controls the temperature of the focal plane assembly. It must monitor temperatures and heater settings in each of the 25 rafts. Inputs are temperatures reported by the science array via the TCM (or possibly the SDS), and output is to the RCMs via the TCM (or again possibly the SDS). (A minimal control-loop sketch follows.)
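A minimal sketch of a per-raft heater control loop of the sort described, assuming a simple proportional-integral controller; the setpoint, gains, and interface names are illustrative only:

    # Hypothetical sketch: per-raft proportional-integral heater control.
    class RaftHeaterLoop:
        def __init__(self, setpoint_k=173.0, kp=2.0, ki=0.05):   # assumed values
            self.setpoint = setpoint_k
            self.kp, self.ki = kp, ki
            self.integral = 0.0

        def update(self, measured_k: float, dt_s: float) -> float:
            """Return a heater power fraction (0-1) from the latest raft temperature."""
            error = self.setpoint - measured_k
            self.integral += error * dt_s
            power = self.kp * error + self.ki * self.integral
            return min(max(power, 0.0), 1.0)      # clamp to the heater's range

    # One loop per raft; temperatures arrive via the TCM (or possibly the SDS).
    loops = {raft_id: RaftHeaterLoop() for raft_id in range(25)}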
Cryogen control (camera)
Cryogen is delivered from the utility room to the valve box in the utility trunk. Actuated valves (at least 4) control the flow of cryogen to the cold plate (BEE heat sink) and the cryoplate (CCD/FEE heat sink). This system controls and monitors this flow. Sensors for air pressure (for the actuated valves) and nitrogen gas purge may be needed. Temperature sensors in the lines, cold plates, and valve boxes are also likely.
The output temperatures from this system would be visible to the cryogen delivery system on the ground and could be used to bring the CCD temperature control system within the limits of the focal plane heaters.
Defining Subsystems (cont’d)
Cryogen control (ground)
Comprises 3 Polycold refrigeration systems, a pumping system, a heat exchanger, a storage reservoir, and round-trip transport lines. This subsystem provides monitoring and control of the ground cryogen supply as a whole. It must be able to start/stop and adjust these devices during operation.
Cryostat temperature monitoring
This system provides monitoring of all temperature sensors in the cryostat which are not part of the cryogen flow or science array systems.
Camera body Thermal Control
This is environmental control and monitoring of the camera body, including all areas outside of the cryostat: in particular the L1/L2 chamber, the L3/shutter/filter zone, and the carousel zone.
The system consists of a temperature-regulated supply (from the ground) of dry nitrogen gas feeding a fan coil unit with a supply/return line for chilled water. The fan coil circulates the dry N2 throughout the volume, absorbing heat in the chilled water coils.
Operation depends upon the supply of dry N2 (low pressure) and chilled water (telescope utility). Temperature sensors in the camera body skin and components will be inputs, with the system adjusting either fan speeds or water flow rates to regulate the skin temperature of the system.
Defining Subsystems (cont’d)
Utility Trunk Thermal Control
This is environmental control and monitoring of the utility trunk.
The system consists of a temperature-regulated supply (from the ground) of dry nitrogen gas feeding a fan coil unit with a supply/return line for chilled water. The fan coil circulates the dry N2 throughout the volume, absorbing heat in the chilled water coil.
Operation depends upon the supply of dry N2 (low pressure) and chilled water (telescope utility). Temperature sensors in the utility trunk components will be inputs, with the system adjusting either fan speeds or water flow rates to regulate the skin temperature of the system.
Cryostat vacuum control
Two turbo-pumps with backing from the utility room. Operation would assume
the backing pumps are on (with verification). Two or more vacuum gauges and
possibly an RGA would be used to monitor the vacuum quality. Interface to
turbo pumps unknown at this time but may include health/safety data from the
pump itself.
Defining Subsystems (cont’d)
Utility room vacuum control
The on-ground utility room vacuum system provides three distinct vacuum
systems:
- backing vacuum for the cryostat
- backing vacuum for the valve box
- insulating vacuum for the cryogen supply system lower portion.
These systems will include some number of gauges and valves (manual) and
also a N2 (nitrogen) gas purge supply with some monitoring.
Valve box vacuum control
This system comprises one turbo pump and some gauges and is backed by the
utility room backing pump system (shared with the cryostat vacuum but they
can be manually valved off).
Sensors/gauges for monitoring both the vacuum and the pump/valves would
also be included in this system.
Science array (TCM)
Timing and Control Module (TCM) distributes commands and timing to the ~21
raft controllers. The TCM delivers raft status to the CCS.
Defining Subsystems (cont’d)
Science Data Acquisition (SDS)
The SDS is the data acquisition and distribution system for science data. It receives the data stream from the CCDs and performs preliminary processing, buffering, and temporary storage of the science data. While under the control of the CCS, the SDS also provides services that go beyond, and are asynchronous with, the camera readout.
Guide sensor (TCM)
Timing and Control Module (TCM) distributes commands and timing to the 4
guide controllers. The subsystem module (this object) instantiates the states
that represent the guider system.
Guider Data Acquisition
This is the portion of the SDS devoted to guiding. It nominally operates while the science array is integrating. The real-time analysis of the guider data may also take place in the SDS, which then outputs offset parameters that the telescope control system uses to control the telescope tracking. (A minimal sketch of such an offset computation follows.)
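A minimal sketch of the kind of guider analysis mentioned above, turning a guide-star centroid shift into offset parameters; the algorithm choice (a plain flux-weighted centroid) and the plate scale are assumptions, not the actual guider analysis:

    # Hypothetical sketch: flux-weighted centroid of a guide-star cutout -> tracking offsets.
    import numpy as np

    def centroid(cutout: np.ndarray):
        """Flux-weighted centroid (x, y) in pixels."""
        total = cutout.sum()
        ys, xs = np.indices(cutout.shape)
        return (xs * cutout).sum() / total, (ys * cutout).sum() / total

    def guider_offsets(cutout, ref_xy, arcsec_per_pixel=0.2):   # plate scale assumed
        """Offset parameters (arcsec) to feed to the telescope control system."""
        x, y = centroid(cutout)
        return ((x - ref_xy[0]) * arcsec_per_pixel,
                (y - ref_xy[1]) * arcsec_per_pixel)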
Defining Subsystems (cont’d)
Wavefront sensor (TCM)
The Timing and Control Module (TCM) distributes commands and timing to the 4 wavefront sensor controllers. The subsystem module (this object) instantiates the states that represent the wavefront sensor system.
WFS Data Acquisition
The SDS is also the data acquisition and distribution system for wavefront sensor data. It receives the data stream from the sensors and performs preliminary processing, buffering, and temporary storage of the wavefront data. While under the control of the CCS, the SDS also provides services that go beyond, and are asynchronous with, the camera readout.
The primary client of this data is the telescope system. It is not expected to be routinely consumed by any other client.
Focal Plane laser spot system
This system consists of a set of laser sources and optics which project a fixed
pattern onto the focal plane for in-situ calibration procedures. Control will
essentially consist of just enable/disable and on/off.
CCS APC Paris Workshop
Introduction to the SDS
(the Science DAQ System)
Michael Huffer, mehsys@slac.stanford.edu
Stanford Linear Accelerator Center
August 27, 2008
Representing:
Mark Freytag
Gunther Haller
Ryan Herbst
Chris O’Grady
Amedeo Perazzo
Leonid Sapozhnikov
Eric Siskind
Matt Weaver
The Channel
• Physically:
  – a fiber-optic pair connected to transceivers at each end (camera and SDS)
  – transceiver choice not yet made, however…
    • SDS is relatively insensitive to this decision
• Logically:
  – Runs a camera-private protocol (an illustrative framing sketch follows this slide)
    • full duplex
    • asynchronous
    • may operate from 1 to 3.2 Gb/sec
      – we will operate at 3.2
    • provides error detection & correction
      – data not buffered on camera
    • QOS support (4 virtual channels)
[Diagram: the camera back-end electronics FPGA hosts the protocol interface; the channel runs from there to a Cluster Element.]
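Purely as an illustration of how a small-footprint framing with virtual channels and error detection could look, here is a sketch; the field layout, widths, and the use of CRC-32 are assumptions, not the actual camera-private protocol (and error correction is not shown):

    # Hypothetical sketch: pack/unpack an illustrative frame header with a CRC trailer.
    import struct, zlib

    HEADER = struct.Struct(">BBHI")   # version, virtual channel (0-3), length, sequence number

    def make_frame(vc: int, seq: int, payload: bytes, version: int = 1) -> bytes:
        header = HEADER.pack(version, vc, len(payload), seq)
        crc = zlib.crc32(header + payload)          # error *detection* only
        return header + payload + struct.pack(">I", crc)

    def parse_frame(frame: bytes):
        header, body = frame[:HEADER.size], frame[HEADER.size:-4]
        (crc,) = struct.unpack(">I", frame[-4:])
        if zlib.crc32(header + body) != crc:
            raise ValueError("corrupted frame")
        version, vc, length, seq = HEADER.unpack(header)
        return vc, seq, body[:length]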
The Cluster Element
• Is the unit of intelligence and buffering for the SDS…
  – services science data from 1-4 Channels
  – has two (2) 10-GE Ethernet MACs
  – contains up to 1 TByte of flash memory (currently 1/2 TByte)
  – contains a reconfigurable FPGA fabric (+ DSPs)
  – contains a PowerPC processor (currently @ 450 MHz) with:
    • 512 MBytes RLDRAM-II (currently 128 MBytes)
    • 128 MBytes configuration memory
  – processor operates an Open Source Real-Time kernel (RTEMS)
    • POSIX compliant interfaces
    • standard IP network stack
  – processor has a generic DMA interface which supports up to 8 GBytes/sec of I/O
  – processor has I/O interfaces to:
    • all four input channels (at full rate)
    • both Ethernet MACs (at full rate)
    • flash memory (access at up to 1 GByte/sec)
    • FPGA fabric
RCE board + RTM (Block diagram)
[Block diagram: two RCEs with their MFDs; eight media-carrier slices (slice0-slice7) hold the fiber-optic transceivers; connections run through the Zone 2 and Zone 3 backplane.]
RCE board + RTM
[Photo: the RCE board and RTM, showing a media carrier and media slice with transceivers, power conversion, the RCE itself, and the Zone 1 (power), Zone 2, and Zone 3 regions.]
Cluster Interconnect board + RTM (Block diagram)
[Block diagram: two L2 10-GE switches connect to X2/CX-4 connectors and to the Zone 2 and Zone 3 backplane, with 1-GE links to an RCE and MFD; a switch-management block sits on a management bus.]
Cluster Interconnect board
[Photo: the Cluster Interconnect board showing the X2 10 GE switch connectors (Zone 3), external Ethernet, 1 GE Ethernet (Zone 2), the management processor, and power conversion (Zone 1).]
ATCA (horizontal) crate (front)
[Photo: front view of the horizontal ATCA crate, showing a Cluster Interconnect board, the fans, and the shelf manager.]
Generic Ideas for Continued Discussion
• TCS/SOAR heritage where useful
• Subsystem boundaries defined by "tightly coupled"
• Implementing the event driven state machine model
• Communications:
  – Master/slave only
  – No peer-to-peer
  – Peer info via "subscription"??? (a minimal sketch follows this list)
• Network (tcp/ip) is only connection to subsystem
• CCS proxy modules vs. resident CCS code in subsystem
• Use of EA and UML/SysML to document design
• CCS trac/svn site exists
• Supervision of a distributed set of processes on a distributed set of nodes:
  – Startup/shutdown
  – Logging
  – Debugging/tracing/snooping tools
  – Graphical tools for monitoring
  – Error handling/restarts
  – Links to "hardware protection system"
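One way the "peer info via subscription" idea could work, with the MCM brokering published values so that subsystems never talk peer-to-peer; the broker class and topic names are assumptions for illustration only:

    # Hypothetical sketch: master/slave publish-subscribe through the MCM (no peer-to-peer links).
    from collections import defaultdict

    class McmBroker:
        def __init__(self):
            self.subscribers = defaultdict(list)     # topic -> callbacks

        def subscribe(self, topic: str, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic: str, value):
            # The publishing subsystem only talks to the MCM; the MCM fans the value out.
            for callback in self.subscribers[topic]:
                callback(topic, value)

    broker = McmBroker()
    # e.g. the thermal subsystem wants cryostat vacuum readings without a direct link:
    broker.subscribe("vcs.pressure", lambda t, v: print(f"{t} = {v}"))
    broker.publish("vcs.pressure", 1.2e-6)           # published by the vacuum subsystem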
Defining Subsystems
• Substantial updates from the mechanical engineering team
• Additions include utilities transport, utility trunk evolution, and off-telescope operations
• Review and re-parse subsystem definitions based on "subsystems are tightly coupled"
• Track definitions and functionality of subsystems in UML
• Each subsystem (CCS view) is based on a generic "controller" model