Software & Controls

ATST Software Conceptual Design
ATST Conceptual Design Review
26 Aug 2003
Presentation Structure
• Introduction
– Approach
– Things to watch for
• Requirements
• Functional design overview
• Technical design overview
• Virtual Instrument Model
Design Architectures
[Diagram: Requirements → Functional Design → Technical Design → Implementation; Behavior]
Special points
• Configurations
– How observations are modeled in system
• Virtual Instrument Model
– Provides flexibility in laboratory-style operations
• Device Model
– Uniform implementation and control of devices
• Container/Component Model
– Flexibility in a distributed environment
Key Science Requirements
• Combine multiple post-focus instruments
– Operate simultaneously
– Coordinated observing with remote sites
• Match flexibility and adaptability achieved by DST
– Support ‘laboratory-style’ operation (modular instruments)
– Support visitor instruments
– < 30 min switching time between active instrument sets
• >40 year lifetime
• Massive data rates
• Track on/off solar disk (up to 2sr off)
Software Requirements
[Diagram: Science Requirements → Software Requirements → Software Design, tempered by Common Sense and Reality]
What types are there?
• Functional – what must the system do?
• Performance – how well must the system run?
• Interface – how does the system talk with the outside?
• Operational – how is the system to be used?
• Documentation – how is the system to be described?
• Security – who/what can do what/when?
• Safety – what can’t go wrong?
Functional Design
• Purpose
– Focus on behavior and structure (what and why)
– Measure against requirements
• Use cases/Overall design
• Information flow
• Principal Systems
– Observatory Control System (OCS)
– Data Handling System (DHS)
– Telescope Control System (TCS)
– Instrument Control System (ICS)
Overall Design Approach
• Want to adapt a conventional modern observatory
software architecture to the special needs of the ATST
– Avoid re-invention, but…
– Concentrate on multiple instruments operating simultaneously in
a laboratory environment
– Flexibility is a key requirement of the functional design
• Overall functional design derived from ALMA, Gemini,
and SOLIS, with consideration from other projects as
well.
– All share a common overall structure (with wildly different
implementations)
– All highly distributed with strong communications infrastructure
Overall Design
[Block diagram: user interfaces over the four principal systems, all resting on the core software. OCS (UIs, coordination, planning, services); DHS (science data collection, quick look); TCS (enclosure/thermal, optics, mount, GIS); ICS (virtual instruments)]
Information flow
[Diagram: an experiment is expressed as observations and configurations; configurations flow through a virtual instrument to its components, and from components on to devices, drivers, and hardware; data flows back from the components to quick look and the engineering archives]
Operating Characteristics
• Distributed architecture using a communications bus
– Components can be placed anywhere (and moved as needed)
– Device placement may be constrained by device-driver requirements
– Components locate each other by name, not location
– Language and other environment choices are independent of behavior
– Behavior separated from control by a control surface
• Communications bus
– Multiple channels
– Provides inter-language peer-to-peer communication
– Locates components and provides connection handles (see the sketch below)
– Monitors connectivity (detects communication failures)
– High-speed, robust
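To illustrate name-based location on the communications bus, here is a minimal Java sketch: a client resolves a peer by its hierarchical name and never mentions a host or process. The ConnectionService and Connection interfaces and the component name are hypothetical, not part of any selected middleware.

```java
// Hypothetical sketch only: none of these interfaces are defined ATST APIs.
// The point is that a caller names a peer component (e.g. "atst.tcs.mcs")
// and never specifies a host, process, or language.

class ConnectionException extends Exception {
    ConnectionException(String message) { super(message); }
}

interface Connection {
    // Submit a set of attribute name/value pairs to the peer.
    void submit(java.util.Map<String, String> configuration);
    // The bus monitors connectivity; callers can check that the peer is reachable.
    boolean isAlive();
    void close();
}

interface ConnectionService {
    // Resolve a hierarchical component name to a live connection handle.
    Connection connect(String componentName) throws ConnectionException;
}

class MountClient {
    private final ConnectionService bus;

    MountClient(ConnectionService bus) { this.bus = bus; }

    void sendDemand() throws ConnectionException {
        // Location is irrelevant: the MCS may run on any host attached to the bus.
        Connection mcs = bus.connect("atst.tcs.mcs");
        mcs.submit(java.util.Map.of("az", "120.0", "el", "45.0"));
        mcs.close();
    }
}
```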
Observatory Control System (OCS)
• Roles:
– Construct sequence of configurations for each observation
– Coordinate operation of TCS, ICS, and DHS
– Provide user interfaces for operations
– Provide services for applications
– Provide ATST Common Software for all systems
Data Handling System
• Roles
– Accumulate science data (including header information)
• handle data rates
• handle data volumes
– Analyze data for system performance (quick look)
– Provide archival, retrieval and distribution services
Telescope Control System
Requirements
• Coordination and control of telescope components
– Interface to the Observatory Control System
– Configuration management
– Safety interlock handling
• Ephemeris, pointing, and tracking calculations.
– Time base control and distribution
– Pointing models
– Target trajectory distribution
• Image quality
– Active and adaptive optics management
– Thermal management
Telescope Control System
• Subsystems
– M1 Control
– M2 Control
– Feed Optics Control
– Adaptive Optics Control
– Mount Control
– Enclosure Control
[Diagram: the TCS over its subsystems (M1, M2, Feed Optics, Adaptive Optics, Mount, Enclosure), providing acquisition, track, and guidance]
• General Systems
– Time base
– Global Interlock
– Thermal Management
Telescope Control System
• High-Level Data Flows
– OCS configurations
– ICS configurations
– TCS events and archiving
• Low-Level Data Flows
– Subsystem configurations
– Trajectories
– Image quality data
– Interlock events
[Diagram: the OCS and ICS send configurations to the TCS and receive events; the TCS distributes configurations, trajectories, and interlock signals to the FOCS, ECS, MCS, M1CS, AOCS, and M2CS, and collects image quality data and global interlocks]
Telescope Control System
• Virtual Telescope Model
– Tip of the hat to Pat Wallace (DRAL).
– Several points of view (Instruments, AO WFS, aO WFS).
• Pointing and Tracking
– Off-axis telescope should be irrelevant.
– Tracking will be at solar rate.
– Coordinate systems include ecliptic and heliocentric.
– Limited references to build pointing map (1 object/6 months).
– Open-loop tracking for coronal and non-AO work.
– Closed loop tracking uses AO as guider.
• Thermal Management
– Daily thermal profile.
– Monitor heat loads and dome flushing.
Mount Control System
Pointing and Tracking
• All ephemeris and position calculations are done by the TCS.
• The MCS follows a 20 Hz trajectory stream provided by the TCS.
• This stream consists of (time, position) values that the MCS must follow (see the sketch below).
• Current position, demand position, torques, and rates are output at 10 Hz.
Thermal Management
• The MCS must keep the mount structure at ambient temperature.
• May be provided by a separate controller (TBD).
Interlocks
• Interlock conditions cause a power shutdown, brakes on, cover closed.
• Caused by: GIS, over-speed, over-torque, mechanical obstruction (locking pin, manual drive crank, liftoff failure).
• Provided by a separate controller (PLC-based) that is always operational.
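A minimal sketch of the trajectory-following idea above, assuming a simple (time, position) sample type: the MCS accepts 20 Hz samples from the TCS and interpolates the demand position for its servo loop. Class and method names are invented for illustration; no servo or timing detail is implied.

```java
// Illustrative only: a (time, position) sample as delivered in the 20 Hz TCS
// trajectory stream, and a follower that interpolates the demand position.
final class TrajectorySample {
    final double timeSec;
    final double positionDeg;
    TrajectorySample(double timeSec, double positionDeg) {
        this.timeSec = timeSec;
        this.positionDeg = positionDeg;
    }
}

class TrajectoryFollower {
    private TrajectorySample previous;
    private TrajectorySample next;

    // Called at 20 Hz as samples arrive from the TCS.
    synchronized void accept(TrajectorySample sample) {
        previous = next;
        next = sample;
    }

    // Called by the servo loop: linear interpolation between the two most
    // recent samples gives the demand position at time t.
    synchronized double demandAt(double t) {
        if (previous == null || next == null) {
            throw new IllegalStateException("trajectory stream not yet primed");
        }
        double span = next.timeSec - previous.timeSec;
        double frac = (span > 0.0) ? (t - previous.timeSec) / span : 1.0;
        return previous.positionDeg + frac * (next.positionDeg - previous.positionDeg);
    }
}
```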
Mount Control System
Servo Requirements
Pos = 3 arcsec
Vslew = TBD °/sec
Vtrk = TBD °/sec
Aslew = TBD °/sec2
Atrk = TBD °/sec2
Jitter = TBD °/sec
[Diagram: trajectory summing and smoothing combines the azimuth, elevation, and coudé trajectories (20 Hz) with bias inputs, including the M2 bias (0.01 Hz)]
M1 Mirror Control System
• Axial Support: blending and averaging AO information, applying force map, correcting servo feedback.
• Mirror Position: detecting translation and rotation errors (feedback to actuators?).
• Thermal Management: controlling temperature, applying thermal profile estimates.
• Controller: interfacing to TCS & GIS, simulator.
[Diagram: the M1CS controller interfaces to the TCS, GIS, and AOCS; it manages the axial support (actuators, force sensors, force map), mirror position (translation and rotation sensors), thermal management (thermal profile, blowers, coolers, exchangers), aperture stop, interlocks, external interfaces, and a simulator]
M2 Control System
Tip-Tilt-Focus
• Base configuration set by TCS.
• Corrections from AOCS in 10 Hz stream.
• Blending data.
• Conversion into off-axis translation and rotation.
[Diagram: the TCS supplies the base configuration; the AOCS supplies aO & AO TTF corrections (10 Hz) and a tip-tilt offset (0.01 Hz); the M2CS blends these and handles thermal management, with a connection to the MCS]
Thermal Management
• Secondary Mirror
• Heat Stop
• Lyot Stop
[Diagram: conversion into X/Y/Z translation and X/Y/Z rotation]
Feed Optics Control System
Small Mirrors
• M3: Gregorian feed.
• M7: Coudé feed.
Other Optics
• aO WFS beamsplitter
• Polarizers
• Filters?
Thermal Management
• Mirror Cooling
• Tube Ventilation
• Coudé entrance
Adaptive Optics Control System
[Diagram: the AOCS connects the aO WFS (active optics system) and the AO WFS to the tip-tilt mirror (TTM, M6) and deformable mirror (DM, M5) at 2 kHz; offloads go to M2 (0.01 Hz), M1 (0.1 Hz), and the mount (10 Hz); command, event, and data channels connect to the OCS and DHS on demand]
Adaptive Optics Control System
[Diagram: the combined Adaptive Optics/Active Optics Control System runs the adaptive optics loop at ~2 kHz (WFS, deformable mirror M5, tip-tilt mirror M6) and the active optics loop at ~10 Hz (WFS); offloads include the TTF bias (~10 Hz), the M2 tip-tilt bias (~0.01 Hz), the M1 low-order figure (~0.1 Hz), and the mount]
Enclosure Control System
• Azimuth: drives,
brakes, encoders,
sensors.
• Shutter: drive, brakes,
encoders, sensors, sun
shade
• Auxiliary: vent gates,
thermal controller,
cranes
• Controller: interface,
interlock, simulator
[Diagram: the ECS controller interfaces to the TCS, GIS, and MCS; azimuth and shutter subsystems (drives, brakes, encoders, sensors, sun shade), auxiliary subsystems (vent gates, thermal management, cranes), interlocks, external interfaces, and a simulator]
Instrument Control System
Requirements
• Laboratory/Experiment Style Observing
– Flexible instrument configuration
– Use of multiple components
• Instruments
– Facility instruments follow all ATST interfaces
– Visitor instruments obey a minimal set of ATST interfaces
– Multiple instruments must work together.
• Telescope
– Control the beam position
– Control modulators, AO, and other image modifiers
• Development
– Several development locations with numerous partners.
Instrument Control System
Management
• Controls the lifecycle of virtual instruments
• Allocates components
Interface
• Controls configurations from the OCS
• Provides user interfaces
• Presents TCS and DHS as resources.
Development
• Provides standard instrument as template
[Diagram: the ICS sits between the OCS, TCS, and DHS and manages the virtual instruments (NIrSP, ViSP, VisTF), each assembled from components drawn from a pool of available components]
Instrument Control System
Management Lifecycle
1. Select from list of components
2. Build a VI or retrieve an existing VI
3. Register VI with ICS
4. Submit configuration to OCS
5. OCS schedules configuration
6. ICS enables your VI
7. VI takes control of components
8. Interact with your VI [Engineering Mode]
[Diagram: the numbered steps traced across the OCS, the ICS, "My VI", and the pool of available components]
Technical Design Overview
• Purpose
– Focus on implementation issues (how)
– Must allow implementation of functional design
– Identify options, make choices
• Tiered hierarchy
– Isolates technology layers
– Allows technology replacement
ATST Technical Architecture
[Diagram: tiered technical architecture. Applications (admin apps, UIF libraries, data handling support, archiving system, astro libraries); high-level APIs/tools (component DB, configuration system, error system, log, data channels, time system, device drivers); integrated APIs/tools (application framework, services, container support, core support, alarm system, scripting support); base tools (APIs & libraries, development tools, communications middleware)]
Communications
• Communications Bus
• Notification Service
• Synchronous Communications
• Asynchronous Communications
Services
• Logging
• Events
• Alarms
• Connection
• Persistent Stores
Interfaces
• Logically separated into three classes:
– Lifecycle (startup/reset/shutdown of components)
– Functional (command/action/response - behavior)
– Service access (connection, log, event, alarm, stores)
• Functional interfaces define accessible behavior
– Physically extend the lifecycle interface
– Narrow interfaces (few commands used)
– All devices use a common interface
• Formally specified and enforced using communications
middleware
Container/Component Model
• Industry standard approach to distributed
operations (.NET, EJB, CORBA CCM, etc).
• Components implement functionality
• Containers provide services to components and
manage component lifecycle
[Diagram: a component inside a container; the component exposes a functional interface and a lifecycle interface, and uses the container's service interface]
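Taken together with the Interfaces slide, the split might look roughly like the Java sketch below. All names are placeholders for illustration, not the ATST Common Software API: a lifecycle interface, a narrow functional interface that extends it, and a service-access interface supplied by the container.

```java
// Placeholder names only: a sketch of the three interface classes.

// Lifecycle interface: every component can be started, reset, and shut down
// by its container.
interface Lifecycle {
    void startup();
    void reset();
    void shutdown();
}

// Functional interface: the component's accessible behavior. It physically
// extends the lifecycle interface and is intentionally narrow.
interface Functional extends Lifecycle {
    void submit(java.util.Map<String, String> configuration); // command/action/response
    String get(String attributeName);
}

// Service-access interface: what the container offers each component
// (connection, log, event, alarm, persistent store).
interface ContainerServices {
    void log(String message);
    void postEvent(String name, String value);
    void raiseAlarm(String description);
    java.util.prefs.Preferences persistentStore(); // stand-in for a persistent store
}

// A component exists only inside a container; the container hands it services
// and drives its lifecycle through the common base interface.
abstract class Component implements Functional {
    protected ContainerServices services;
    final void attach(ContainerServices services) { this.services = services; }
}
```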
Containers
• ATST supports two types of Containers
– Porous – Components provide direct access to their functional
interfaces
– Tight – Containers wrap component functional interfaces
[Diagram: in a tight container, an interface wrapper sits between callers and the component's functional interface]
Components
• Hierarchically named
• Can (currently) be either Java or C/C++
• Three lifecycle models for components
– Eternal: created on system start, run to system stop
– Long-lived: created on demand, run until told to stop
– Transient: exist only long enough to satisfy request
• Exist only inside containers
– Common base interface allows manipulation by container
• Critical attributes maintained in separate persistent store
Observations - 1
• Program of configurations
• Configurations flow through system: status is updated as
they are operated on by system
• Components and devices respond to configurations
• Configurations implemented as sets of attributes
(name/value pairs)
• Configurations are uniquely named and permanently
archived (as are observations)
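A configuration as described above could be modeled along the following lines; the classes are a sketch, and the uniquely named configuration and the observation-as-program structure are the only points being illustrated.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a configuration is a uniquely named set of attribute
// name/value pairs; an observation is an ordered program of configurations.
final class Configuration {
    private final String id;                       // unique, archivable name
    private final Map<String, String> attributes = new LinkedHashMap<>();

    Configuration(String id) { this.id = id; }

    Configuration set(String name, String value) {
        attributes.put(name, value);
        return this;
    }

    String id() { return id; }
    Map<String, String> attributes() { return attributes; }
}

final class Observation {
    private final String id;
    private final java.util.List<Configuration> program = new java.util.ArrayList<>();

    Observation(String id) { this.id = id; }

    void append(Configuration c) { program.add(c); }
    java.util.List<Configuration> program() { return program; }
}
```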
Observations - 2
• Observations constructed using Observing Tool
• Choice of OT is TBD (ALMA, Gemini, STScI, others)
• External representation is XML
• Configurations may be composed from other configurations and components may decompose a configuration into a set of smaller configurations (see the sketch below)
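One plausible, purely illustrative way a component could decompose a configuration is by hierarchical attribute prefix, handing each sub-device the attributes addressed to it. Nothing on this slide mandates this particular scheme.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative decomposition by hierarchical attribute prefix. Given
// {filter.position=red, filter.rate=10, stage.pos=3.2}, the "filter" and
// "stage" sub-configurations are split out and can be forwarded separately.
final class ConfigurationSplitter {
    static Map<String, Map<String, String>> decompose(Map<String, String> cfg) {
        Map<String, Map<String, String>> parts = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : cfg.entrySet()) {
            int dot = e.getKey().indexOf('.');
            String target = (dot < 0) ? "" : e.getKey().substring(0, dot);
            String local = (dot < 0) ? e.getKey() : e.getKey().substring(dot + 1);
            parts.computeIfAbsent(target, k -> new LinkedHashMap<>()).put(local, e.getValue());
        }
        return parts;
    }
}
```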
Observations - 3
• Observation managers track sets of observations as
they are operated on by the system
• Components tag header information so that each header item can be traced back to the component that generated it
Real-time Systems
• Laboratory-style operations
– Rapid setup and reconfiguration
– Engineering observations
• Inexpensive
– Existing software implementation or design
– Rapid deployment
– Code reuse
• Contractual design and development
– Easily partitioned work packages
– Common infrastructure and tools
– Simulators
Device Model
• The device model is used by all real-time components
– Other models are available for OCS services (log, notification, etc.).
• It provides a common interface to:
– High-level objects (OCS and DHS).
– Other devices.
– Low-level hardware drivers.
– Global services (log, event, database, etc.).
• It can be inherited by more complex devices.
• It forces all systems to obey the command/action model.
• It operates in a peer-to-peer environment.
Device Model
• Devices have common properties:
– Attributes: state, health, debug, and initialized.
– Command Interface: offline, online, start, stop, pause, resume, get, set.
– Services Interfaces: event, database.
• Devices have common operations:
– Initialize, power-up, check parameters, change state, execute actions,
handle errors, respond to queries, receive and generate asynchronous
events.
• Devices have common communications:
– Name/connection registration.
– Event listeners and posters.
– Databases, log, and alarm services.
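Collected as a single Java interface, the common device surface listed above might read as follows. This is a sketch of the model, not the defined ATST device interface.

```java
// Sketch of the common device surface: attributes, commands, and services.
// Not the actual ATST interface definition.
interface Device {
    // Common attributes
    String state();          // OFF, IDLE, BUSY, PAUSED, FAULT
    String health();
    boolean isInitialized();
    void setDebug(int level);

    // Common command interface. Commands return immediately; actions run
    // asynchronously under the command/action model.
    void online();
    void offline();
    void start(java.util.Map<String, String> configuration);
    void stop();
    void pause();
    void resume();
    void set(String attribute, String value);
    String get(String attribute);
}
```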
Device Command Interface
• Devices have a simple
command interface:
– Offline, online, start, stop,
pause, resume, set, and get.
• Each command moves the
device state machine to
another state:
– States are: OFF, IDLE, BUSY, PAUSED, FAULT
[State diagram: online takes OFF to IDLE; offline returns the device to OFF; start takes IDLE to BUSY; stop or action completion returns BUSY to IDLE; pause takes BUSY to PAUSED and resume takes PAUSED back to BUSY; a fault from BUSY or PAUSED takes the device to FAULT. See the transition-table sketch below.]
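The same state machine can be written as a small transition table. The reconstruction below follows the transitions named on this slide and is only a sketch; fault transitions occur asynchronously when an action fails.

```java
// Sketch of the device state machine: which command is legal in which state,
// and where it leads. set/get do not change state.
enum DeviceState { OFF, IDLE, BUSY, PAUSED, FAULT }

final class DeviceStateMachine {
    private DeviceState state = DeviceState.OFF;

    synchronized DeviceState apply(String command) {
        switch (command) {
            case "online":  if (state == DeviceState.OFF)    state = DeviceState.IDLE;   break;
            case "offline":                                   state = DeviceState.OFF;    break;
            case "start":   if (state == DeviceState.IDLE)   state = DeviceState.BUSY;   break;
            case "stop":    if (state == DeviceState.BUSY)   state = DeviceState.IDLE;   break;
            case "pause":   if (state == DeviceState.BUSY)   state = DeviceState.PAUSED; break;
            case "resume":  if (state == DeviceState.PAUSED) state = DeviceState.BUSY;   break;
            default: /* set, get */                           break;
        }
        return state;
    }

    // Called asynchronously when a running or paused action fails.
    synchronized void fault() {
        if (state == DeviceState.BUSY || state == DeviceState.PAUSED) {
            state = DeviceState.FAULT;
        }
    }
}
```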
Configurations
• All devices use configurations to transport information.
– A group of attributes and corresponding values
– e.g., a filter wheel might act upon: {position=red; rate=10; starttime=10:38:18}.
– Values may be any native data type, arrays, and lists.
• Configurations are followed throughout the system.
– Each device action is traceable to a configuration.
– Header information is reconstructed from configuration events.
• Configurations have states:
– A configuration may be in multiple states in multiple devices.
– States do not iterate.
– Final state has an associated completion code.
[State progression: Created → Queued → Running → Done]
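Using the filter-wheel example from this slide, submitting a configuration might look like the fragment below. It reuses the hypothetical Configuration and Device sketches shown earlier; the configuration name is illustrative.

```java
// Builds the filter-wheel configuration from this slide and submits it,
// reusing the hypothetical Configuration and Device sketches shown earlier.
final class FilterWheelExample {
    static void moveFilterWheel(Device filterWheel) {
        Configuration cfg = new Configuration("exp42.obs3.cfg7") // illustrative unique name
                .set("position", "red")
                .set("rate", "10")
                .set("starttime", "10:38:18");

        // The command returns immediately; the action runs asynchronously while
        // the configuration progresses Created -> Queued -> Running -> Done.
        filterWheel.start(cfg.attributes());
    }
}
```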
Inherited Devices
• The Device class is an abstract class; it needs to be inherited by
another class.
– These classes specify information and operations unique to a particular
device. They may create configuration templates, run background
tasks, handle hardware control, and generate specific information.
– For example, the MotorDevice inherits the Device class to operate
servo motors and define positions, limits, power, and brake operations.
• An extended class may itself be extended.
– The DiscreteMotorDevice inherits the MotorDevice class to provide
defined positions and name-to-position conversion.
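The inheritance chain described here might be sketched as follows. The fields, named positions, and method bodies are guesses made for illustration, not the actual class designs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative inheritance chain only; not the actual class designs.
abstract class AbstractDevice /* implements the common device interface */ {
    public abstract void start(Map<String, String> configuration);
}

// MotorDevice extends the base device to operate servo motors: positions,
// limits, power, and brake operations.
class MotorDevice extends AbstractDevice {
    protected double lowLimit, highLimit;

    public void start(Map<String, String> configuration) {
        double demand = Double.parseDouble(configuration.get("position"));
        // ... check limits, release brake, command the servo, monitor motion ...
    }
    void powerOn() { /* ... */ }
    void setBrake(boolean engaged) { /* ... */ }
}

// DiscreteMotorDevice adds defined positions and name-to-position conversion.
class DiscreteMotorDevice extends MotorDevice {
    private final Map<String, Double> namedPositions =
            Map.of("red", 10.0, "green", 20.0, "blue", 30.0); // illustrative values

    public void start(Map<String, String> configuration) {
        Map<String, String> resolved = new LinkedHashMap<>(configuration);
        Double pos = namedPositions.get(configuration.get("position"));
        if (pos != null) resolved.put("position", pos.toString());
        super.start(resolved);
    }
}
```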
High-Level Devices
• Some devices do not operate hardware, they operate other devices
or connect to high-level, non-device objects (OCS/DHS).
– The SequenceDevice runs other devices in a defined order and phase.
– The ControllerDevice executes scripts.
– The MultiAxisDevice coordinates simultaneous actions.
• High-level devices are an aggregation pattern.
– They do not override or hide the low-level devices.
– They allow the low-level devices to operate independently.
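A SequenceDevice in the aggregation style described above might look roughly like this sketch, which reuses the hypothetical Device interface from earlier. The "order and phase" handling is simplified to waiting for each member to return to IDLE.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a high-level device that aggregates other devices: it runs its
// members in a defined order but does not hide them, so each member device
// can still be operated independently.
class SequenceDevice {
    private static final class Step {
        final Device device;
        final Map<String, String> configuration;
        Step(Device device, Map<String, String> configuration) {
            this.device = device;
            this.configuration = configuration;
        }
    }

    private final List<Step> steps = new ArrayList<>();

    void add(Device device, Map<String, String> configuration) {
        steps.add(new Step(device, configuration));
    }

    // Simplified ordering: start each member and wait for it to return to IDLE
    // before starting the next.
    void start() {
        for (Step step : steps) {
            step.device.start(step.configuration);
            while (!"IDLE".equals(step.device.state())) {
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        }
    }
}
```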
Command/Action Model
Commands cause external actions to occur.
• A command will return immediately.
• The action begins in a separate thread.
• Multiple commands can be given while actions are on-going. This allows us to “stop” an action, or queue up the next “start”.
• A device will transition to the BUSY state, then either back to the IDLE state, or to the ERROR state.
• Actions return asynchronous state information (see the sketch below).
[Diagram: the command process records the configuration in shared data and returns; a synchronization mechanism hands it to the action process, which carries out the actions and posts responses and a completion]
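A minimal sketch of the command/action split, assuming invented class names: the command records the configuration and returns at once, while a single worker thread performs the action, so later commands queue naturally, and completion is reported asynchronously.

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal command/action sketch: commands return immediately; the action runs
// in a separate thread and reports its completion asynchronously.
class CommandActionDevice {
    private volatile String state = "IDLE";
    private final ExecutorService actionThread = Executors.newSingleThreadExecutor();

    // The command: mark the device busy, hand the configuration to the action
    // thread, and return without waiting.
    void start(Map<String, String> configuration) {
        state = "BUSY";
        actionThread.submit(() -> runAction(configuration));
    }

    // The action: performs the real work, then posts an asynchronous completion.
    private void runAction(Map<String, String> configuration) {
        try {
            // ... move hardware, take data, etc. ...
            state = "IDLE";
            postCompletion("done");
        } catch (Exception e) {
            state = "FAULT";
            postCompletion("error: " + e.getMessage());
        }
    }

    String state() { return state; }

    private void postCompletion(String code) {
        // In the real system this would be an event on the communications bus.
        System.out.println("action complete: " + code);
    }
}
```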
Peer-to-Peer Communications
• Devices must have flexible connections.
– Depending upon the requested operation, a device may need to
communicate with several different devices.
– Devices must not exclusively control another device.
• Peer-to-peer communications allows a loose federation of devices.
– No single-point failures, outside of the communications system and the naming service.
– Multiple federations can exist simultaneously in the system.