Howard Chivers
University of York
Practical Security for e-Science Projects
25 November 2003
This talk presents my personal perspective, not the considered view of the project or any of its partners.
But credit and thanks must go to busy developers and industrial partners who have been consistently helpful and generous with their time, and to Martyn Fletcher, who is the primary author of the study deliverables.
DAME Introduction
The Method: Dependability and Security
Stage One: System Context
Stage Two: Asset Analysis
Summary
[Diagram: the DAME scenario – engine flight data from aircraft at London and New York airports is shared over the Grid between the airline office, the maintenance centre, and the European and American data centers.]
Develop a Grid-enabled diagnostic system
Demonstrate this on the Rolls-Royce AeroEngine diagnostics problem
– A Diagnostic Grid
– Grid management tools for unstructured data
– A practical application demonstrator
Develop the understanding needed for industrial deployment:
– Grid middleware and application/services layer integration
– Scalability and Deployment options
– Security and Dependability issues
Support on-line diagnostic workflow in real time
Deal with the data from 1000s of engines in operation
Prove distributed pattern matching methodology
Address customer concerns about grids, including scalability & security
Demonstrate the business case for the technology
Implementing a distributed, integrated workflow has considerable potential customer value
The workflow requires collaboration between multiple stakeholders
An integrated business process is needed to provide evidence for any diagnosis, and traceability to subsequent action
The data is high volume, and is distributed between stakeholders’ sites (e.g. maintenance, factory, airports)
The variable computing load makes resource sharing attractive for some processes
Universities:
– University of York
– University of Sheffield
– University of Oxford
– University of Leeds
Industrial:
– Rolls-Royce Aeroengines
– Data Systems and Solutions
– Cybula
Infrastructure:
– White Rose Grid
– National e-Science Support Centre
Provide analysis to enable the ultimate deployment of DAME in the engine domain.
Provide analysis as a basis for deployment in other domains.
Contribute to Grid community research in dependability and security.
Attributes:
– Reliability
– Safety
– Maintainability
– Security
(Confidentiality, Integrity, Availability)
Attributes have varying significance in different systems.
Focus on risk to the overall business process
Process (see previous talk by Jonathan Moffett)
– Define system context:
» Boundary / actors / assets / external assumptions.
– Analyse assets:
» Identify impact / threat for each.
– Attackers’ perspective.
– Vulnerabilities.
» Identify likelihood.
From the matrix, identify unacceptable deployment risks (sketched below); for example:
– Risks with high impact and high likelihood need to be reduced.
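As an illustration only (not project code), a minimal sketch of how the matrix step can flag the risks to reduce; the asset names, concerns and ratings are hypothetical:

```python
# Minimal sketch: flag the cells of a likelihood / impact matrix that are
# unacceptable. Asset names, concerns and ratings are hypothetical.

LEVELS = {"L": 0, "M": 1, "H": 2}

# (asset, concern, impact, likelihood)
assessments = [
    ("Engine data record",   "integrity",       "H", "M"),
    ("Diagnostic algorithm", "confidentiality", "H", "H"),
    ("Portal service",       "availability",    "M", "L"),
]

def must_reduce(impact, likelihood):
    """Example acceptance rule: both impact and likelihood rated High."""
    return LEVELS[impact] == 2 and LEVELS[likelihood] == 2

for asset, concern, impact, likelihood in assessments:
    if must_reduce(impact, likelihood):
        print(f"Reduce: {asset} / {concern} (impact {impact}, likelihood {likelihood})")
```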
[Diagram: analysis overview – the system context (system boundary, external assumptions, actors, assets) feeds the asset analysis (attackers’ perspective, threats, vulnerabilities, likelihood), and the results are plotted on a likelihood × impact matrix (L/M/H) to locate unacceptable risks.]
The high-level analysis of complex systems developed at York is rooted in the need for safety cases for layered systems.
[Diagram: layered system – distributed services (Service 0 … Service N) run over a distributed middleware infrastructure and a distributed hardware infrastructure; the analysis interface is drawn around the component under analysis.]
Focuses on infrastructure.
Approach at York (based on FMEA – Failure Modes and Effects Analysis – and SHARD – Software Hazard Analysis and Resolution in Design):
– Define high-level functions at a specified interface.
– Apply guidewords (omission, commission, etc.) to identify undesirable situations (sketched below).
– Identify the cause.
– Identify the effect.
– Derive requirements to prevent / mitigate.
Satisfy derived requirements to provide dependability.
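A minimal sketch of the guideword step, not project tooling: each high-level function at the analysis interface is paired with each guideword, and cause, effect and a derived requirement are recorded against the pair. The function names are illustrative.

```python
# Sketch of FMEA / SHARD-style guideword analysis at a service interface.
# Function names are illustrative; entries are completed by the analyst.

GUIDEWORDS = ["omission", "commission", "early", "late", "value"]
functions = ["deliver engine data", "return diagnosis result"]

# One analysis row per (function, guideword) combination.
analysis = [
    {"function": fn, "guideword": gw,
     "cause": None, "effect": None, "derived_requirement": None}
    for fn in functions
    for gw in GUIDEWORDS
]

print(f"{len(analysis)} rows to complete")  # 2 functions x 5 guidewords = 10
```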
Approaches have complementary strengths
In combination:
– Use security risk analysis to establish whole-system issues
– Use ‘high level analysis’ to deal with non-security attributes, and provide infrastructure vulnerabilities into the main risk analysis
– Combined study minimises project cost and customer involvement
Take advantage of other sources of vulnerability information
The security risk method provides a useful overall framework ...
.. but in many projects a wider set of attributes will be needed.
Using both forms of analysis explicitly deals with the flexible deployment of applications envisaged in the grid.
.. but it remains to be seen if the interface requirements between applications and infrastructure are mature enough to allow dependability analysis.
[Diagram repeated: the system context (boundary, external assumptions, actors, assets) feeds the asset analysis (attackers’ perspective, threats, vulnerabilities, likelihood) and the likelihood × impact matrix.]
System Context document
(DAME/York/TR/03.007)
– Business process.
– System boundary.
– Actors (primary and supporting).
– Assets (service and data).
– Service interactions.
– External assumptions.
Purpose:
– Provides a concise reference – allows stakeholders to agree on a description of the system (a structural sketch follows below).
– Identifies Assets: Services and Data
» .. but not hardware?
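A minimal structural sketch of the elements the document captures; the structure is an assumption for illustration (not the project’s actual document format), with values drawn loosely from the diagrams that follow:

```python
# Sketch of the elements recorded in a system context document.
# The structure is assumed for illustration; values are loose examples.

system_context = {
    "business_process": "engine diagnosis and maintenance workflow",
    "system_boundary": "DAME services and portal; airline and MRO sites are external",
    "actors": {
        "primary": ["Maintenance Engineer (ME)", "Maintenance Analyst (MA)"],
        "supporting": ["Domain Expert (DE)"],
    },
    "assets": {
        "services": ["AURA-G", "CBRAnalysis-G", "EngineDataStore-G"],
        "data": ["EngineDataRecord", "AURAResult", "CBRResult"],
    },
    "service_interactions": [("WorkflowManager", "AURA-G")],
    "external_assumptions": ["engine data is correctly downloaded at the airport"],
}

print(sorted(system_context))  # the captured elements
```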
[Diagram: business use-cases. Actors: Engine Manufacturer (RR); Airline / Maintenance Contractor (at airport); Maintenance Engineer (ME); Domain Expert (DE) – engine expert; Maintenance Analyst (MA) – maintenance expert; Ground Support System; Engine Data Center (EDC) – DS&S; Service Data Manager (SDM), including Workscope Generator – RR; and the Distributed Aircraft Maintenance Environment (DAME) with miscellaneous providers of remote / distributed tools and services. Activities: download / upload engine data; local diagnosis; DAME diagnosis; perform inspections and minor repairs; remove engine and dispatch for major overhaul; return overhauled engine to service; investigate using tools; request advice from the MA / DE; provide diagnosis / prognosis / advice; update engine records.]
[Diagram: initial service model. Sites: Data Center (DS&S); Engine Maintenance Repair and Overhaul (MRO) Facility (RR / Contractor). Services: Portal-CollaborationEnvironment (with RoleDatabase and MyProxy); WorkflowManager; Encoder-G; XTO-G; ZModViewer-G; Chart-G; AURA-G (searches for patterns in encoded Zmod data); EngineModel-G; CBRAnalysis-G; DataBaseMiner-G (searches for clusters); CBRWorkflowAdvisor-G (gets workflow advice); SDM-G; EngineDataStore-G within the EngineDataCenter. Engine Data Records are stored in QUOTE / GSS and retrieved by the tools on arrival notification; DAME results and annotations are stored / retrieved. The EDC contains various independent tools and facilities – only the EngineDataStore is shown here.]
[Diagram: initial data model. Data types include EngineDataRecord, TrackedOrder, EncodedData, AURAEncodedData, AURAResult, XTOFeatureResult, QUOTEFeatureResult, CBRRuleSet, CBRResult, WorkFlowRuleSet, WorkflowRule, SuggestedWorkflow, WorkflowRecord, Case (deadline, status, userStatus), SDMRecord, ChartResult, ZmodViewerResult, EngineModelResult, Annotations, FlightEvent, Flight, Airframe, Engine, User (distinguishedName), UserView, UserRole and Role. An example dependency: the CBRAnalyser gets maintenance data, uses the SDMRecord, AURAResult and CBRRuleSet, and produces a CBRResult.]
Business Use-Cases & initial Service diagram derived from design documents
Aim for a Deployment-neutral description
Checks:
– Build & check data and service models from the interactions specified in the use-cases.
– Is the data required by each service consistent with the data model? (see the sketch after this list)
– Do members of the project, and its customers, think this represents their system?
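A minimal sketch of the consistency check in the second bullet above: confirm that every data type a service uses or produces appears in the data model. The names are taken loosely from the diagrams; the ‘uses / produces’ sets are illustrative, and ‘VibrationSpectrum’ is hypothetical.

```python
# Sketch: cross-check the service model against the data model.
# The service interface sets below are illustrative, not the full DAME model.

data_model = {"EngineDataRecord", "EncodedData", "AURAResult",
              "CBRRuleSet", "CBRResult", "SDMRecord"}

services = {
    "CBRAnalyser":    {"uses": {"SDMRecord", "AURAResult", "CBRRuleSet"},
                       "produces": {"CBRResult"}},
    "PatternMatcher": {"uses": {"EncodedData"},
                       # "VibrationSpectrum" is hypothetical, to show a failed check
                       "produces": {"AURAResult", "VibrationSpectrum"}},
}

for name, io in services.items():
    missing = (io["uses"] | io["produces"]) - data_model
    if missing:
        print(f"{name}: data types missing from the data model: {sorted(missing)}")
```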
Control granularity:
– Services at deployment granularity.
– Data, at a granularity sufficient to distinguish between different uses or origins.
– Assets must be meaningful to customers to allow a discussion of threat & impact.
Result:
– 24 Data Types and 14 Services.
– Contrast with
» ‘Initial brainstorm’ meeting: 4 data types & 4 services
» Previous slide (9): 3 data types & 13 services (2 different!)
A methodical analysis is necessary.
Need to be flexible about representations & models to align with project methods.
Control:
– Granularity
– Avoid mechanisms, keep to requirements
The ‘grid’ nature may make it difficult to establish hardware assets – this may be a problem or a blessing, but it needs to be recognised.
The system is ‘virtual’ – need to be explicit about the management needed.
Just Started.
Generated pro-forma of assets and generic concerns.
Reviewed with Industrial Partners:
– Reviewed system context document.
– Preliminary assets analysis - assigned concerns and impacts to:
» Data assets
» Service assets
Need to document and confirm results with project and industrial partners.
Keyword list to prompt discussion on each asset:
– execution, confidentiality, integrity, availability, privacy, completeness, provenance, non-repudiation…
Only about half of these categories were used, and not all for every asset.
Impact rating: L/M/H in business terms (a pro-forma sketch follows this list):
– L: significant cost
– M: impact on company bottom line
– H: long term impact on company bottom line
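A minimal sketch (the structure is assumed, not the project pro-forma itself) of recording a concern against an asset: the prompting keyword, the actual concern in plain words, and an L/M/H business-impact rating:

```python
# Sketch of an asset analysis pro-forma entry.
# The assets, concern wording and ratings are illustrative placeholders.

IMPACT_MEANING = {
    "L": "significant cost",
    "M": "impact on company bottom line",
    "H": "long-term impact on company bottom line",
}

proforma = [
    {"asset": "Diagnostic algorithm",   # service asset
     "keyword": "confidentiality",
     "concern": "disclosure of proprietary diagnostic methods",
     "impact": "H"},
    {"asset": "Engine data record",     # data asset
     "keyword": "integrity",
     "concern": "corrupted data leading to a wrong maintenance decision",
     "impact": "M"},
]

for entry in proforma:
    print(f"{entry['asset']}: {entry['keyword']} – {entry['concern']} "
          f"(impact {entry['impact']}: {IMPACT_MEANING[entry['impact']]})")
```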
Confidentiality of key industrial properties.
– The most critical, at present, are algorithms
Integrity of data used to make business decisions.
Provenance of critical decisions made using the system.
New system requirements will probably emerge from this study:
– Finer grain control of users within roles
– The need for provenance for data items as well as decisions (workflows) – sketched below
– The possible separation of different types of raw data to facilitate grid processing
– The need to audit services in the (virtual) system
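A minimal sketch of what provenance might record for a data item as well as for a decision (workflow); the structure and field names are assumptions, with the service and data names taken from the diagrams above:

```python
# Sketch: attaching provenance to a data item and to a workflow decision.
# Field names and values are illustrative assumptions.
import datetime

def provenance(origin, produced_by, inputs):
    """Record where an item came from, which service produced it, and from what inputs."""
    return {
        "origin": origin,
        "produced_by": produced_by,
        "inputs": inputs,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }

# Provenance for a data item (e.g. a pattern-match result) ...
aura_result_prov = provenance("White Rose Grid node", "AURA-G", ["EncodedData"])

# ... and for a decision (a completed diagnostic workflow).
workflow_prov = provenance("Maintenance Analyst (MA)", "WorkflowManager",
                           ["AURAResult", "CBRResult"])

print(aura_result_prov)
print(workflow_prov)
```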
Need to be careful about responsibilities when data or services are shared with other systems – e.g. long-term data integrity for some data items is important, but outside DAME.
The customers have real security concerns – this is not a system where all parts will be allowed to ‘run anywhere’.
– security analysis informs deployment options
Keywords (e.g. ‘integrity’) are very broad – need to record the actual concern in each case.
Linking impact (L/M/H) to business criteria helps prevent ‘drift’ of assessments.
Discussion / working documents:
– DAME Initial Dependability Assessment – DAME/York/TR/03.001. From meeting with industrial partners on 17th March 2003.
– Analysis of the Grid – Philippa Conmy
– Security Risk Brief – Howard Chivers
– Options for Merging Dependability and Security Analysis – Howard Chivers. This includes a neutral terminology.
– DAME Dependability and Security: Asset Analysis pro-forma.
– DAME Dependability and Security: System Context Document – DAME/York/TR/03.007.
Complete System Context document and asset analysis.
Assess vulnerabilities, including the use of high-level function analysis and dependability keyword analysis.
Produce likelihood - impact matrix.
Target unacceptable risks.
Identify deployment constraints & requirements
Identify mitigation mechanisms, e.g. encryption, access controls, replication.
Security risk analysis is best carried out as an integrated part of the system design:
– The context can be part of the standard system documentation
– Deployment and other design tradeoffs can be made early
– The security analysis will highlight requirements that might otherwise be missed.
The grid nature of the problem introduces new challenges: DAME is a ‘virtual system’
– Mapping to hardware is deferred
– Requirements for administration of the ‘virtual’ system, as well as individual resources
Appropriate security is essential before systems of this sort can be exploited commercially.