Taming Semantic Interoperability Demons Lurking in Military Training & Testing Events

By Exploiting Semantic Web Technology
David Hanz
Senior Principal Engineer
20 March 2013
Background – Challenge (1)
• With the advent of Net-Enabled Capabilities,
Precision Guidance, and similar technological
advances, military operations have become
exceedingly complex
• But before conducting such operations:
• The capabilities of the systems must be verified
• The personnel using the systems must learn how to
employ them effectively
• Creating an environment that is both appropriate and affordable in which to conduct such complex testing and training events has become a significant challenge
Background – Challenge (2)
• General consensus has emerged on the approach:
lash together various Live, Virtual, and
Constructive (LVC) simulation systems in a
confederation
• Much progress has been made in standards &
tools to simplify the formation of such LVC
confederations (e.g., DIS, HLA, TENA)
• But these only establish interoperability at the
technical & syntactic levels (bits flow and message
structures are mutually comprehended)
• Systems can still reach wildly different conclusions
from the same data
That’s a problem!
Typical Source of the Problem
• The root cause of these problems typically turns
out to derive from different native semantics –
different understandings of shared information
objects
→ A “Semantic Gap”
• Semantic gaps are not necessarily
troublesome… only when they are large enough
to cause disagreement on a result derived from
the same data
• But determining whether they will cause a
disagreement is “complicated”
• It depends not only on properties of the specific
systems involved, but also on what they are doing
• And sometimes where, when, and other factors
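Whether a given gap matters is purpose-dependent, which can be illustrated with a toy check (made-up numbers and a hypothetical tolerance value; real ONISTT reasoning is far richer than this):

```python
# Toy illustration of why a gap's importance depends on purpose: the same
# 2 m positional discrepancy between two systems may or may not matter,
# depending on the accuracy the task actually requires. Numbers are made up.

def gap_matters(gap_m, accuracy_needed_m):
    """A semantic gap causes trouble only if it exceeds what the task can tolerate."""
    return gap_m > accuracy_needed_m

gap_m = 2.0   # discrepancy between the two systems' position semantics
print(gap_matters(gap_m, accuracy_needed_m=0.5))   # precision LOS task in rough terrain -> True (problem)
print(gap_matters(gap_m, accuracy_needed_m=10.0))  # coarse tracking task on flat ground -> False (benign)
```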
Semantic Interoperability Problem Example
(LOS Fair Fight Issue)
Live OPFOR hiding behind a real terrain feature can’t be seen by the Live Blue combatant
But the Virtual Blue combatant has no problem seeing the avatar for the Live OPFOR
• This example (from a training exercise held in California) was traced to ~2 meters of tectonic plate movement since 1984 that was included in the native semantics of the Live system -- but not in the native semantics of the Virtual system
• If the exercise had been held in Kansas, this semantic gap would
probably still be undiscovered
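The effect can be reproduced with toy numbers. A minimal sketch (illustrative values only, not the actual terrain or systems) of how a ~2 m discrepancy in the terrain seen by the two systems can flip a coarse line-of-sight test:

```python
# Toy numbers only (not the actual exercise): two systems run the same coarse
# line-of-sight (LOS) test over a ridge, but the Virtual system's terrain
# database lacks ~2 m of tectonic plate movement, so the elevation it sees
# along the sight line is slightly different -- enough to flip the result.

def has_los(observer_m, ridge_m, target_m):
    """Coarse LOS test: visible only if the ridge stays below the straight
    sight line (observer and target assumed equidistant from the ridge)."""
    sight_line_at_ridge = (observer_m + target_m) / 2.0
    return ridge_m <= sight_line_at_ridge

observer_eye  = 2.0    # Blue combatant eye height above local ground (m)
opfor_top     = 1.8    # top of the OPFOR avatar above local ground (m)
ridge_live    = 2.5    # ridge elevation in the Live system's terrain (m)
ridge_virtual = 0.5    # same ridge after the ~2 m discrepancy in the Virtual terrain (m)

print("Live system sees OPFOR:   ", has_los(observer_eye, ridge_live, opfor_top))     # False -> hidden
print("Virtual system sees OPFOR:", has_los(observer_eye, ridge_virtual, opfor_top))  # True  -> exposed
```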
ONISTT
• The Open Net-centric Interoperability Standards for
Training and Testing (ONISTT) was developed as a
framework and toolsuite that facilitates planning Live-Virtual-Constructive (LVC) training exercises and test events that:
• Satisfy a specific event’s training / testing objectives
• Optimize the utilization of available resources
• Avoid (or mitigate) pernicious problems resulting from Semantic
Gaps among the participating systems
• But the technology under the hood deals with a more
general problem set
• It can be (and is being) used to solve other kinds of problems that
require logical reasoning
How can ontologies & AI help with the thorny
problem of semantic interoperability?
• By capturing the knowledge about root causes of
past Semantic Interoperability problems in a
declarative form, and then…
• Performing automated reasoning to predict if those
kinds of issues might pop up in a new candidate
confederation, and then…
• For those cases where there appears to be a problem,
determine if a relatively simple inline mediator (active
gateway) could be synthesized to “pre-warp” the
data-in-transit and bridge the semantic gap, or…
• For those cases where the gap is too large to bridge,
warn the human-in-charge to find another resource
(or relax the expectations)
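A minimal sketch of that three-way decision, using made-up capability records; the real ONISTT knowledge is captured declaratively in OWL/SWRL and evaluated by an inference engine, not hard-coded Python:

```python
# Minimal sketch of the decision flow described above, using made-up
# capability records and gap names. The real knowledge about root causes
# and mediators lives in the knowledge bases, not in this dictionary.

needed    = {"position_datum": "WGS84-current"}
available = {"position_datum": "WGS84-1984"}

# Root causes of past problems, captured declaratively, with the mediator
# (if any) known to bridge each gap.
BRIDGEABLE_GAPS = {
    ("WGS84-1984", "WGS84-current"): "datum-shift mediator",
}

def analyze(needed, available):
    if available["position_datum"] == needed["position_datum"]:
        return "OK: no semantic gap on the position datum"
    gap = (available["position_datum"], needed["position_datum"])
    if gap in BRIDGEABLE_GAPS:
        return f"Bridgeable: synthesize inline mediator ({BRIDGEABLE_GAPS[gap]}) to pre-warp data in transit"
    return "Too large: warn the human-in-charge to find another resource or relax expectations"

print(analyze(needed, available))
```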
Inline Mediator: Concept & Motivation
• Mitigate semantic interoperability problems by
reducing the size of the semantic gap for
specific information objects exchanged between
designated systems
• But avoid the cost of changing (“fixing”) the
native semantics of the systems
• Intuitive example:
“Corrective” lenses distort
images to match the
person's visual acuity deficit
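In the same spirit, a sketch of an inline mediator that applies a fixed datum correction to position reports in transit (the record format and the offset are illustrative assumptions):

```python
# Sketch of an inline mediator ("active gateway"): it sits on the data path
# between two systems and pre-warps each position report so that the
# receiver's picture lines up with the sender's, without changing either
# system's native semantics. The record format and the 2 m correction are
# illustrative, not taken from a real confederation.

def make_datum_mediator(dx_m=0.0, dy_m=0.0, dz_m=0.0):
    """Return a mediator that applies a fixed datum correction to positions."""
    def mediate(report: dict) -> dict:
        corrected = dict(report)          # never mutate the in-transit object
        corrected["x_m"] = report["x_m"] + dx_m
        corrected["y_m"] = report["y_m"] + dy_m
        corrected["z_m"] = report["z_m"] + dz_m
        return corrected
    return mediate

mediator = make_datum_mediator(dx_m=2.0)  # the ~2 m shift from the LOS example
live_report = {"entity": "OPFOR-01", "x_m": 1200.0, "y_m": 340.0, "z_m": 101.5}
print(mediator(live_report))              # what the Virtual system should receive
```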
“Purpose-aware” Interoperability Analysis
[Diagram] Event Knowledge Base (Task, Role, Capability Needed, Constraints, Confederation) → Capabilities Needed; Resource Knowledge Base (Resource, Capability) → Capabilities Available
Representation and analysis of the capabilities needed is structured around the tasks (i.e., descriptions of purpose) that the systems are intended to perform
Basic ONISTT Concept
• Create semantically rich Knowledge Bases (KBs) that
describe:
• The precise capabilities_needed to support interactions
between roles associated with a specific task
• The precise capabilities_available from candidate resources
that may be available to play those roles
• Employ a domain-agnostic inference engine to use the
information from the capabilities_needed KBs as a
template for performing reasoning against the
capabilities_available KBs
• Objective: Synthesize a confederation of resources that
meets the specific needs of the exercise/event
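A minimal sketch of the matching step, with hypothetical roles, resources, and capability names; in ONISTT the KBs are expressed in OWL + SWRL and the reasoning is performed by an inference engine rather than a Python loop:

```python
# Minimal sketch of matching capabilities_needed (per role of a task) against
# capabilities_available (per candidate resource). All names and records are
# hypothetical; the actual KBs are expressed in OWL + SWRL.

capabilities_needed = {
    "ShooterRole":  {"reports_position", "datum:WGS84-current"},
    "ObserverRole": {"reports_position", "line_of_sight"},
}

resources = {
    "LiveInstrumentedRange": {"reports_position", "datum:WGS84-current", "line_of_sight"},
    "VirtualSimulatorA":     {"reports_position", "datum:WGS84-1984", "line_of_sight"},
}

def assign_roles(needed, available):
    """Greedy role assignment: each role gets a resource covering all its needs."""
    assignment = {}
    for role, needs in needed.items():
        candidates = [r for r, caps in available.items() if needs <= caps]
        assignment[role] = candidates[0] if candidates else None  # None = no suitable resource
    return assignment

print(assign_roles(capabilities_needed, resources))
# {'ShooterRole': 'LiveInstrumentedRange', 'ObserverRole': 'LiveInstrumentedRange'}
```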
ONISTT CONOPS for Exercise Planning
Automate the composition of LVC confederations
Knowledge Bases:
• Task Knowledge Bases: tasks, roles, capabilities needed, task constraints
• Resource and Domain Knowledge Bases: resources, capabilities, domain knowledge
• Deployment Knowledge Bases: resource pools, confederations, taskplans, role assignments, task constraints
Process:
1. Develop formal ontologies for ONISTT core, DoD domain, and general domain knowledge, suitable for machine reasoning
2. SMEs populate and maintain distributed Knowledge Bases with ontology-based descriptions of tasks & resources
3. Testing/Training Planner uses the Knowledge Bases to a) define Taskplans (full or partial) and b) propose candidate Confederation(s) (full or partial)
4. Analyzer uses information in the Knowledge Bases to complete the Taskplan and a) assess a given Confederation or b) generate & rank possible Confederations from the Resource Pool
5a. Return Taskplan with problem diagnosis and solution options; back to Step 1
5b. Return verified Taskplan(s), Confederation(s), and Configuration Artifacts for the Mediator
Analyzer/Synthesizer Architecture
• Leverages standards-based semantic and logical reasoning technologies
• Knowledge captured declaratively in Web Ontology Language (OWL) +
Semantic Web Rule Language (SWRL)
• Prolog is well suited to the kind of reasoning we need to do with tasks
– COTS Description Logic (DL) reasoner engines are inadequate for this
• The Task Engine is implemented as a meta-interpreter; the task plan is a proof tree
• Can be hosted as a web service
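To make the meta-interpreter / proof-tree point concrete, here is a minimal backward-chaining sketch in Python (the actual Task Engine is written in Prolog; the rule and fact names below are made up):

```python
# Minimal backward-chaining meta-interpreter sketch (Python stand-in for the
# Prolog implementation). Rules are (head, [body...]) pairs over simple atoms;
# proving a goal returns the proof tree -- the analogue of "task plan is
# proof tree". Rule and fact names are illustrative only.

RULES = [
    ("task_supported",  ["role_filled", "no_blocking_gap"]),
    ("role_filled",     ["capability_available"]),
    ("no_blocking_gap", ["gap_bridged_by_mediator"]),
]
FACTS = {"capability_available", "gap_bridged_by_mediator"}

def prove(goal):
    """Return a proof tree (goal, [subproofs]) or None if the goal fails."""
    if goal in FACTS:
        return (goal, [])                  # leaf: ground fact from the KB
    for head, body in RULES:
        if head == goal:
            subproofs = [prove(sub) for sub in body]
            if all(subproofs):
                return (goal, subproofs)   # internal node: rule application
    return None                            # goal cannot be established

print(prove("task_supported"))
```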
ONISTT Top-level Ontology (simplified)
Example Domain Ontology
(Spatial Reference Frame -- partial)
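As an illustration of what a small fragment of such an ontology can look like in machine-readable form, here is a sketch using the rdflib library; the namespace, class, and property names are made up from terms used in this presentation, not taken from the actual ONISTT ontologies:

```python
# Illustrative fragment only: the namespace and the class/property names here
# (Resource, Capability, SpatialReferenceFrame) are drawn from terms used in
# this presentation, not from the actual ONISTT ontologies.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/onistt-sketch#")
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Classes
for cls in (EX.Resource, EX.Capability, EX.SpatialReferenceFrame):
    g.add((cls, RDF.type, OWL.Class))

# Properties: a Resource has Capabilities; a Capability uses a reference frame
g.add((EX.hasCapability, RDF.type, OWL.ObjectProperty))
g.add((EX.hasCapability, RDFS.domain, EX.Resource))
g.add((EX.hasCapability, RDFS.range, EX.Capability))
g.add((EX.usesReferenceFrame, RDF.type, OWL.ObjectProperty))
g.add((EX.usesReferenceFrame, RDFS.domain, EX.Capability))
g.add((EX.usesReferenceFrame, RDFS.range, EX.SpatialReferenceFrame))

# One individual: a live range whose position-reporting capability uses WGS84
g.add((EX.LiveRange1, RDF.type, EX.Resource))
g.add((EX.PositionReporting1, RDF.type, EX.Capability))
g.add((EX.WGS84, RDF.type, EX.SpatialReferenceFrame))
g.add((EX.LiveRange1, EX.hasCapability, EX.PositionReporting1))
g.add((EX.PositionReporting1, EX.usesReferenceFrame, EX.WGS84))

print(g.serialize(format="turtle"))
```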
Accomplishments vs. Remaining Issues
• The technical feasibility of the ONISTT concept has been demonstrated by conducting a half-dozen increasingly complex experiments using real tasks and real systems
• The Analyzer/Synthesizer renders solutions that are at least as good as the traditional BOGSAT ("bunch of guys sitting around a table") approach
• Practicality-related issues remain an open question, and
are the focus of our current efforts
• These include:
• Making the semantic artifacts (ontologies and rules) accessible to
domain subject matter experts (SMEs) who are not also Semantic
Technology Experts (STEs)
• Tools to help deal with the well-known “Knowledge Acquisition
Bottleneck” for legacy systems
• Standards, patterns, & best practice guidelines to allow useful
semantic artifacts to be obtained as part of new system
acquisitions
OMG activities that have helped
• The Ontology Definition Metamodel (ODM) has enabled the development of tools (like the Visual Ontology Modeler) that scratch a portion of the “Make Accessible to SMEs” itch
Possible new activity for OMG
• There is a dearth of standards, patterns, & best
practice guidelines that would allow useful semantic
artifacts to be obtained as part of new system
acquisitions
• Given prior history (e.g., ODM), OMG seems like the most logical choice as the standards-making body to pursue that goal
Conclusions
• Ontologies/Rules provide a means to express key
capabilities needed and key capabilities available from
component resources
• Formal declarative expression understandable by a machine
• Can be extended to an arbitrary level of granularity
• AI inference engine technology provides a tool that can
• Discover/Synthesize resource compositions tailored for a given purpose
and determine if known interoperability defeaters are potentially present
• Determine if potential problems can be mitigated by in-line data mediation
• Although Description Logic is inherently “black/white” (not fuzzy) at the atomic level, the ONISTT framework provides the means to reason about “gray areas” at the molecular level
• A necessity since many of the important issues cannot be reduced to
strictly-true or strictly-false facts