Transcript

We turn now to level-4 processing. The focus of level-4 processing is to optimize the ongoing
data fusion process (and its sub-processes). In particular, we seek to improve the timeliness, relevance,
accuracy and comprehensiveness of the fusion process to meet the needs of the system users. This
lecture provides an introduction to level-4 processing, describes two broad approaches (optimization
and market-based methods), and provides some insight into future research.
In the JDL model, the level-4 process is deliberately shown as being partly “inside” the data fusion process
and partly “outside” it. This is because the level-4 process is a “meta-process” – it is a
process that monitors the other portions of data fusion processing and seeks to provide a means of
optimizing the overall fusion process.
Conceptually we want to monitor the on-going data fusion process (at all levels) and make appropriate
changes to improve both the efficiency of the processing and the resulting data fusion “products”.
This optimization may include: i) changes in how we task sensors or sources of information, ii) use of
“active” (energy emitting) sensors versus “passive” (energy collection) sensors, iii) dynamic selection
among alternative processing algorithms, iv) modification of selected algorithms to process the data, v)
allocation and re-allocation of information processing among processing resources, vi) information
requests to a user or analyst to provide both immediate and long term feedback to the fusion process,
etc.
While level-4 processing might be considered a classic “system optimization” problem, it can be more
challenging for several reasons. First, some things might be out of our control – that is, we don’t
necessarily control the sensors or sources of information but instead must deal with what’s available. A
data fusion system often operates as part of a larger system such as a command and control system, an
environmental monitoring system, or other application. Thus, the ‘resources’ are not necessarily ours
to control. Second, things that might be recommended to optimize the fusion process may be suboptimal or even detrimental to the overall process in which our fusion system is embedded. For
example, if we’re using multiple passive sensors at night to hunt for, and identify an elusive animal (e.g.,
the search for Sasquatch!), it might be tempting to turn on search lights (use an “active” sensor) to see
better. However, turning on the lights or active sensor could very well scare off the thing we’re trying
to observe.
There are several factors that motivate the evolution of level-4 processing. Some key factors that
provide both new opportunities for level 4 processing as well as new complexities for level 4 processing
include the following.
1) Intelligent sensor/processing systems - The rapid evolution of intelligent sensor/processing
systems provides both opportunities and challenges for more sophisticated tasking. Modern
sensors often have the ability to dynamically change their sensing patterns, use advanced
waveforms (for active sensors), and perform complex signal and image processing “on the fly”
and in reaction to the environment. Simple examples are the advances in mobile phone video
and image processing which can adapt to light changes and content of the environment
(presence of people). Cell phones are rapidly becoming intelligent multi-sensor devices with
sophisticated sensor processing. Such devices provide both the opportunity for dynamic
tasking changes but require level-4 algorithms to take advantage of that flexibility.
2) Human observers can augment the use of traditional sensor systems. Humans can provide
observations and commentary using mobile devices via text messages. They can also act as
“sensor platforms”, carrying smart phones with multi-sensor systems available to collect data.
The potential availability of 7 billion human sources and mobile sensor platforms
(http://www.digitaltrends.com/mobile/mobile-phone-world-population-2014/#!bkDXk0) is a
potentially huge source of information for applications such as crisis management, environmental
monitoring and citizen science. However, the ability to task, access, and calibrate such sources
is quite challenging.
3) Distributed, dynamic, network systems - Information fusion systems are routinely becoming
distributed systems. Sensors, sources of information, processing and even users are generally
distributed, rather than co-located. Sensors can include those embedded in numerous entities (e.g.,
the Internet of Things), mobile sensors via smart phones, and platforms such as drones and robots.
This provides both a rich environment for potential allocation of resources for improved data fusion
(and hence situation awareness), as well as more complexity in the tasking and resource allocation.
4) Increased bandwidth of local and global communications systems provides opportunities to share
information such as high-definition images, videos, and even three-dimensional image data available
from sources such as lidar. Thus, level-4 processing must consider not only the availability of
increasing bandwidth to collect and distribute more data, but also the potential burdens of such
collection and distribution.
5) Service-based architectures are evolving to allow improved connectivity and linkages between
previously “unconnected” systems and entities. These evolving tools, data standards, and access to
distributed (cloud-based) resources provide opportunities and challenges for level-4 processing.
6) Increasingly complex algorithms are being developed that are capable of being executed on mobile
devices. For example, smart phones routinely allow the application of sophisticated image processing
techniques directly on the phone. As meta-data generation methods (e.g., semantic labeling of images)
become available, they provide new opportunities for information to be available for level-4 processing.
In addition, the availability of cloud computing provides opportunities for the use of computationally
demanding optimization techniques for level-4 processing.
A partial list of types of functions involved with Level-4 processing is shown in this chart. This is not
meant to be an exhaustive list, but rather an indication of the types of functions that can be part of the
implementation of a level-4 process. We will summarize these briefly.
• Mission management – Recall that Level-4 processing must be done in the context of the
mission or application supported by the data fusion system. The data fusion system must
operate within the constraints of the overall system (whether it be a system to monitor the
health of a mechanical system or a military situation awareness system, etc.). Thus, it is
necessary to represent the mission objectives and constraints of the overall system and develop
computational methods to adjudicate between the demands of the system mission and the goal
of optimizing the data fusion system performance.
• Target prediction – If we are monitoring or tracking an entity (such as a maneuvering target or
fault condition in a machine), it is necessary to represent the current state of an entity (including
location, characteristics and identity) and perform state estimation or prediction of future states
of the entity.
• Sensor and platform modeling – Part of process refinement involves modeling the location and
conditions of our sensor system, the performance of the sensors in the current environment
(e.g., to account for effects such as the atmosphere on signal propagation), as well as the
interaction between the targets or entities being observed and the signals emitted by the
sensors (or signals emitted by the entities).
• System performance modeling – In addition to the performance of the sensors, we need to
model the performance of the data fusion processing (viz., how well the data fusion algorithms
are performing), and compute measures of performance and measures of effectiveness.
• System control – Finally, the overall system control must be modeled. For optimization
problems this involves i) establishment of optimization criteria (such as data fusion system
accuracy, timeliness of the data processing, utilization of resources such as the communications
networks, etc.), ii) selection of optimization algorithms, and iii) explicit development of a control
philosophy.
Again, we emphasize that optimal control theory is an entire field of study in its own right. We provide
here only a brief glimpse of issues involved for optimizing the functioning of a data fusion system.
To review, sensor management is a process that seeks to manage and coordinate the use of sensing
and information resources in a manner that improves the data fusion process and supports improved
accuracy and decision-making for human users. Sensor management techniques can be based on a
variety of classes of techniques, including: heuristics, expert systems, utility theory, automated control
theory, cognition, decision-theoretic approaches, probabilistic methods, stochastic dynamic
programming, linear programming, neural networks, genetic algorithms, and information theory. The
purposes or functions of sensor management are to: i) allocate sensor utilization (pointing, mode
selection, and emission control for active sensors), ii) prioritize and schedule service requests for use of resources such
as sensors and communications infrastructure, iii) assist in sensor data fusion, iv) support the
reconfiguration of sensors, communications and computations in challenging or degraded operational
modes, v) optimize the use of available sensor assets, and vi) assist in communications modeling and
use.
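To make the prioritization and scheduling function concrete, the following is a minimal Python sketch of a sensor manager that grants competing service requests in priority order as long as a suitable sensor is free; the request names, priority values, and sensor types are purely illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ServiceRequest:
    neg_priority: float                      # heapq pops the smallest item, so store -priority
    name: str = field(compare=False)
    needed_sensor: str = field(compare=False)

def schedule(requests, free_sensors):
    """Greedily grant the highest-priority requests that a free sensor can still serve."""
    heap = list(requests)
    heapq.heapify(heap)
    granted = []
    while heap and free_sensors:
        req = heapq.heappop(heap)
        if req.needed_sensor in free_sensors:
            free_sensors.remove(req.needed_sensor)
            granted.append(req.name)
    return granted

requests = [
    ServiceRequest(-0.9, "track update on contact 12", "radar"),
    ServiceRequest(-0.4, "wide-area search", "radar"),
    ServiceRequest(-0.7, "identify emitter 3", "esm"),
]
print(schedule(requests, {"radar", "esm"}))
# ['track update on contact 12', 'identify emitter 3']
```

A fielded sensor manager would of course re-plan continuously and weigh many more factors, but the basic pattern of ranking service requests and matching them to available resources is the same.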
In the next series of charts, we will provide an example based on a modern military tactical aircraft.
The example provides an opportunity to show the interaction between a data fusion system and a
complex mission-focused platform. The concepts introduced for this application are clearly adaptable
to other types of systems and domains.
For this example, the aircraft has an integrated avionics system with agile and programmable/steerable
sensors which may be both active and passive. The observing environment is challenging due to the
very dynamic platform motion and mission, the potential for jamming and deception, and the
potentially large number of entities to be observed (which might be closely spaced). In addition, the
environment might involve so-called “low observables” in which entities use stealth, camouflage, or
other methods to reduce their observability.
While the integrated platform has extensive computing and sensing capabilities, because it is part
of a military tactical aircraft, there are certainly limitations and challenges. These include: limited
sensor resources, the variability of sensor performance, unplanned sensor failures, and mission
constraints such as “emission control” (EMCON) conditions in which the use of active sensors is limited
or prohibited based on the overall mission.
This chart shows a conceptual interaction between functions such as: i) mission control/management, ii)
evaluation of data fusion and mission performance, and iii) a source and fusion control loop which
focuses on the sensor resources and the fusion processing methods.
There is clearly a coupling among three areas: i) the sensor management system, ii) the sensor suite,
and iii) the sensor data fusion processing.
The sensor management system involves scheduling the
sensors, seeking to obtain synergy between different types of sensors (to improve accuracy of locating,
characterizing and identifying entities) and understanding the optimal performance of the overall
system of sensors. The sensor suite involves a collection of sensing resources, each having different
performance specifications, differing ability to contribute to estimating the state of entities, and
different performance agility (e.g., ability to change modes, perform internal processing, etc.). Finally,
the sensor data processing involves previously discussed functions such as alignment, association and
correlation, tracking, etc. The overall mission requirements and priorities drive what design choices are
made for these interacting systems.
A useful construct for managing the use of sensors comes from the mathematics of utility theory. The
concept of utility was originally introduced by Daniel Bernoulli in 1738 based on the observation that
when playing games of chance, people do not make choices based strictly on the potential payoff or
money involved, but rather based on an internal assessment of the usefulness or utility of an outcome.
In data fusion, for example, we might seek to optimize the use of sensors to determine the location of
an entity. If we are locating a vehicle in a field, we might need to know only an approximate location
(e.g., within a few feet of accuracy). Using sensors to determine the location to within an inch,
however, may be of little added value. By contrast, if we are seeking the location of a tumor within a
patient’s body, it may be necessary to know the location to within a hundredth of an inch. The value of
the location is based on the application and our “need” to know the information. Thus, instead of
optimizing the absolute accuracy of the location, we optimize a utility function that provides a measure
of how useful the information is to us.
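To make this concrete, here is a minimal Python sketch of such a utility function for location accuracy; the saturating (logistic) form and the threshold values are illustrative assumptions, not a prescribed model.

```python
import math

def location_utility(error_m: float, needed_accuracy_m: float) -> float:
    """Utility in (0, 1): near 1 when the location error is well below the accuracy
    the application needs, near 0 when it is much larger, with a smooth roll-off."""
    return 1.0 / (1.0 + math.exp(4.0 * (error_m - needed_accuracy_m) / needed_accuracy_m))

# Vehicle in a field: roughly 1 m of accuracy is all that is needed.
print(location_utility(error_m=0.02, needed_accuracy_m=1.0))     # ~0.98: extra precision adds little value
# Tumor localization: sub-millimeter accuracy is required.
print(location_utility(error_m=0.02, needed_accuracy_m=0.0005))  # ~0.0: a 2 cm error is nearly useless
```

The same 2 cm error yields very different utilities in the two examples, which is exactly the behavior a level-4 process can exploit when deciding how much sensing effort an information request is worth.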
For our tactical aircraft example, we may define a utility function to describe aspects of covertness,
tracking accuracy, target detection performance, and other factors. The control variables to allow us to
optimize the utility may include factors such as time on target (how much time we task a sensor to
observe a target), the sampling rate (the number of observations per minute or second), what is
measured, and search versus tracking tradeoffs. A search utility might involve a function of the
number of detected versus undetected targets. Similarly, a tracking utility function may involve
calculations of the track covariance (the accuracy with which we predict that we can observe a target’s
location and motion).
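As a rough sketch of how such a combined utility might be computed, the following Python snippet forms a weighted sum of a search utility (detected versus expected targets) and a tracking utility driven by the track covariance; the functional forms, weights, and numbers are illustrative assumptions rather than values from any particular system.

```python
import numpy as np

def search_utility(n_detected: int, n_expected: int) -> float:
    """Fraction of the expected targets that have been detected."""
    return n_detected / max(n_expected, 1)

def track_utility(covariance: np.ndarray, acceptable_std_m: float) -> float:
    """Decays as the predicted position uncertainty (from the track covariance)
    grows relative to an acceptable standard deviation."""
    pos_std = float(np.sqrt(np.trace(covariance)))   # rough overall position uncertainty
    return float(np.exp(-pos_std / acceptable_std_m))

def mission_utility(n_detected, n_expected, covariance, acceptable_std_m,
                    w_search=0.5, w_track=0.5):
    """Weighted combination a sensor manager could maximize over control choices
    such as time on target, sampling rate, and search-versus-track allocation."""
    return (w_search * search_utility(n_detected, n_expected)
            + w_track * track_utility(covariance, acceptable_std_m))

P = np.diag([25.0, 25.0])   # 2-D position covariance: 5 m standard deviation per axis
print(mission_utility(n_detected=8, n_expected=10, covariance=P, acceptable_std_m=10.0))
```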
This chart provides a summary of some of the types of sensor controls that may be available for the
optimization process. For smart sensors with embedded computing capabilities, we may have access
to:
• Mode controls include factors such as: turning a sensor on or off, selecting a sensor mode of
operation (e.g., power level, waveform or processing mode, scan, track or track-while-scanning),
and sensor processing parameters such as decision thresholds and detection, tracking and
identification criteria.
• Spatial control functions include: where to point or direct a sensor (either by physically moving
an antenna or by electronically steering a beam), selecting a field of view, scan and search rates, scan
and search pattern selection, controlling individual sectors of observation, and parameters for
designated targets.
• Temporal control functions include: start and stop times for modes and sector controls, sensor
look times, dwell times on target, and maximum allowed emissions (duty cycle).
• Reporting control functions include: filters on reporting based on target attributes (e.g., target
class or identity, or filtering by anticipated lethality of an observed target), reporting based on
the spatial attributes such as minimum and maximum range limits or altitude limits, and finally
prioritized reporting of designated targets.
Factors that may affect how to set the priority of observed targets include: target identity, what
information is needed, anticipated threat evaluations, potential opportunities for offensive or defensive
actions, and finally fire control needs (if we need to guide a weapon to address a particular target or
target type).
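One simple way to turn such factors into an observation ordering is a weighted priority score, sketched below in Python; the factor names, weights, and target values are hypothetical and chosen only to illustrate the idea.

```python
def target_priority(factors: dict, weights: dict) -> float:
    """Weighted sum of normalized priority factors (each assumed to lie in [0, 1])."""
    return sum(weights[k] * factors.get(k, 0.0) for k in weights)

weights = {"identity_value": 0.2, "information_need": 0.25, "threat": 0.3,
           "engagement_opportunity": 0.15, "fire_control_need": 0.1}

targets = {
    "target_07": {"identity_value": 0.9, "information_need": 0.6, "threat": 0.8,
                  "engagement_opportunity": 0.3, "fire_control_need": 0.0},
    "target_12": {"identity_value": 0.4, "information_need": 0.9, "threat": 0.2,
                  "engagement_opportunity": 0.1, "fire_control_need": 0.0},
}

# Observe the highest-priority target first.
ranked = sorted(targets, key=lambda t: target_priority(targets[t], weights), reverse=True)
print(ranked)   # ['target_07', 'target_12']
```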
Another view of general sensor management functions is provided by Ed Waltz and James Llinas in the
book, Multisensor Data Fusion, published by Artech House, Inc. in 1990. The chart shows an expansion
of the JDL Level 4 process with connections to Level 1 processing (on the bottom left hand side of the
figure) and with the Level 2 and Level 3 processing shown on the top left hand side of the figure. The
types of functions described by Waltz and Llinas include: i) prioritization of threats and modeling of what
information is “needed” by the other fusion processes, ii) assignment of target or entity priorities (what
should be observed first), iii) an assignment algorithm that links targets to sensors that could observe
those targets, iv) allocation, scheduling and control of the overall sensor system, and v) an interface to
sensors to control or task the sensor observations. These functions are supported by sensor
performance and prediction models, and spatial-temporal control computations.
These functions are generic and meant to show the types of data and computations that might be
included in the level-4 fusion processing, to optimize the use of sensors to obtain vital information.
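As a small illustration of the target-to-sensor assignment step, the sketch below uses the Hungarian algorithm (via SciPy's linear_sum_assignment) to pair sensors with targets so that total expected utility is maximized; the utility matrix values are invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Expected utility of assigning each sensor (row) to each target (column).
# In practice these values would come from the sensor performance and prediction
# models evaluated against the prioritized target list; here they are invented.
utility = np.array([
    [0.9, 0.3, 0.1],   # radar
    [0.4, 0.8, 0.2],   # infrared search and track
    [0.2, 0.5, 0.7],   # electronic support measures
])

# linear_sum_assignment minimizes cost, so negate the utilities to maximize them.
sensor_idx, target_idx = linear_sum_assignment(-utility)
for s, t in zip(sensor_idx, target_idx):
    print(f"sensor {s} -> target {t} (utility {utility[s, t]:.1f})")
```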
The final chart for the optimization discussion is simply a list of potential measures of performance
(MOP) and measures of effectiveness (MOE) for the level-4 process. This list is a set of possible metrics
that could be used to evaluate how well the sensors are being managed or tasked to obtain information
for the remaining part of the data fusion system. Thus, categories of MOP/MOE include: i) detection
and leakage – how well are we detecting targets of interest and to what extent are targets “slipping by”,
ii) target tracking – how accurately are targets or entities being tracked, iii) identification – how well is
our system identifying potential targets or entities, iv) raid assessment – for a military operation such as
a raid against opposing targets, how well did we correctly determine the number of targets and their
locations, was the information timely enough to support the operation, v) emissions – how well did we
manage the overall power used and emitted by the sensors, did we emit more energy than necessary,
vi) sensor utilization – were the sensors used effectively both individually and in concert, vii) system
response – did we obtain data and information in a sufficiently timely manner to meet the needs of the
system users, and viii) computational performance – how much computing, memory and
communications bandwidth were used to obtain the results.
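A minimal sketch of how a few of these metrics might be computed from logged system data is shown below; the input counts, track errors, and emission budget are illustrative only.

```python
import numpy as np

def level4_mops(n_true_targets, n_detected, track_errors_m, emitted_joules, emission_budget_joules):
    """Compute a few example measures of performance from logged system data."""
    detection_rate = n_detected / n_true_targets                       # detection MOP
    leakage = 1.0 - detection_rate                                     # targets that "slipped by"
    track_rms_m = float(np.sqrt(np.mean(np.square(track_errors_m))))   # tracking MOP
    emission_ratio = emitted_joules / emission_budget_joules           # emissions MOP
    return {"detection_rate": detection_rate, "leakage": leakage,
            "track_rms_m": track_rms_m, "emission_ratio": emission_ratio}

print(level4_mops(n_true_targets=20, n_detected=17, track_errors_m=[3.0, 5.0, 2.0],
                  emitted_joules=40.0, emission_budget_joules=100.0))
```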
We turn now to another concept for level-4 processing, namely models based on commerce. In the
electronic-business world, we are very familiar with exchanges such as eBay, which effectively acts as a
global yard sale. Potential consumers use an electronic exchange service to access products being
offered by sellers. The consumers don’t know a priori where or who the sellers are, and the sellers
don’t know, a priori, where or who the potential consumers/buyers are. An automated auction
mechanism is used to match the available services or goods to the requested services or goods by
potential buyers. Also, rating systems are available to assist buyers in evaluating the honesty and
carry-through of sellers. Other types of exchanges include eLance (for linking free-lance workers with
people needing their services), Amazon for buying books and other goods, and many others.
It is natural to consider the use of e-commerce concepts for level-4 processing. Numerous algorithms,
software-based exchange services, and other concepts can be adapted for level-4 processing.
Continuing this concept, we see that potential information consumers, shown on the right-hand side of
the chart, may include analysts, decision-makers, or even other information processing systems. On
the left-hand side of the chart, information suppliers include sensors, other information processing
systems, and communications channels to link the data to the potential users. Thus, conceptually, a
sensor manager can be considered as an information/resource broker linking information suppliers (the
sensors, external databases, possible human observers) and the communications channels to the
information consumers (the analysts, decision-makers and other fusion processes).
Thus, it would appear easy to use the e-commerce, market-based concept for level-4 processing. After
all, we now routinely use on-line services to link sellers to consumers. Moreover, dynamic auction
techniques allow determining which consumer “wins” the services of which seller or sellers. However,
there are some special opportunities and challenges for level-4 processing. Let’s consider two cases in
the next two charts.
In our first case, we have a situation in which a single supplier (a sensor) can satisfy multiple consumers
with a single product or service. That is, if two different consumers want image data about a particular
region, it would certainly be possible for a single data collect (e.g., by an aircraft, drone, or satellite) to
provide the requested information. Thus, the “seller” can use the same information to be “sold” to
multiple customers. This provides both an opportunity and challenge for level-4 processing using
auction methods.
The second case is more challenging. Suppose an information consumer seeks information that cannot
be supplied by a single sensor or source. For example, a user wants to know both the location of an
entity within a specified level of accuracy AND know the entity’s identity to a specified level. It is
common that a single sensor may not be able to satisfy both requirements. Moreover, it may be that
no set of sensors can fully satisfy this information need. Thus, the market exchange must somehow
be able to find a suitable collection of resources that, when fused, best meets the set of information
requirements. So, the information broker must be intelligent to do the best job possible in scheduling
and tasking the information resources.
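The sketch below illustrates, under deliberately simplified assumptions, what such an intelligent broker must do: search over small bundles of sources and pick the cheapest one whose fused product meets both the location-accuracy and identification requirements. The source characteristics and the fusion model (independent-variance combination for location, best single source for identification) are assumptions made only for the example.

```python
import itertools

# Each candidate source advertises an illustrative location error (standard deviation
# in meters), an identification confidence in [0, 1], and a tasking cost.
sources = {
    "radar":  {"loc_std_m": 15.0,  "id_conf": 0.2, "cost": 1.0},
    "eo_cam": {"loc_std_m": 40.0,  "id_conf": 0.9, "cost": 2.0},
    "esm":    {"loc_std_m": 200.0, "id_conf": 0.7, "cost": 0.5},
}

def fused_quality(bundle):
    """Deliberately simplified fusion model: combine location variances as if the
    sources were independent, and take the best single identification confidence."""
    inv_var = sum(1.0 / sources[s]["loc_std_m"] ** 2 for s in bundle)
    loc_std = (1.0 / inv_var) ** 0.5
    id_conf = max(sources[s]["id_conf"] for s in bundle)
    return loc_std, id_conf

def cheapest_satisfying_bundle(max_loc_std_m, min_id_conf):
    """Exhaustively search small bundles for the cheapest one meeting both requirements."""
    best = None
    for r in range(1, len(sources) + 1):
        for bundle in itertools.combinations(sources, r):
            loc_std, id_conf = fused_quality(bundle)
            cost = sum(sources[s]["cost"] for s in bundle)
            if loc_std <= max_loc_std_m and id_conf >= min_id_conf:
                if best is None or cost < best[1]:
                    best = (bundle, cost)
    return best

# Request: location to within 20 m AND identity with at least 0.8 confidence.
print(cheapest_satisfying_bundle(max_loc_std_m=20.0, min_id_conf=0.8))
# (('radar', 'eo_cam'), 3.0): no single source satisfies both requirements.
```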
An example of a market-based design for sensor allocation is shown in this chart. It is based on work
conducted by Tracy Mullen et al. (see V. Avasarala, T. Mullen and D. L. Hall (2009), “A market-based
sensor management approach”, Journal of Advances in Information Fusion, April 2009). The design
treats sensors and network communications as “sellers” of services and system users as “consumers”.
A sensor manager acts as a market auctioneer, seeking resources in response to information needs from
consumers and a mission manager. In effect, the “sellers” (the sensors, sensor processing, and
transmission channels) produce finished products of data related to environmental scans and target
tracks. The consumers request or “bid” for information, which is translated into specific types of
tasking by the sensor manager. The sensor manager sends information about the status of the sensing
and communications system to the mission manager which has responsibility for providing information
about resource budgets and availability.
The previously referenced paper by Avasarala, Mullen and Hall describes the actual market-based
allocation and auction mechanisms. The auction process involves the concept of tatonnement. This is
a type of optimization technique to dynamically seek a price equilibrium which best meets the demands
of the “buyers” (viz., what constitutes an acceptable “price” for the requested resources) and the
capabilities of the suppliers (viz. what the suppliers can “deliver” at an acceptable “price”).
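The core of tatonnement is a simple price-adjustment loop: raise the price of a resource when demand exceeds supply and lower it otherwise, until the market approximately clears. The Python sketch below illustrates the idea with invented demand and supply curves; it is not the actual mechanism from the referenced paper.

```python
def tatonnement(demand, supply, price=1.0, step=0.05, tol=1e-3, max_iters=10000):
    """Adjust the price in proportion to excess demand until the market for a sensing
    resource (e.g., radar dwell time) approximately clears."""
    for _ in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += step * excess       # raise the price when over-demanded, lower it otherwise
        price = max(price, 1e-6)     # keep the price positive
    return price

# Illustrative curves: consumers demand less dwell time as the price rises,
# while the sensor suite offers more of it at higher prices.
demand = lambda p: 10.0 / p
supply = lambda p: 2.0 * p
p_star = tatonnement(demand, supply)
print(p_star, demand(p_star), supply(p_star))   # price near sqrt(5) ~ 2.24, demand ~ supply
```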
We close the discussion of level-4 processing by making some general remarks about sensor
management.
• First, the ever-increasing capabilities of infrastructure technologies (such as improved connectivity
and data rates, improved access to cloud-based computing, and enhanced sensing and processing
capabilities within individual sensors) provide opportunities for increasingly sophisticated sensor
(and system) management.
• Second, sensors and sources of information are required by more than just the fusion
process/processor. Sensors and sources of information are not necessarily “owned by” or
under the strict control of the data fusion system. As a result, there is contention for
sensor services as well as for computational and communications resources.
Hence, the level-4 process is more complicated than simply a problem of optimal process
control.
• Third, in an integrated system (such as our example of the modern tactical aircraft), a coupling
exists between sensor management, data fusion, and the sensors. This leads to a challenging
global optimization problem which is exacerbated by issues such as the potential inability of any
set of resources to satisfy the needs and requests of users.
• Finally, this subject has been “under-researched” in the data fusion community. There are
many research opportunities here, especially when considering distributed systems that involve
both physical sensors and human observations.
A final tip is that situation awareness for any application such as military situation awareness, control of
complex machinery, or monitoring emergency events involves effectively using system resources, such
as sensors, computational capabilities and communications resources so that the user gets the
information that they need, when they need it. In a sense, it is analogous to how the human brain helps
to manage our attention to provide critical information when it is needed, rather than simply trying to
react to all of our senses.