INFOSYMBIOTIC SYSTEMS
The Power of Dynamic Data Driven Application Systems
Report of a Workshop held at Arlington VA
August 30-31, 2010
Funded by AFOSR xxxx, NSF yyyy
Table of Contents
EXECUTIVE SUMMARY
1. INTRODUCTION AND MOTIVATION
1.0 DDDAS INFOSYMBIOTIC SYSTEMS
1.1 WHY IS NOW THE RIGHT TIME FOR FOSTERING THIS KIND OF RESEARCH?
1.2 WHAT NATIONAL AND INTERNATIONAL CRITICAL CHALLENGES REQUIRE DDDAS CAPABILITIES?
1.3 WHAT ARE THE SCIENCE AND TECHNOLOGY CHALLENGES AND WHAT ONGOING RESEARCH ADVANCES ARE NEEDED TO ENABLE DDDAS?
1.4 WHAT KINDS OF PROCESSES, VENUES AND MECHANISMS ARE OPTIMAL TO FACILITATE THE MULTIDISCIPLINARY NATURE OF THE RESEARCH NEEDED IN ENABLING SUCH CAPABILITIES?
1.5 WHAT PAST OR EXISTING INITIATIVES CAN CONTRIBUTE, AND WHAT NEW ONES SHOULD BE CREATED TO SYSTEMATICALLY SUPPORT SUCH EFFORTS?
1.6 WHAT ARE THE BENEFITS OF COORDINATION AND JOINT EFFORTS ACROSS AGENCIES, NATIONALLY AND IN SUPPORTING SYNERGISTICALLY SUCH EFFORTS?
1.7 WHAT KINDS OF CONNECTIONS WITH THE INDUSTRIAL SECTOR CAN BE BENEFICIAL? HOW CAN THESE BE FOSTERED EFFECTIVELY TO FOCUS RESEARCH EFFORTS AND EXPEDITE TECHNOLOGY TRANSFER?
1.8 HOW CAN THESE NEW RESEARCH DIRECTIONS BE USED TO CREATE EXCITING NEW OPPORTUNITIES FOR UNDERGRADUATE, GRADUATE AND POSTDOCTORAL EDUCATION AND TRAINING?
1.9 WHAT NOVEL AND COMPETITIVE WORKFORCE DEVELOPMENT OPPORTUNITIES CAN ENSUE?
2. ALGORITHMS, UNCERTAINTY QUANTIFICATION, MULTISCALE MODELING & DATA ASSIMILATION
2.1 DYNAMIC DATA ASSIMILATION
2.2 LARGE SCALE MODELING
2.3 UNCERTAINTY QUANTIFICATION (UQ) AND MULTISCALE MODELING
2.4 KEY CHALLENGES
3. BUILDING AN INFRASTRUCTURE FOR DDDAS
3.2 EXISTING INFRASTRUCTURE
3.3 DYNAMIC RESOURCE MANAGEMENT
3.4 RESEARCH NEEDS
4. SYSTEMS SOFTWARE
4.1 DDDAS & SYSTEMS SOFTWARE
4.2 PROGRAMMING ENVIRONMENT
4.3 AUTONOMOUS SYSTEMS & RUNTIME APPLICATIONS SUPPORT
5. SUMMARY OF FINDINGS AND RECOMMENDATIONS
WORKS CITED
APPENDIX A APPLICATIONS
A.1 DYNAMIC DATA-DRIVEN COMPUTATIONAL INFRASTRUCTURE FOR REAL-TIME PATIENT-SPECIFIC LASER TREATMENT OF CANCER
Executive summary
Over the last decade the Dynamic Data Driven Applications Systems (DDDAS) paradigm has
put forward a vision of dynamic integration of simulation with observation and actuation, in a
feedback control loop, engendering new scientific and engineering capabilities. This vision,
coupled with other recent disruptive technological and methodological drivers and advances,
such as the advent of ubiquitous sensing capabilities and multicore systems, is poised to
transform all areas where information systems impact human activity. Such integration can
impact and transform many domains, including critical infrastructure, defense and homeland
security, and mitigation of natural and anthropogenic hazards. Two recently released studies,
the Air Force Technology Horizons Report and the National Science Foundation
Cyberinfrastructure for the 21st Century (CF21) Report, put forth visions for science and
technology that highlight the need for such integration of sensing, data, modeling and decision
making. The challenges for enabling the capabilities envisioned by DDDAS were articulated
from the outset, starting with the 2000 DDDAS Workshop, along with the advances needed in
several research directions: applications modeling under conditions of dynamic data inputs
streamed into the executing model; algorithms that are stable under perturbations from
dynamic data inputs; interfaces of executing application models with observation and
actuation systems; and support for the dynamic execution requirements of such environments.
It was also recognized that efforts in these directions needed to be pursued in the context of
synergistic, multidisciplinary research. Such efforts for enabling the DDDAS vision have
started under governmental support, and progress has been made, together with increasing
recognition of the power of the DDDAS concept. However, before such dynamic integration
can be created and supported in robust ways, further efforts are needed to fully address the
challenges articulated above. Moreover, it has also become recognized that, while
seeding-level and some limited collaborative government sponsorship has been highly fruitful,
the multiple scientific challenges (in computing, networks and systems software, large and
streaming data, error and uncertainty, sensor networks, and data fusion and visualization)
that need to be overcome require sustained and systematic support. Surmounting these
challenges requires multi-disciplinary teams and multi-agency sponsorship at stable and
adequate levels to sustain the necessary extended and extensive inquiry over the next decade.
In this report … InfoSymbiotic Systems and InfoSymbiotics - the power of Dynamic Data
Driven Applications Systems …
Essential characteristics of DDDAS environments are the dynamic nature of the data streamed
into the application, the typically large scale and complexity of the applications, and the
analysis and likely feedback mechanisms. Ensemble Kalman/particle filters, methods for
non-Gaussian dynamical systems, large-scale parallel solution methods and tools for
deterministic and stochastic PDEs (like those encapsulated in the PETSc library and stochastic
Galerkin/collocation methods), new algorithms for large-scale inverse and parameter
estimation problems, and advances in large-scale computational statistics and
high-dimensional signal analysis are enabling application of DDDAS to many realistic
large-scale systems. Key challenges remain in integrating the loop from measurements to
predictions and feedback for highly complex systems; dealing with large, often unstructured
and streaming data and complex new computer architectures; developing resource-aware and
resource-adaptive methodology; and developing application-independent algorithms for model
analysis and selection.
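As an illustration of one of the assimilation building blocks named above, the following is a minimal sketch of a stochastic ensemble Kalman filter analysis step. The function name, the toy dimensions, and the use of perturbed observations are our own illustrative choices, not a prescribed DDDAS interface:

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step.

    ensemble : (n_state, n_members) array of forecast states
    obs      : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[1]
    # Forecast mean and anomalies
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean
    # Sample covariances projected through the observation operator
    HA = H @ A
    P_hh = (HA @ HA.T) / (n_members - 1) + R
    P_xh = (A @ HA.T) / (n_members - 1)
    K = P_xh @ np.linalg.solve(P_hh, np.eye(len(obs)))  # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_members).T
    return ensemble + K @ (obs_pert - H @ ensemble)
```

The same update structure carries over to large-scale systems, where the anomaly products are never formed explicitly and the linear solve is replaced by iterative methods.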
Infrastructure capable of supporting DDDAS needs to support complex, intelligent applications
using new programming abstractions and environments able to ingest and react to dynamic
data. Components of the infrastructure include sensors, actuators, resource providers and
decision makers. Data flows among them may be streamed in real time, historical, filtered,
fused, or metadata. Research challenges include architectures to support the complex and
adaptive applications, data and networks; tools to manage the workflows and execution
environments; and integration and interoperability issues. Test beds (hardware and software)
are needed for advancing methodology and theory research.
Systems software must evolve to support DDDAS components that need to execute on
heterogeneous platforms with widely varying capabilities, fed by real-time sensing. Algorithms
and platforms must evolve symbiotically to effectively utilize each other's capabilities.
Research challenges in systems software remain in runtime support for program adaptation
and fault-tolerance, in new retargeting compilers that can generate efficient code from a
high-level mathematical or algorithmic description of the problem, and in responding to the
rapid proliferation of heterogeneous architectures.
1. Introduction and Motivation
1.0 DDDAS InfoSymbiotic Systems
The core ideas of the vision engendered by the Dynamic Data Driven Application Systems
(DDDAS) concept have been well articulated and illustrated in a series of workshop reports
and research project presentations (Douglas and Darema, DDDAS Report 2000; Douglas
and Darema, DDDAS Report 2006; Douglas 2000) and in the series of International DDDAS
Workshops (www.dddas.org). InfoSymbiotic Systems embody the power of the DDDAS
paradigm, where data are dynamically integrated into an executing simulation to update or
augment the application model and, conversely, the simulation steers the measurement
(instrumentation and control) process. Work on DDDAS supported through seeding has
accomplished much, but the confluence of technological and methodological advances in the
last decade has produced added opportunities for integrating simulation with observation
and actuation, in ways that can transform all areas where information systems impact
human activity.
Starting with the NSF 2000 DDDAS Report, efforts for enabling the DDDAS vision have
commenced under governmental support, in the form of seeding-level projects and a 2005
cross-agency proposal solicitation. Under this initial support, progress has started to be
made, together with increasing recognition of the power of the DDDAS concept. The
2005 NSF Blue Ribbon Panel characterized DDDAS as a visionary and revolutionary concept.
The recently enunciated National Science Foundation vision for Cyberinfrastructure for the
21st Century (CF21) (NSF 2010) lays out “a revolutionary new approach to scientific
discovery in which advanced computational facilities (e.g., data systems, computing hardware,
high speed networks) and instruments (e.g., telescopes, sensor networks, sequencers) are
coupled to the development of quantifiable models, algorithms, software and other tools and
services to provide unique insights into complex problems in science and engineering.” The
DDDAS-IS paradigm is well aligned with, and enhances, this vision. Several task forces set up
by NSF have also reported back with recommendations reinforcing this thrust. In a similar,
if more focused and futuristic, vision, the recent Technology Horizons Report developed under
the leadership of Dr. Werner Dahm, as Chief Scientist of the Air Force, declares that “Highly
adaptable, autonomous systems that can make intelligent decisions about their battle space
capabilities … making greater use of autonomous systems, reasoning and processes
...developing new ways of letting systems learn about their situations to decide how they can
adapt to best meet the operator's intent” are among the technologies that will transform the
Air Force 20 (“10+10”) years from now (Technology Horizons 2010, Dahm 2010).
In essence a DDDAS is one in which data is used in updating an executing simulation and
conversely simulation outcomes are used to steer the observation process.
Capitalizing on the promise of the DDDAS concept, a workshop was convened
to address further opportunities that can be pursued and derived from
DDDAS-IS approaches and advances. The Workshop, co-sponsored by the Air Force Office of
Scientific Research and the National Science Foundation and attended by over 100
representatives from academia, government and industry, explored these issues on
August 30-31, 2010. The Workshop was organized into Plenary Presentations, Working Group
sessions, and out-briefs of the Working Groups. The plenary presentations addressed several
key application areas, the impact of new capabilities enabled through DDDAS, and
progress made by researchers in advancing several research areas contributing towards
enabling DDDAS capabilities for the particular application at hand. Prior to the workshop, a
number of questions had been developed by the workshop co-chairs together with the
working group co-chairs and participating agencies' program officials. The working group
discussions addressed these questions posed to the attendees, as well as items that were
brought up during the discussions. This report summarizes the deliberations at and
subsequent to the workshop. The first chapter of this report addresses key questions
related to new opportunities, key challenges and impacts in pursuing research on
DDDAS-IS. Subsequent chapters address more specific issues related to research on
algorithms and dynamic data assimilation, uncertainty quantification, data management,
systems software and the supporting cyberinfrastructure that DDDAS-IS environments entail.
1.1 Why is now the right time for fostering this kind of research?
1.1.1 Scale and Complexity of Natural, Engineered and Societal Systems: The increase
in both complexity and degree of interconnectivity of systems, including natural and
engineered systems, large national infrastructure systems ("smart grids") such as
electric power delivery systems, and threat and defense systems, has provided
unprecedented capabilities, yet at the same time this complexity has added fragility to
the systems, and the interconnectivity across multiple systems has tremendously
increased the impact of cascading effects across the entire set of systems of even small
failures in a subset of any of the component systems. This new reality has led to the
need for more adaptive analysis of systems, with methods that go beyond the static
modeling and simulation methods of the past, to new methods such as those that can
benefit from augmenting the system models through monitoring and control/feedback
aspects of the systems, thus creating the need for DDDAS approaches for designing and
managing such systems. In this report we state that today's complex systems are DDDAS
or InfoSymbiotic in nature. While preliminary efforts in DDDAS (such as those created
through the 2005 cross-agency DDDAS Program Solicitation) brought advances in DDDAS
techniques in some applications before (including, for example, for management and
fault-tolerance of electric power grids), many current systems have complexity and
dynamicity in their state space that make the use of DDDAS approaches essential and
imperative.
Modern electric power grids use complex control systems to guide power production from
distributed energy sources and distribute power more efficiently, yet new vulnerabilities
arise that may allow massive power outages, illustrating the complexity and fragility.
1.1.2 Algorithmic Advances: A second factor that favors acting now is the advance in a
number of algorithms that enable DDDAS technologies, including non-parametric
statistics that allow inference for non-Gaussian systems, uncertainty quantification,
advances in numerics of stochastic differential equations (SDEs), parallel
forward/inverse/adjoint solvers, smarter data assimilation (that is, dynamic assimilation
of data with feedback control to the observation and actuation system), math-based
programming languages and hybrid modeling systems. Simulations of a system are
becoming synergistic partners with observation and control (the "measurement" aspects
of a system).
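The "synergistic partner" relationship described here can be sketched as a generic control loop. Everything in this sketch (the function names, the scalar toy model in the usage note, the steering rule) is an illustrative assumption rather than a prescribed DDDAS interface:

```python
def dddas_loop(model_step, assimilate, steer_sensor, sensor, state, n_cycles):
    """Generic DDDAS cycle: simulate, observe, assimilate, steer.

    model_step(state)      -> advance the simulation one step
    sensor(config)         -> take a measurement under a configuration
    assimilate(state, obs) -> correct the state with the new observation
    steer_sensor(state)    -> decide the next measurement configuration
    """
    config = None
    for _ in range(n_cycles):
        state = model_step(state)       # forward simulation
        obs = sensor(config)            # measurement (possibly steered)
        state = assimilate(state, obs)  # dynamic data assimilation
        config = steer_sensor(state)    # simulation steers the observation
    return state
```

With a biased toy model (`lambda s: s + 1.0`), a sensor that reports a fixed truth, and a simple blending assimilator (`lambda s, o: 0.5 * s + 0.5 * o`), the loop pins the drifting simulation near the observed value, which is exactly the corrective behavior the paragraph above describes.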
1.1.3 Ubiquitous Sensors: A third factor is the increasing ubiquity of sensors – low cost,
distributed intelligent sensors have become the norm. Some, like phone geo-location
information and instruments in automobiles, are paid for and already in place, collecting
and/or transmitting data without the user’s knowledge or involvement. There are
tradeoffs between data and bandwidth, but in general there is a flood of data that
needs to be filtered, transferred to applications that require the data, and possibly
partially archived.
1.1.4 Transformational Computational Capabilities: A fourth factor is the disruptive
transformation of the computing and networking environment with
multicore/manycore chips, heterogeneous architectures like GPUs, cloud computing,
and embedded computing, leading to unparalleled levels of computing at minimal
cost. Network bandwidths have also undergone transformative advances – e.g., the
ESNET network of the DOE advertises the ability to transfer 1 TB in less than 8 hours
(Dept. of Energy 2010). Commercial networks expect to provide 100 Gbps in the near
future (Telecom 2010).
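For a sense of scale, the sustained rate implied by the ESNET figure, and the time the quoted 100 Gbps commercial rate would need for the same transfer, can be checked with a few lines of arithmetic (this only restates the numbers quoted above, using decimal terabytes):

```python
TB = 1e12      # bytes (decimal terabyte)
hours = 8

# Sustained rate implied by "1 TB in less than 8 hours"
rate_bps = TB * 8 / (hours * 3600)   # bits per second, about 278 Mbps
print(f"ESNET figure implies >= {rate_bps / 1e6:.0f} Mbps sustained")

# Time for the same 1 TB at a 100 Gbps commercial link
t_100g = TB * 8 / 100e9              # seconds; 80 s
print(f"1 TB at 100 Gbps takes about {t_100g:.0f} s")
```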
In summary, new technology advances drive the world, and research has to stay ahead
of trends. We are in the midst of a leap-frogging phenomenon due to simultaneous
changes in sensors, data collection and analysis, networking and computing. These
create new platforms and environments for supporting the complex systems of interest
here, providing further motivation for embarking on comprehensive efforts for creating
DDDAS capabilities.
1.2 What National and International critical challenges require DDDAS
capabilities?
National and international critical challenges that need DDDAS capabilities include:
- the Big Data problem
- advancing weather and climate prediction technology
- mitigation, forecasting, and response to natural hazards (hurricanes/typhoons,
  floods, wildland fires, tornadoes, thunderstorms, and other severe weather)
- homeland and national security
- detection of network intrusions
- transportation (surface, sea, rail, air)
- water management
- environmental monitoring
- protection of space assets
- critical infrastructure like power delivery systems, reinvigorating longstanding
  power sources (nuclear), current issues with the power grid, and renewable
  energy (e.g., solar, water, and wind power)
- searching in visualizations
- medical and pharmaceutical applications – cancer treatment, surgery
  treatments, pill identification and delivery, misuse of medications, and gene and
  proteomics
- industrial applications – manufacturing, medical, aerospace,
  telecommunications, information technology/computer industry
DDDAS has direct application for decision-making in anti-terrorism, homeland security
and real battlefields. For example, real, dense, cluttered battlefields contain fixed and
moving objects and a myriad of sensor types (radar, EO/IR, acoustic, ELINT, HUMINT,
etc.) that need to be fused in real time. They produce a deluge of data, such as video
data that is uncorrelated with radar, SIGINT, HUMINT and other non-optical data; hence
Lt. Gen. Deptula's statement, ``swimming in sensors and drowning in data''. These data
are incomplete and contain errors, and need to be optimally processed to give
unambiguous target state vectors, including time.
As another example, in civilian critical infrastructure environments, the recent oil spill
in the Gulf of Mexico showed the need for better predictions of the spread of the oil in
order to take more effective mitigating actions, and moreover to address the aftermath,
which created a new problem: that of determining the residual oil and its locations.
The observations of residual oil involve a large set of heterogeneous sources of data,
from satellites to physical inspection and ocean water sampling – measurements that
are dynamic in nature as well as at different scales, requiring data fusion to combine
the data.
Similar applications exist in problems with autonomous vehicles, protection of space
assets, real-time river basin management, structural health monitoring,
simulation-assisted surgery, space weather prediction/modeling with swarms of
satellites, multi-scale dynamic gene expression and proteomics data fusion, intelligent
search machines for searching in physical and virtual environments, image-guided
real-time surgical control, and production planning and supply chain management.
1.3 What are the Science and Technology Challenges and what ongoing
research advances are needed to enable DDDAS?
1.3.1 Cyberinfrastructure: Advances in mathematical modeling, algorithms and the
understanding of errors and uncertainty place additional pressure on the need for
efficient infrastructure (e.g., operating at scale, with a concomitant increase in failures
at all levels and failsafe implementation requirements). Multiple coordination strategies
in the infrastructure of a single DDDAS are essential to ensure successful results. The
infrastructures for DDDAS need to support complex, intelligent applications using new
programming abstractions and environments able to ingest and react to dynamic data.
Different infrastructures will be needed for different application types. A national,
persistent DDDAS infrastructure connecting new petascale compute resources via 100
Gbps networks to special-purpose data devices could support a range of large-scale
applications. Easily deployable and reliable systems will be needed over ad-hoc
networks in the field to support medical, military, and other applications operating in
special conditions. The majority of researchers operating in universities and national or
industrial laboratories will require DDDAS systems that securely connect external data
sources to institutional and distributed resources.
1.3.2 Dataology and Big Data: A new definition of what constitutes data needs to be
developed. Digital, analog, symbolic, picture, and computational data are just the
beginning of the things that encompass data. A whole new field, called Dataology, is
being developed, both in academia and industry.
There have been impressive recent advances in commercial and academic capabilities
for the Big Data problem (Berman 2010). However, efficient, scalable, robust,
general-purpose infrastructure for DDDAS has to address the Big Data problem
(particularly for Clouds and Grids) as well as the Dynamic Data problem – characterized
by either (i) spatial-temporal specific information, (ii) varying distribution, or (iii) data
that can be changed in some way, e.g., either operated upon in transit or by the
destination of the data, so that the load can be changed for advanced scheduling
purposes.
The Big Data problem is now a near catastrophe. Streaming sensors and supercomputers
generate vast amounts of data. In some cases, nothing ever uses the stored data after
archiving. It is imperative that means be developed to dynamically handle this flood of
data, so that train-loads of disk drives and tapes are not wasted and the computations
are useful. Annotating data with ontologies, so that data and models are matched, is
one approach. By identifying multiple models, different ones can be compared to see
which are better in the same context.
1.3.3 Streaming Data: Typical algorithms today deal with persistent data, not streaming
data. New algorithms and software are needed for streaming data that allow on-the-fly,
situation-driven decisions about what data is needed now, and that reconfigure the data
collection sensors in real time to push or pull in more useful data (rather than just pull
in more data). The granularity, modality, and field of view should all be targeted. The
data mining part of a DDDAS requires similar advances. Data security and privacy issues
frequently arise in data collection and must be addressed. Smart data collection means
faster results that are useful.
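As a toy illustration of such situation-driven steering (the threshold, mode names and reconfiguration rule are all invented for this sketch), a stream processor might watch incoming readings and request a higher-granularity sensing mode only while the signal is interesting:

```python
def steer_stream(readings, threshold=5.0):
    """Yield (reading, requested_mode) pairs for a sensor stream.

    Requests 'high' granularity while a reading exceeds the threshold,
    'low' otherwise, so bandwidth is spent only on useful data.
    """
    for r in readings:
        mode = "high" if abs(r) > threshold else "low"
        yield r, mode

stream = [1.0, 2.0, 7.5, 9.1, 3.0]
decisions = list(steer_stream(stream))
```

Because the function is a generator, the decision is made per reading as it arrives, without buffering the whole stream, which is the essential property for on-the-fly reconfiguration.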
1.3.4 Error and Uncertainty: Data integrity and quality are essential to DDDAS.
Uncertainty quantification (UQ) is a mathematical field that has made great strides in
recent years and is now positioned to lead improvements in DDDAS. Data in applications
are essentially worthless unless the error in the data is known or well estimated. Both
systematic and random errors in the data must be found, identified, and catalogued.
There is a cost for using UQ, which must be part of an optimization process trading time
against quality of results.
Reducing the quantity of data is essential; this comes back to the Big Data problem. We
need to develop a formal methodology and software, for general problems, to specify
what is important in data (i.e., data pattern recognition through templates or some
other system) and what to do when something important is found, along with a measure
of uncertainty. Reducing redundancy and describing data by features instead of quantity
is essential. A common method in game consoles is that only the data changes are
transmitted, not whole scenes. Similar strategies need to be developed for DDDAS.
Scaling the models computationally to reduced data means faster results.
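The "transmit only the changes" idea can be sketched in a few lines. This is a deliberately simplified delta scheme over flat dictionaries; real DDDAS streams would need versioning, deletion handling and loss recovery on top of it:

```python
def make_delta(old, new):
    """Return only the entries of `new` that differ from `old`."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(old, delta):
    """Reconstruct the new state from the old state plus the delta."""
    merged = dict(old)
    merged.update(delta)
    return merged

frame0 = {"t": 0, "sensor_a": 3.1, "sensor_b": 2.2}
frame1 = {"t": 1, "sensor_a": 3.1, "sensor_b": 2.9}
delta = make_delta(frame0, frame1)  # only 't' and 'sensor_b' travel
```

Only the changed fields cross the network; the receiver applies the delta to its last known frame to recover the full state.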
1.3.5 Sensor Networks and Data Fusion: Searching for and discovering sensors and data
must become a simple function, both algorithmically and in software. Different
stakeholders will benefit from the ability to detect content on the fly and to couple
sensor data with domain knowledge.
Methods for dynamically fusing data from multiple sensors and models will have to be
developed that are on-demand, context dependent, actionable, and fast. A good example
is identifying when someone is stressed, in a manner that would be of interest to
homeland security at transportation or building sites.
New strategies are needed for sensor, computing, and network scheduling. Scheduling
should be quasi-optimal, intelligent and automatic, similar to what is expected when
using a batch system on a supercomputer. Where, when, and how to do the processing
must be decided so that data can be delivered and reconfigured, models changed, and
the DDDAS made to work symbiotically. "Where" and "how" include locally, centrally,
distributed geographically through networks, or some combination. "When" includes
now or later, and must take into account whether the results are critical in nature or not.
1.3.6 Visualization: Very large-scale data visualization is an area of interest in DDDAS.
What is now visualized in a CAVE environment will, in a few years, be visualized using
flat panel screens. There is already a new area of research in 3D visualization on power
walls that does not require special glasses. Where a person stands to see in 3D depends
on features of the person's eyes; since people are different, a modest number of people
can see the material together by standing in different locations. More research in this
area is needed and will make DDDAS more useful.
Tools are needed to disambiguate semantic non-orthogonality in data and models
(time, space, resolution, fidelity, science, etc.). We also need to bridge the gap between
the differential rates of innovation in data capture, computation, bandwidth, and
hardware.
1.4 What kinds of processes, venues and mechanisms are optimal to facilitate
the multidisciplinary nature of the research needed in enabling such
capabilities?
Numerous processes, venues and mechanisms exist to facilitate the multidisciplinary
nature of the research. Multidisciplinary, cross-directorate programs, sometimes
including participation with other agencies, have been extremely popular, leading to
an overwhelming response of proposals from the research community and initiating
cross-disciplinary teams. The NSF IGERT program is an opportunity to educate students
in multidisciplinary settings. A challenge is to establish stable, long-term funding on a
regular basis. An advance in the past five years has been more journals and networking
venues for presenting multidisciplinary research results, some specifically for
DDDAS-related work.
1.5 What past or existing initiatives can contribute, and what new ones should
be created to systematically support such efforts?
Past and existing initiatives such as DDDAS, ITR, CDI, and IGERT can contribute to
facilitating the multidisciplinary nature of the research. These have largely been
discontinuous programs, so enabling continuity of nucleated collaborations remains a
challenge. The vast response of researchers to the calls for proposals has led to low
success rates (below 5%), highlighting the need for further significant, continuous
investment from other agencies. Long-term continuity is particularly important for
DDDAS teams, as each problem requires the participation of people from multiple fields
– a domain/application specialist, a computer science specialist, and an algorithm
specialist. Several reports (Academy reports, including the Academy report of 2000, an
interdisciplinary report, and a new NSF report) each make the point that
interdisciplinary projects require a longer gestation/spin-up/incubation period, because
a necessary part of spin-up for each project has been seen to be developing the ability
to communicate across fields; one cannot get five disparate people together and produce
results in a few months. This issue is expected to continue as DDDAS projects advance
to the point where they 'close the loop': previous funding linked at least two of the
several components needed for a DDDAS system, while more mature projects encompass
the full breadth of the problem.
1.6 What are the benefits of coordination and joint efforts across agencies,
nationally and in supporting synergistically such efforts?
There are numerous benefits of coordination and joint efforts across agencies,
nationally, and in supporting such efforts synergistically. Mission-oriented agencies can
provide well-defined problems, clarity on the specific decision information needed,
feedback, and access to key datasets, sensors, or personnel (who may need this formal
partnership, even through a Memorandum of Understanding, to spend time on these
interactions), leading to higher-impact results. Moreover, buy-in from agencies as one of
several sponsors of a project leads to ownership in the result. Finally, sponsorship
across agencies contributes to continuity and stability of funding.
In recent years, there have been several initiatives from various funding agencies to
support research related to various components of DDDAS. These include the ITR, CDI,
and CPS programs of NSF, the PSAAP program of DOE, and the UQ MURI of AFOSR. DOE
has also had calls on multi-scale research and a recent UQ program. Some of the NIH
R01s have a certain flavor of multi-scale and data-driven research. There is a need for
direct, DDDAS-specific calls.
Finally, having multiple agencies involved will foster the creation of new research fields
that will lead to new industries, jobs, wealth creation, and tax revenues as a payback.
New tools will be created by integrating tools from divergent fields that normally
would not work together. Solving these types of new problems is only possible by
integrating computer science, mathematics, and statistics with researchers from the
application areas.
1.7 What kinds of connections with the industrial sector can be beneficial? How
can these be fostered effectively to focus research efforts and expedite
technology transfer?
Connections with the industrial sector can be beneficial and can be fostered effectively
to focus research efforts and expedite technology transfer. Obvious partners include the
energy, manufacturing, medical, aerospace, telecommunications, and information
technology/computer industries. An immediate effect could be an additional source of
research funding through partnerships with academia; sponsorship creates ownership,
which enhances interest and participation in research and its results. Participation in
joint workshops and other methods to enhance communication and the exchange of
information are recommended.
1.8 How can these new research directions be used to create exciting new
opportunities for undergraduate, graduate and postdoctoral education and
training?
These new research directions provide multidisciplinary problems that can excite
students and draw them into the field, creating new opportunities for undergraduate,
graduate, and postdoctoral education and training. This work educates people who
bridge academia and industry, giving them better employment flexibility and
opportunities than a single specialized program. These programs are helping universities
modernize and adapt to the interconnected, complex environment, creating new
alliances within and between departments, connecting national laboratories,
universities, and industry, and nurturing relationships that may endure as students
graduate and seek employment. DDDAS projects thus help universities create new
programs and reinvigorate departments with interdisciplinary links.
1.9 What novel and competitive workforce development opportunities can
ensue?
Novel and competitive workforce development opportunities can evolve from this field.
An example is adult education programs to retrain analysts for DDDAS problems. Such
programs will train multidisciplinary workers by creating new curricula and degrees in
fields such as UQ, bio-engineering, and HPC, and will foster collaboration between
federal laboratories and universities. Graduate fellowships and REU programs on
DDDAS are needed. In this way the field will create multidisciplinary researchers who
will be indispensable in government, industry, and academia.
2. Algorithms, Uncertainty Quantification, Multiscale Modeling
& Data Assimilation
Algorithms for the integration of measurements and models, towards predictions and
feedback mechanisms, are a key component of DDDAS technologies. Essential
characteristics of DDDAS environments are the dynamic nature of the data flow, the
large scale and complexity of the applications, and the analysis and potential feedback
mechanisms. The primary challenges are the development of integrated DDDAS
systems and closing the loop from measurements to feedback mechanisms and
decision-making. We will now examine these in the context of three major areas: data
assimilation, uncertainty quantification, and multiscale modeling.
Findings & Recommendations
1. Disruptive technological and methodological advances in the last decade have
produced an opportunity to integrate observation, simulation and actuation in
ways that can transform all areas where information systems impact human
activity. Such integration will transform many domains including critical
infrastructure, defense and homeland security, and mitigation of natural and
anthropogenic hazards.
2. Many challenges in cyberinfrastructure (computing, networks and software), large
and streaming data, error and uncertainty, sensor networks, and data fusion and
visualization have to be overcome.
3. Surmounting these challenges needs multi-disciplinary teams and multi-agency
sponsorship at stable and adequate levels to sustain extended and extensive inquiry
over the next decade.
2.1 Dynamic Data Assimilation
In data assimilation, ensemble Kalman/particle filters and variational methods have
found their way into operational weather prediction codes and their use has been
adopted in other application areas, such as hydrology, chemical transport and
dispersion, and discrete-event-simulation wildfire models. Furthermore, research
activity in filtering methods for non-Gaussian dynamical systems has intensified
significantly in the last decade, though this remains an open field of research.
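As an illustration, the analysis step of a stochastic ensemble Kalman filter can be sketched in a few lines; the function and variable names below are our own, and the scalar test problem is purely illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, R, rng):
    """One stochastic-EnKF analysis step.
    ensemble: (n_state, n_ens) forecast ensemble
    obs: (n_obs,) observation vector
    H: (n_obs, n_state) linear observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    n_state, n_ens = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = X @ X.T / (n_ens - 1)                             # sample covariance
    S = H @ P @ H.T + R                                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
    # perturb the observations so the analysis ensemble has the right spread
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_ens).T
    return ensemble + K @ (obs_pert - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 2.0, size=(1, 200))     # scalar state, 200 members
H = np.array([[1.0]])
R = np.array([[1.0]])
analysis = enkf_analysis(ens, np.array([3.0]), H, R, rng)
# the analysis mean is pulled from the prior mean (~5) toward the observation (3)
```

The same update, with the sample covariance replaced by localized or square-root variants, is the core of the operational codes mentioned above.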
2.2 Large Scale Modeling
In modeling and simulation, parallel solvers for partial differential equations have
allowed rapid solution of problems with billions of unknowns. This enables us to
consider applying real-time DDDAS approaches to very complex phenomena that
involve multiple spatial and temporal scales. Furthermore, libraries like Deal.II, Trilinos,
and PETSc provide frameworks for the discretization and scalable solution of new
applications.
Similar advances have been realized in software tools for optimization solvers, which are
a critical component of DDDAS technologies: from denoising data to solving estimation
problems, different flavors of optimization problems must be solved (least squares,
mixed-integer programs, or nonlinear programs). Open-source tools like DAKOTA,
APPSPACK, and IPOPT have enabled the solution of very complex problems.
Advances in algorithms for large-scale inverse and parameter estimation problems will
enable DDDAS technologies at unprecedented scales. Although the theory and basic
algorithms for inverse problems constitute a very mature field, the emphasis has been
on theory and numerics for small-scale problems, with very little work on large-scale
parallel algorithms with optimal convergence properties. In the last decade, however,
significant breakthroughs in adjoint/Hessian-based, matrix-free large-scale Newton
methods, regularization methods, adaptive methods, and multigrid preconditioners for
compact operators have enabled the solution of large-scale inverse problems with
millions of unknown parameters using thousands of computer cores. It remains to be
seen whether these methods can be extended to 1,000 to 1,000,000 times the size of
current problems.
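A minimal sketch of the matrix-free flavor of such solvers, assuming a Tikhonov-regularized linear inverse problem solved by conjugate gradients on the normal equations; the toy deconvolution problem and all names are illustrative, not drawn from any of the libraries named above:

```python
import numpy as np

def cg_normal_equations(apply_A, apply_At, b, alpha, n, iters=50):
    """Matrix-free CG for the Tikhonov-regularized normal equations
    (A^T A + alpha I) m = A^T b, touching A only through mat-vecs,
    as large-scale inverse solvers do."""
    rhs = apply_At(b)
    m = np.zeros(n)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = apply_At(apply_A(p)) + alpha * p
        a = rs / (p @ Ap)
        m += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-10:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

# toy deconvolution: the forward operator A is a smoothing applied implicitly
n = 100
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
apply_A = lambda x: np.convolve(x, kernel, mode="same")
apply_At = apply_A  # symmetric kernel => A^T = A
m_true = np.sin(np.linspace(0, 2 * np.pi, n))
data = apply_A(m_true) + 0.01 * np.random.default_rng(1).normal(size=n)
m_rec = cg_normal_equations(apply_A, apply_At, data, alpha=1e-3, n=n)
```

The adjoint/Hessian-based Newton methods mentioned above follow the same pattern at scale: the operator is never formed, only applied.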
2.3 Uncertainty Quantification (UQ) and Multiscale Models
Most, if not all, existing research efforts on UQ and multi-scale modeling are on the
models and algorithm design, achieving significant progress. Yet, the availability of data
presents new challenges and opportunities. This is mostly due to the fact that sensors
and hardware for data acquisition are becoming increasingly cost effective.
Consequently, the size of data is exploding in many fields, and in many cases real-time
data are abundantly available. This presents a unique opportunity to conduct
data-driven UQ and multi-scale analysis: real-time analysis integrating computation and
sensors, efficient decision-making and optimization utilizing the growth of networking
capability, and much more.
In addition to the existing difficulties in UQ and multi-scale modeling, the incorporation
of data introduces new challenges. For example, the majority of real-time data is
non-Gaussian, and often multi-modal. In this area, traditional stochastic analysis and UQ
are severely lacking. The ability to handle and process large amounts of
high-dimensional data is also insufficient. Additionally, there are extreme data, not
necessarily of small/rare probability, that are difficult to analyze and to predict in the
current framework. These unique difficulties can intertwine with the existing difficulties
in UQ and multi-scale modeling and significantly increase the research challenges.
However, it must be recognized that the presence of data also presents a unique
opportunity to address the existing difficulties in UQ and multi-scale modeling. Most
notably, one of the major goals of UQ and multi-scale modeling is to produce high
fidelity predictions of complex systems. While these predictions are of a modeling and
simulation nature, observational data, though often corrupted by noise, are also
(partially) faithful reflections of the systems under study. It is therefore natural to
combine the two kinds of reflections, simulation-based and measurement-based, to
achieve more reliable and accurate predictions of the systems.
In the broader context of UQ, one of the persistent challenges is the issue of long-term
integration: stochastic simulations over long time horizons may produce results with
large variations that require finer resolution and yield larger error bounds. Though none
of the existing techniques addresses the issue in a general manner, the DDDAS concept
of augmenting the model with on-line data injected into targeted aspects of its phase
space can be applied to reduce the solution uncertainty. Other notable
challenges include effective modeling of epistemic and aleatory uncertainties,
particularly epistemic uncertainty where few studies exist.
In multi-scale modeling, a major challenge is to determine and validate models at
different scales and their interfaces. Since most, if not all, multi-scale models are
problem specific, it is crucial to utilize observational data to effectively quantify the
validity of the models and to conduct model selection. It must be recognized that data
may arrive from different sources at different scales. Thus successful analysis and
integration of such data into the modeling and decision-making process is crucial.
The main and unique challenges of DDDAS research hinge on the real-time setting.
These include uncertainty fusion of simulation and observational data in dynamic
systems, the design of low-dimensional and/or reduced-order models for online
computing, and decision-making and model selection under dynamic uncertainty.
To address these challenges, we need to take advantage of the existing tools in UQ and
multi-scale modeling. Notable tools include the generalized polynomial chaos
methodology for UQ; Bayesian analysis for statistical inference and parameter
estimation (particularly the development of efficient sampling methods, as standard
Markov chain Monte Carlo (MCMC) does not work in real time); filtering methods
(ensemble Kalman filter, particle filter, etc.) for data assimilation; equation-free,
multi-scale finite element, and scale-bridging methods for multi-scale modeling; and
sensitivity analysis for reducing the complexity of stochastic systems. These methods
have been widely used; their capabilities need to be extended to the DDDAS domain,
especially in the context of incorporating real-time data, and their properties need to be
thoroughly examined.
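As a concrete (and deliberately offline) illustration of the Bayesian tooling mentioned above, a random-walk Metropolis sampler for a scalar parameter might look as follows; the toy sensor-calibration problem and all names are hypothetical:

```python
import numpy as np

def metropolis(log_post, theta0, n_samples, step, rng):
    """Random-walk Metropolis sampler. Note the serial accept/reject loop:
    this is exactly the structure that makes standard MCMC too slow for
    real-time DDDAS use and motivates faster variants."""
    theta = theta0
    lp = log_post(theta)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + step * rng.normal()        # propose a move
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with MH probability
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# toy problem: infer the mean of noisy sensor readings,
# with an N(0, 10) prior and an N(theta, 1) likelihood
rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=50)
log_post = lambda t: -0.5 * t**2 / 10.0 - 0.5 * np.sum((data - t) ** 2)
chain = metropolis(log_post, 0.0, 5000, 0.5, rng)
posterior_mean = chain[1000:].mean()   # discard burn-in
```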
Equally important is the need to develop new tools for UQ and multi-scale modeling of
DDDAS: for example, methods for adaptive control of complex stochastic and
multi-scale systems, efficient means to predict rare events and maximize model fidelity,
methods for resource allocation in dynamic settings, and tools to reduce uncertainty
where possible and mitigate its impact.
Major advances have taken place in numerical methods for large-scale stochastic
differential equations. In particular, stochastic Galerkin and collocation methods have
been studied and applied to forward uncertainty propagation, uncertainty estimation
for inverse and control problems, adaptive methods for non-Gaussian random fields,
and data assimilation methods. Once again, it is unknown whether these methods scale
to the much larger problems of interest in the DDDAS context.
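The idea behind stochastic collocation for forward uncertainty propagation can be sketched in one dimension using Gauss-Hermite quadrature; the helper below is illustrative only:

```python
import numpy as np

def collocate_mean_var(model, n_nodes):
    """Stochastic collocation in 1-D: propagate a Gaussian input xi ~ N(0,1)
    through a model by evaluating it only at Gauss-Hermite nodes, then
    recover statistics from the quadrature weights."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)  # physicists' Hermite rule
    nodes = np.sqrt(2.0) * x                         # change of variables to N(0,1)
    weights = w / np.sqrt(np.pi)                     # weights now sum to 1
    vals = model(nodes)
    mean = np.sum(weights * vals)
    var = np.sum(weights * (vals - mean) ** 2)
    return mean, var

# test model: exponential of a Gaussian, whose lognormal statistics are known
mean, var = collocate_mean_var(np.exp, n_nodes=20)
# exact values: E[e^xi] = e^{1/2} ~ 1.6487, Var[e^xi] = e^2 - e ~ 4.6708
```

The same node-and-weight structure, built on sparse grids, underlies the high-dimensional collocation methods referred to above.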
Furthermore, advances in large-scale computational statistics and high-dimensional
signal analysis are enabling complex uncertainty estimation problems to be tackled.
New algorithms for large-scale kernel density estimation, reduced-order models and
model reduction, interpolation on high-dimensional manifolds, multiscale interpolation,
and manifold discovery for large sample sizes are examples of major breakthroughs in
the last decade.
Developments in computing hardware have at the same time enabled the solution of
complex problems in real time and created opportunities and needs for novel
algorithms and software tools. Such developments include accelerators like GP-GPUs,
embedded chips, cloud computing, and Petaflop-scale HPC platforms.
2.4 Key challenges
 Integrating the loop from measurements to predictions and feedback for highly
complex applications with incomplete and possibly low-quality data.
 Dealing with new architectures (embedded, cloud, manycore, and HPC
platforms); new distributed/parallel, fault-tolerant algorithms are needed.
 Better resource-aware/resource-adaptive algorithms.
 Making sense of terabytes of data; such data often do not come from
well-planned experimental protocols.
 More efficient Bayesian inversion methods for uncertainty estimation, to
perform inversion on the larger data sets needed.
 Inverse methods for streaming data, with robust algorithms for incomplete,
highly corrupted data.
 High-dimensional sampling on constrained manifolds, which is an open area of
research.
 Many existing HPC and manycore algorithms and libraries are not scalable; this
needs to be addressed.
 Algorithms for model analysis and selection (model error, model verification
and validation, model reduction for highly nonlinear forward problems,
data-driven models) still need more research to create an
application-independent formulation.
 Test beds and DDDAS software tools for workflows must be available in
open-source form.
 Novel assimilation methods for categorical/uncertain data (graphical models,
SVMs) must be developed.
 Theoretical analysis of DDDAS for bilinear and hybrid systems.
 Discrete, combinatorial algorithms must be scaled up to be useful on
Exaflop-scale computers before such computers exist.
For stochastic systems and multi-scale systems, it is crucial to:
 determine the quantity of interest (QoI),
 develop tools and metrics to validate models at different scales and the
aggregation procedures to link them under uncertainty,
 conduct model selection at different scales,
 quantify the relative importance of models at different scales, and
 calibrate the “actions” and interfaces between scales under uncertainty.
In particular, for problems such as social network modeling where physical models (e.g.,
those based on conservation laws) do not exist, these issues become even more
challenging.
Decision-making and resource allocation methods with dynamic data on different scales
will make significant impact in DDDAS.
Findings & Recommendations
1. Essential characteristics of DDDAS environments are the dynamic nature of the
data flow, large-scale and complexity of the applications, and the analysis and
potential feedback mechanisms.
2. Ensemble Kalman/particle filters and filters for non-Gaussian dynamical systems;
large-scale parallel solution methods and tools for deterministic and stochastic
PDEs, like those encapsulated in the PETSc library, and stochastic
Galerkin/collocation methods; new algorithms for large-scale inverse and
parameter estimation problems; and advances in large-scale computational
statistics and high-dimensional signal analysis are enabling application of
DDDAS-type ideas to many realistic large-scale systems.
3. Key challenges remain in integrating the loop from measurements to predictions
and feedback for highly complex systems, dealing with large, often unstructured
and streaming data and complex new computer architectures, developing
resource aware and resource adaptive methodology and application independent
algorithms for model analysis and selection.
4. Test beds (hardware and software) are needed for advancing methodology and
theory research.
3. Building an Infrastructure for DDDAS
3.1 Adaptive & Intelligent Infrastructure
DDDAS connects real-time measurement devices and special purpose data processing
systems with distributed applications executing on a range of resources from mobile
devices operating in ad-hoc networks to high end platforms connected to national and
international high-speed networks. Supporting infrastructure for these environments
must go beyond static computational grids and include instrumentation systems
(integrated and autonomous components that ingest data and drive adaptation at all
levels). Such instrumentation components can be sensors, actuators, resource
providers, or decision makers. Data can be streamed in real time or drawn from archival
storage, and can be filtered, fused, or reduced to metadata. Adaptation can be applied
at all levels, such as choosing resources or mediating between data sources.
Infrastructures for DDDAS need to support complex, intelligent applications using new
programming abstractions and environments able to ingest and react to dynamic data.
Different infrastructures will be needed for different application types. National,
persistent DDDAS infrastructure connecting new Petascale and beyond compute
resources via 100+ Gbps networks to special purpose data devices could support a range
of large-scale applications. Easily deployable, reliable systems must operate over
ad-hoc networks in the field to support medical, military, and other applications
operating in special conditions. The majority of researchers operating in university,
national, or industrial laboratories will require DDDAS systems that connect external
data sources to institutional and distributed resources.
General infrastructure for DDDAS is thus seen as focusing on extensible,
application-agnostic capabilities.
3.2 Existing Infrastructure
In thinking about a future DDDAS infrastructure it is important first to review the
existing landscape. Broadly speaking, DDDAS applications can be seen as operating in
the following environments:

High Performance Computing Resources: The NSF TeraGrid and planned XD and
Blue Waters facilities are prime targets for DDDAS. Such environments are
targeted at high-end users with the highest levels of concurrency. Usage of these
resources is typically highly contested by scientists whose research agendas are
dependent on CPU cycles. Traditionally, policy restrictions have hindered the
broad and regular use of shared HPC environments for DDDAS applications (e.g.,
through the use of static batch queues), although supercomputing centers are
now beginning to embrace the new demands of data-intensive science.
 High Throughput Computing Resources: the Open Science Grid.
 Research Testbeds: Experimental environments to support DDDAS computing are
available at different levels of production use. The NSF-sponsored Global
Environment for Network Innovations (GENI) provides exploratory environments
for research and innovation in emerging global networks, and the EAVIV Testbed
project provides a dynamically configurable, high-speed network testbed
connected end-to-end with TeraGrid resources. More recently, the NSF
FutureGrid is being deployed to allow researchers to tackle complex research
challenges in computer science related to the use and security of grids and
clouds.
 Cloud Computing: Cloud computing is an emerging infrastructure that builds
upon recent advances in virtualization and data centers at scale to provide an
on-demand capability. There are both commercial clouds (EC2, Azure, and IBM
Deep-Cloud) and academic clouds (DOE Science-Cloud and NSF FutureGrid) that
are viable infrastructure for DDDAS applications. They provide different models
for data transfer, localization, and data affinity. An open question is how the
different data capabilities needed for DDDAS interact in different on-demand
cloud computing environments.
3.3 Dynamic Resource Management
Infrastructure will need to address myriad issues arising from diverse, dynamic data
from different sources. Integrating sensors into the DDDAS infrastructure will
necessitate rethinking network architectures to support new protocols for push-based
data, and two-way communication to configure sensors. Data in the DDDAS
infrastructure will be stored and accessed in new hierarchies based on locality, filtering,
quality control, and other features.
Underlying hardware needs to be elastic and able to respond to dynamic requirements.
Persistent national infrastructure is envisioned, as well as infrastructure that is portable
and that can be quickly deployed in the field to support medical, military, and other
application scenarios. End user connectivity must be addressed, connecting national
infrastructure to researchers in academic laboratories as well as to mobile users and
devices in the field. Infrastructure itself thus needs to be dynamically configurable. A
fundamental need for end resources supporting DDDAS, whether storage, compute,
network, or data collecting, is that they support dynamic provisioning which is flexible,
adaptive, and fine grained. This issue involves both technical developments (e.g., the
ION dynamic network protocols) along with appropriate policies to allow dynamic use of
resources. Production resources focused on CPU utilization have the technologies to
provide dynamic use, but their usage models do not typically allow for dynamic usage
policies.
Once dynamic behavior is provided at all levels of the infrastructure, the question
becomes how resources can be provisioned and used by applications and middleware. A
common definition is needed to describe the quality of service (QoS) provided by the
resource. This description needs to include the capabilities provided by the resource
(e.g., bandwidth, memory, and available storage) along with usage characteristics (e.g.,
cost, security, reliability, and performance). Requirements for DDDAS systems overlap
with known needs for many complex end-to-end scientific applications. Additional
fundamental requirements are introduced to support dynamic data scenarios, such as
the ability to handle events, and the integration of temporal and spatial awareness into
the system at all levels necessary to support decision-making. Systems need to react
swiftly and reliably to deal with faults and failure to provide a guaranteed quality of
service.
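A common QoS description of the kind called for above might, as a purely hypothetical sketch, bundle capabilities and usage characteristics together with a crude matchmaking test (all class and field names are our own, not from any existing standard):

```python
from dataclasses import dataclass

@dataclass
class QoSDescription:
    """Hypothetical QoS descriptor combining the fields named in the text:
    resource capabilities plus usage characteristics."""
    # capabilities provided by the resource
    bandwidth_gbps: float
    memory_gb: float
    storage_tb: float
    # usage characteristics
    cost_per_hour: float
    reliability: float          # fraction of requests expected to succeed
    latency_ms: float

    def satisfies(self, required: "QoSDescription") -> bool:
        """Crude matchmaking: capabilities at least as large as required,
        cost and latency no worse, reliability no lower. Security matching
        is elided here; a real description would also cover it."""
        return (self.bandwidth_gbps >= required.bandwidth_gbps
                and self.memory_gb >= required.memory_gb
                and self.storage_tb >= required.storage_tb
                and self.cost_per_hour <= required.cost_per_hour
                and self.reliability >= required.reliability
                and self.latency_ms <= required.latency_ms)

offered = QoSDescription(100.0, 256.0, 10.0, 4.0, 0.999, 5.0)
needed = QoSDescription(10.0, 64.0, 1.0, 10.0, 0.99, 20.0)
ok = offered.satisfies(needed)   # True for this pair
```

Negotiation between applications and infrastructure would then amount to exchanging and comparing such descriptions.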
Autonomic capabilities are important at all levels to respond to the content of dynamic
data or changing environments. The need for autonomic capabilities arises at many
levels of DDDAS. For example, wherever dynamic execution and adaptivity is required –
models and algorithms, the software and systems services, infrastructure capabilities –
autonomic capabilities (such as behaviors based upon planning and policy) provide an
effective approach to manage the adaptations and mechanics of dynamical behavior. In
many DDDAS scenarios, application workflows need to be dynamically composed and
enacted based on real-time data and changing objectives. An example is instrumented
hurricane modeling, which can achieve efficient and robust control and management of
diverse models by dynamically completing the symbiotic feedback loop between
measured data and a set of computational models.
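A minimal sketch of such a symbiotic feedback loop, with an exponential-smoothing stand-in for assimilation and a toy sensor-steering rule (all names, values, and thresholds are hypothetical):

```python
def dddas_loop(measurements, gain=0.5):
    """Close the DDDAS loop in miniature: each new measurement updates the
    model state, and the updated state decides where to measure next."""
    state = 0.0
    requests = []
    for y in measurements:
        state += gain * (y - state)   # 'assimilate' the measurement
        # feedback to the sensors: request denser sampling while the
        # post-update residual is still large
        requests.append("dense" if abs(y - state) > 1.0 else "sparse")
    return state, requests

state, requests = dddas_loop([4.0, 4.2, 3.9, 4.1])
# the loop asks for dense sampling while the model is far from the data,
# then relaxes to sparse sampling as the state converges
```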
3.4 Research Needs
Research is needed to provide persistent and fully featured infrastructure; to integrate
frameworks, programming abstractions, and deployment methods into an overall
architecture; to develop common APIs and schemas around which powerful tools can
be provided; to provide methods for decomposing applications to take advantage of
emerging environments such as clouds or GP-GPUs in an integrated infrastructure; and
to deploy persistent DDDAS infrastructure for research and production use.
Specific research challenges include:
Architecture
 Application scenarios, characteristics, and canonical problems to drive
infrastructure research and development.
 Network architectures to support new protocols for sensor data (push, pull, and
subscribe).
 Architecture of data hierarchy for dynamic data processing and access.
 Integration of location and time awareness.
Tools
 Dynamic workflow tools building on the above capabilities (unique demands:
run-time environments with changing services, event-controlled workflows,
discovery, etc.).
 Visualization, analysis, and steering of large and dynamic data (e.g., haptics) for
closed-loop scenarios, real-time data, and changing characteristics.
 Security issues for
o sensors and autonomy and
o generally for new software.
 Execution environments supporting collaboration and decision making (social
networking), crowd-sourcing, and citizen engineering.
Integration and Interoperability
 How to define, carry, and operate on provenance information.
 Generalized interoperability, collaboration, and negotiation in decentralized
decision-making.
 Generalization of allocation across different resources (networks, data, etc.)
combined with new methodologies of allocation.
 Negotiation mechanisms between applications and infrastructure.
 Descriptions of QoS (including cost, availability, security, performance, and
reliability).
 More effective integration of computable semantics throughout the
infrastructure (e.g., the tradeoff between simplicity and expressiveness).
 Policies/cost models for dynamic resource allocation and contention (e.g., for
different applications).
 Integration with cloud computing to take advantage of its business models,
scalability, and virtualization, and mutual collaboration between cloud
computing and DDDAS.
Findings and Recommendations
1. Infrastructures for DDDAS need to support complex, intelligent
applications using new programming abstractions and environments able
to ingest and react to dynamic data.
2. Components of the infrastructure include sensors, actuators, resource
providers, and decision makers. Data flows among them may be streamed
in real time, historical, filtered, fused, or metadata.
3. Research challenges include architectures to support the complex and
adaptive applications, data, and networks; tools to manage the workflows
and execution environments; and integration and interoperability issues.
4. Systems Software
4.1 DDDAS & Systems Software
In the context of DDDAS, systems software involves specification languages,
programming abstractions and environments, software platforms, and execution
environments, including runtimes that stitch together dynamically reconfigurable
applications. Given the vast diversity of DDDAS application areas, platforms of interest
range from distributed and parallel systems to mobile and/or energy-efficient platforms
that assimilate sensor inputs.
Core DDDAS components by definition have evolved from executing on static platforms
with fixed inputs to executing on heterogeneous platforms with widely varying
capabilities fed by real-time sensing. Algorithms and platforms must evolve
symbiotically to effectively utilize each other’s capabilities.
Algorithmically, we need to develop along three axes in a complementary manner:
 Specification languages that can be used to define the performance
characteristics of algorithms.
 Methodologies for algorithms to adapt to changing resource availability or
heterogeneity.
 Methodologies for algorithms to change behavior predictably, based on data and
control inputs.
Similarly, advances are needed in execution platforms to support dynamically adapting
applications. Platform capabilities and interfaces need to be extended to include:
 Interfaces to define and specify the performance characteristics of the execution
platform.
 The ability to reallocate resources in response to the changing needs of
algorithms.
 DDDAS algorithms stress dynamicity; symbiotically, DDDAS platforms should
expose interfaces that enable applications to sense and respond to resource
availability.
 Interfaces that expose control inputs and monitoring of DDDAS application
behavior to ensure their observability and controllability.
4.2 Programming Environment
A programming environment consists of programming abstractions, interfaces that
support co-development of components, and runtime systems that handle the
non-functional requirements of DDDAS applications. A core challenge in dynamically
adapting algorithmic components lies in developing one or more programming
abstractions that simplify the process of decomposing and reasoning about such
compositions, particularly as execution platforms are rapidly evolving from homogeneous
collections of processors to include GP-GPUs, FPGAs, and application specific
accelerators. Furthermore, DDDAS applications are frequently composed from
components that model different domain physics with multiple interface boundaries.
Software abstractions are needed that aid the coupling between components of a
DDDAS application without tying the interfaces to programming languages.
Over the last few years, the rapid proliferation of heterogeneous architectures has
presented new challenges in developing efficient algorithmic components without tying
them to an underlying platform. Resolving this issue requires new retargeting compilers
that can
generate efficient code from a high level mathematical or algorithmic description of the
problem. Such compilers will be instrumental in enabling just in time compilation based
on specific deployment decisions.
Non-functional requirements such as fault tolerance and adaptivity continue to be
challenges. Given the range of DDDAS applications, from soft real-time to best effort, a
range of fault-tolerance methods is needed that provides adequate fault tolerance
under resource constraints. DDDAS applications can provide guidelines, based on the
criticality of the computation and data, that guide the methods used to achieve fault
tolerance. Anticipatory fault tolerance should be investigated for time-critical
applications.
4.3 Autonomous Systems & Runtime Applications Support
New runtime system support is needed for program adaptation with the goal of
achieving a desired application-level quality of service (QoS). This work involves
developing methods to determine the delivered quality of service, interfaces to
determine available resources, and runtimes that support program adaptivity.
Adaptivity is needed at multiple levels of granularity: for processes, threads, and suites
of communicating processes.
New data abstractions must be developed that are dynamic, hidden from the
user/programmer, and able to adapt to hardware configurations that can change over
the course of a DDDAS application. Having to rewrite a code to take advantage of a
hardware accelerator like a GP-GPU is unacceptable and currently wastes too much
time, particularly that of recent Ph.D.s and graduate students. Just-in-time compilers
should be intelligent enough to perform such optimizations automatically.
System management tools like computational steering have been used in the past, but
they are quite complicated and require significant resources. Improvements are needed
so that steering can be achieved using smart phones and iPad-like devices, which are
becoming small but very effective computers.
We must use the symbiotic relationship between applications and systems more
effectively. This includes interfaces and knowledge bases that enable applications to
monitor and manage their environment and vice-versa, with or without a human in the
loop. The system needs to be easy enough to use so that alerts can be devised by
non-programmers, i.e., actual DDDAS users.
In fact, what we really need is to develop and use a set of best practices for DDDAS as a
major thrust in future research.
Findings and Recommendations
1. Systems software must evolve to support DDDAS components that need to
execute on heterogeneous platforms with widely varying capabilities fed by
real-time sensing. Algorithms and platforms must evolve symbiotically to
effectively utilize each other’s capabilities.
2. Research challenges in systems software remain in runtime support for
program adaptation, fault tolerance, new retargeting compilers that can
generate efficient code from a high-level mathematical or algorithmic
description of the problem, and the rapid proliferation of heterogeneous
architectures.
5. Summary of Findings and Recommendations
1. Disruptive technological and methodological advances in the last decade have produced an opportunity to integrate observation, simulation, and actuation in ways that can transform all areas where information systems impact human activity. Such integration will transform many domains, including critical infrastructure, defense and homeland security, and mitigation of natural and anthropogenic hazards.
2. Many challenges have to be overcome in computing, networks, and software; large and streaming data; error and uncertainty; sensor networks; and data fusion and visualization.
3. Surmounting these challenges requires multi-disciplinary teams and multi-agency sponsorship at stable and adequate levels to sustain the necessary extended and extensive inquiry over the next decade.
4. Essential characteristics of DDDAS environments are the dynamic nature of the data flow, the large scale and complexity of the applications, and the analysis and potential feedback mechanisms.
5. Ensemble Kalman/particle filters, methods for non-Gaussian dynamical systems, large-scale parallel solution methods and tools for deterministic and stochastic PDEs such as those encapsulated in the PETSc library, stochastic Galerkin/collocation methods, new algorithms for large-scale inverse and parameter-estimation problems, and advances in large-scale computational statistics and high-dimensional signal analysis are enabling the application of DDDAS to many realistic large-scale systems.
6. Key challenges remain in integrating the loop from measurements to predictions and feedback for highly complex systems; dealing with large, often unstructured and streaming data and complex new computer architectures; and developing resource-aware and resource-adaptive methodology and application-independent algorithms for model analysis and selection.
7. Test beds (hardware and software) are needed for advancing methodology and theory research.
8. Infrastructures for DDDAS need to support complex, intelligent applications using new programming abstractions and environments able to ingest and react to dynamic data.
9. Components of the infrastructure include sensors, actuators, resource providers, and decision makers. Data flows among them may be streamed in real time, historical, filtered, fused, or metadata.
10. Research challenges include architectures to support the complex and adaptive applications, data, and networks; tools to manage the workflows and execution environments; and integration and interoperability issues.
11. Systems software must evolve to support DDDAS components that need to execute on heterogeneous platforms with widely varying capabilities fed by real-time sensing. Algorithms and platforms must evolve symbiotically to effectively utilize each other's capabilities.
12. Research challenges in systems software remain in runtime support for program adaptation, fault tolerance, new retargeting compilers that can generate efficient code from a high-level mathematical or algorithmic description of the problem, and the rapid proliferation of heterogeneous architectures.
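As one concrete instance of the ensemble Kalman filtering named among the enabling methods above, the analysis step can be sketched in a few lines of NumPy. The linear observation operator and the perturbed-observation variant shown here are one standard formulation, chosen for brevity; the function and variable names are this sketch's own:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_cov, rng):
    """One perturbed-observation ensemble Kalman analysis step.

    ensemble : (n_state, n_members) forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) observed values
    obs_cov  : (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_members - 1)          # sample forecast covariance
    S = H @ P @ H.T + obs_cov              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    # Perturb the observations so the analysis ensemble keeps a
    # statistically consistent spread.
    perturbed = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), obs_cov, size=n_members).T
    return ensemble + K @ (perturbed - H @ ensemble)
```

In a DDDAS setting, the forecast ensemble comes from the simulation and `y` from the streaming sensors, closing the measurement-to-prediction loop each time new data arrive.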
Appendix A Applications
A.1 Dynamic Data-Driven Computational Infrastructure for Real-Time Patient-Specific Laser Treatment of Cancer
The objective of this research project was to develop a dynamic data-driven
infrastructure to seamlessly integrate high-performance computing with imaging
feedback for optimal planning, monitoring, and control of laser therapy for the treatment
of prostate cancer. The project involved the development of computational models of
bioheat transfer and cell damage for the prediction of tumor ablation during treatment as
well as the set-up of a real-time feedback control system based on the integration of
observations from magnetic resonance imaging (MRI) and MRTI (T for temperature)
devices at M.D. Anderson Cancer Center in Houston and finite element simulations performed at The University of Texas at Austin, all connected by a high-bandwidth network, as shown in Figure 1. The laser source for ablation of the cancerous region was thus controlled by a model-based predictive control system with near real-time patient-specific calibration using MRTI imaging data.
The research outcome successfully demonstrated a viable proof-of-concept for a dynamic data-driven application but still fell short of being fully operational, as the simulations and optimization loops were all treated deterministically and did not include the various sources of uncertainties in the data and in the computer models. Including the uncertainty quantification within the process will certainly enhance the usefulness of the results for critical decision-making.
Figure 1: Schematic of the dynamic data-driven infrastructure for real-time patient-specific laser treatment of prostate cancer.
For illustration, Figure 2 shows a three-dimensional computer representation of a canine prostate in the pretreatment stage. The
anatomical MRI data has been color-rendered to illustrate the geometries of the different
anatomic organs. In the treatment stage, a stainless steel stylet (applicator) is used to
insert the laser catheter, which consists of a diffusing-tip silica fiber within a water-cooled
catheter. Figure 3 shows images describing the delivery of energy during therapy. In
particular, the figure illustrates two important features of the computer modeling of laser
therapy. The top picture of the figure displays the temperature field obtained from
thermal imaging data at a given time of the therapy. The bottom pictures of the figure
show the predicted temperature fields in a region of a
canine prostate containing a laser supplying energy through a catheter using an
uncalibrated model of homogeneous tissue properties (left) and a calibrated model that
takes into account heterogeneous heat-transfer properties of the tissue (right). Excellent
quantitative agreement is attained between the temperature fields predicted by the
calibrated model and those detected by MRTI. This example shows that calibration of
the mathematical model for bio-heat transfer, followed by a validation of the model,
represent key features for predictive modeling of the thermal environment in the tissue
and should be essential components of a dynamic data-driven infrastructure.
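The calibration step described above can be illustrated with a deliberately simplified, lumped Pennes-style bioheat balance. The model, its coefficients, and the brute-force parameter scan are illustrative assumptions for this sketch, not the project's actual finite element formulation:

```python
import numpy as np

def bioheat_lumped(T0, q_laser, perfusion, T_art, c, dt, n_steps):
    """Lumped Pennes-style bioheat model: laser heating balanced against
    perfusion-driven cooling toward arterial temperature. Coefficients
    are illustrative, not physiological values."""
    T = np.empty(n_steps + 1)
    T[0] = T0
    for k in range(n_steps):
        dT = (q_laser - perfusion * (T[k] - T_art)) / c
        T[k + 1] = T[k] + dt * dT
    return T

def calibrate_perfusion(observed, candidates, **model_kw):
    """Pick the perfusion value whose predicted temperature history best
    matches the observed (MRTI-like) data in the least-squares sense."""
    errors = [np.sum((bioheat_lumped(perfusion=w, **model_kw) - observed) ** 2)
              for w in candidates]
    return candidates[int(np.argmin(errors))]
```

The real system solves a spatially resolved PDE and calibrates heterogeneous tissue properties, but the pattern is the same: re-fit model parameters against each new batch of imaging data, then predict forward with the calibrated model.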
Figure 2: Three-dimensional computer representation of a canine prostate in the pretreatment stage.
Figure 3: Temperature predictions using either uncalibrated homogeneous or calibrated heterogeneous model parameters of the tissue properties.