Experiences with Data Distribution Management in Large-Scale Federations
Bill Helfinstine
Deborah Wilbert
Mark Torpey
Wayne Civinskas
Lockheed Martin Information Systems
Advanced Simulation Center
164 Middlesex Turnpike
Burlington, MA 01803
781-505-9500
bhelf@lads.is.lmco.com, dwilbert@lads.is.lmco.com, mtorpey@lads.is.lmco.com, wayne.j.civinskas@lmco.com
Keywords:
HLA, RTI, DDM, scalability, Joint Experimentation
ABSTRACT: The JSAF Federation is one of several simulation tools at US Joint Forces Command used to support
large-scale simulation exercises for Joint Experimentation. Given both the large-scale nature of joint operations (i.e.,
theater-level conflict) and the forward-looking aspect of the Joint Experimentation process, these experiments
inherently involve the simulation of many thousands of weapons systems platforms. Although some experiments are
performed at an aggregated level of representation, other experiments require representation at the weapons system
platform level. Two recent experiments, J9901 and Attack Operations 2000 (AO'00), required the simulation of 20,000
to 100,000 objects using over 100 federates.
The HLA Data Distribution Management (DDM) service provides an abstract, application-driven data filtering
capability. The JSAF Federation has taken advantage of this capability to provide a highly scalable, widely distributed
simulation system. This paper discusses why DDM is important to scalability, how the JSAF Federation uses its
filtering capabilities in a data-driven fashion, and the RTI features and network hardware capabilities that were used
to build this system.
1. Introduction
Joint Experimentation is an iterative process of
collecting, developing and exploring concepts to
identify and recommend the best value-added solutions
for changes in the doctrine, organization, training,
material, leadership and people required to achieve
significant advances in future joint operational
capabilities. The US Joint Forces Command supports large-scale simulation exercises for Joint Experimentation. This kind of exercise is invaluable
for conducting theatre-level training in joint operations
as well as for providing experimental data to evaluate
the potential effectiveness of future weapons systems
and strategic approaches.
The JSAF federation, which is one of several
simulation tools used at US Joint Forces Command,
unites a diverse set of federates to conduct large-scale
distributed simulation at the weapons system platform
level. Because of the joint nature of the exercises,
different participants bring different federates which
have been developed to suit specific needs.
Joint experiments tend to be conducted among
geographically dispersed sites, as each participant
brings local resources to bear. Thus, WANs, and their
associated costs and limitations, are usually
incorporated into the communications infrastructure of
joint experiments.
Theatre-level exercises operating at the platform level
require the simulation of many entities. For example,
the Joint Operations 1999–1 (J9901) exercise and the
Attack Operations 2000 (AO00) exercise required the
simulation of 20,000 and 100,000 platforms
respectively [1]. These exercises were designed to
investigate human performance in the scenario of
Critical Mobile Target detection, given differing levels
of computer analysis support, human-machine
interfaces, communications capabilities, and attack
capabilities.
1.1 Joint Experimentation and its data requirements
The exact amount of data transferred in a simulation is
contingent upon the design of the scenario used by the
federation. In order to construct a scenario that demonstrates joint operations in an interesting fashion, the geographic area in play must be quite large. A corollary is that the number of vehicles must be sufficient to populate such a large expanse of terrain, particularly when long-range radars are played. Without sufficient background clutter to hide in, a Critical Mobile Target scenario cannot be realistically simulated.
Furthermore, since it is important to find individual
vehicles in such a scenario, it has become a design
requirement that these simulations happen at the
platform level of representation. This leads to the
conclusion that joint exercises must publish very large
volumes of dynamic data, either in the form of
changing attributes or ongoing interactions.
Based on these data requirements, as well as the fact
that we run with human operators as part of the
experiment, our federation designs have resulted in a
very DIS-like design [1]. We do not use the time
management services of the RTI, since we must run in
real-time in order to include the human operators. We
do not use the reliable messaging services of the RTI,
since we have such a large quantity of data to send,
and the overhead of reliable messaging is too high to
handle the load. And most importantly, each federate
must filter out as much data as possible in order to
keep from becoming overloaded.
Joint exercises also require the use of many federates.
While the JSAF federation has succeeded in populating
the battlespace with the needed entities using a limited
number of federates, many additional federates are
needed to directly support the exercise participants.
Even more importantly, joint exercise participants
operate at a high level, and the federates they use
provide a theatre-level display of the battlespace. For
example, the display may represent the entire sum of
situational awareness for a given force. Thus, these
federates may need to know the state of all (or at least
a significant fraction of) the entities in the federation
execution.
This aspect of joint experiments turns out to be a major
constraining factor. These display federates need to
respond in a timely fashion to commands (e.g., pan and
zoom) from users. It is very challenging for the
federates to remain responsive at the theatre-level,
where the state of thousands of entities may need to be
displayed simultaneously. Thus these federates need to
be spared unnecessary processing whenever possible,
and at the same time be able to change their interests
quickly and efficiently.
Without any filtering, simply processing all the
incoming state information of a large theatre-level
exercise would completely occupy a federate. Thus, it
is incumbent upon distributed systems meeting the
need of joint experiments to reduce the total amount of
irrelevant information which must be processed by
federates.
1.2 Declaration Management
One mechanism provided by the RTI, which on the
surface appears promising in the context of joint
experimentation, is static subscription-based filtering
offered by the Declaration Management service. If a
division of responsibility between federates can be
reflected in the FOM, Declaration Management can be
used to form disjoint publications and subscriptions. In
this case, the flow of simulation data between disjoint
federates could be greatly reduced or even completely
suppressed.
One might think that at the theatre level, the
responsibilities of the federates would be disjoint in
domain (e.g., air vs. sea), in function (i.e., logistics vs.
attrition) or in other dimensions. Perhaps some
federates in a joint exercise would be so loosely
coupled that there is little or no need to exchange
simulation data. However, such a scenario does not
explore the joint qualities of the scenario in a
meaningful way. Thus this solution does not fit well
with the problem domain. Furthermore, in practice, the
federates that most need filtering (theatre-level
displays) must subscribe to the most simulation classes
for potential display. While DM can be used to filter
out some traffic, it does not offer an adequate solution
for the system bottleneck.
1.3 Data Distribution Management
The other service offered by the RTI for relevance
filtering of simulation data is the Data Distribution
Management service. Because this service provides
dynamic value-based filtering, rather than static
hierarchy-based filtering, it is well suited for federates
which may need to display any entity, but at a given
time will likely only be viewing a subset of them.
Although the exact nature of DDM has changed during
the development of the HLA specification, the basic
capability has remained the same. DDM allows the
federate to dynamically specify abstract regions of
interest that are overlap-matched against out-of-band
data sent along with the object attributes or interaction
parameters in order to filter the simulation data
presented to the federate.
For example, in J9901 and AO00, theatre-level
displays would subscribe for only those vehicles whose
type and location was of interest to the user. As the
user panned or zoomed the map or selected different
kinds of vehicles to display from a menu (red/blue,
aggregate/actual, tracks on/off, etc), the federate
dynamically updated its subscription. With many
federates of this nature (fifty or more) participating in
the exercise, the interests of the federation as a whole
were updated quite frequently.
1.4 IP Multicast
One technology that has been used quite heavily to
implement best-effort DDM services is IP multicasting
[2] [3]. The basic unit of abstraction of IP multicasting
is known as a multicast group, which is simply an IP
address that falls within a specified range of values.
When a machine wants to receive data sent to a
particular multicast group, it subscribes to that group.
Data is sent to a particular group by a sending
machine, and all machines subscribed to that group
will receive that data. On LAN technologies such as
Ethernet, the network inherently supports multicasting,
and different network interfaces can handle differing
quantities of filtering in hardware. Additionally, wide
area networks are capable of using IP multicast
through the use of additional subscription and routing
protocols that modern routers support, albeit with the
higher latencies inherent in WANs.
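The subscription mechanism described above maps directly onto the standard BSD socket API. The following sketch is our own illustration (not code from any RTI; the group address and port are arbitrary examples) of how a receiver joins a multicast group and how a sender is configured:

```python
# Illustrative sketch: IP multicast join (receiver) and send (sender) setup.
# Group addresses fall in 224.0.0.0/4; receivers join with IP_ADD_MEMBERSHIP,
# and senders simply transmit UDP datagrams addressed to the group.
import socket
import struct

GROUP = "239.1.2.3"  # example group in the administratively scoped range
PORT = 4545          # arbitrary example port

def make_receiver(group: str, port: int) -> socket.socket:
    """Create a UDP socket subscribed to a multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # struct ip_mreq: 4-byte group address + 4-byte local interface address
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender() -> socket.socket:
    """Create a UDP socket for sending to multicast groups."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 keeps datagrams on the local segment, as a LAN-only RTI would;
    # WAN distribution relies on the routing protocols mentioned above.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock
```

The join itself is what routers observe (via IGMP) to build their forwarding state, which is why the per-second subscription rates discussed below become a limiting factor.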
There are a number of different algorithms that have
been used to implement DDM, some of which do a
good deal of computation to reduce network bandwidth
requirements. However, these computations can
become a bottleneck in themselves. For example, it has
been shown in [4] [5] that dynamic configuration of
the connectivity grid between federates to provide
optimal filtering of data is an NP-complete problem.
Joint experimentation, with its large numbers of
entities, federates, and even sites, is particularly
vulnerable to inefficient algorithms, and therefore a
non-polynomial algorithm such as the Multicast
Grouping method is not appropriate for such exercises.
Thus, the approach used to implement DDM in the
RTI is critical for performance of joint federation
executions as a whole. DDM is required to reduce
simulation data processing, which is a known
constraint, but at the same time, the complexity of the
implementation of DDM must not in itself become the
bottleneck.
Luckily, the RTI is free to implement heuristic
approaches to DDM that reduce but do not necessarily
optimize data flow. These may offer much improved
performance profiles. Also, DDM can opt to filter
irrelevant simulation data at many different points: at
the source (if no federate is currently interested), at a
router (if no remote federates require forwarding of the
simulation information), at the network interface (if
DDM has appropriately arranged multicast addresses
for filtering at the receiver) or in the receiving RTI
software itself if necessary.
Past implementations of DDM have leveraged this
flexibility to provide meaningful reductions in required
federate data processing at manageable computation
costs. The use of DDM has enabled larger scale
exercises than were previously possible with DIS-style
state update broadcasts, and requisite application-level
processing. The following sections of this paper describe our experiences using different implementations of DDM to successfully conduct these large joint experiments.
2. Best Effort DDM implementations using
IP Multicast
The ability to send data to a dynamically controllable
subset of possible receivers is a powerful tool that can
be used in a number of ways to implement the DDM
abstraction. However, there are a number of limitations
of the current implementations of IP multicast that
constrain its use. In particular, the total number of
multicast groups that can be used simultaneously is
less than 6000 in current (2001) commercial router
implementations. Furthermore, the dynamism of group membership is limited as well, with current routers being capable of handling fewer than 1000 subscribes or unsubscribes per second from all the
client machines. In the routers and switches used in
the J9901 and AO00 exercises, those limits were only
3000 groups and 300 subscription messages per
second, which proved even more limiting than current
technology.
Thus, the subject of just how to map the DDM Region
and Routing Space abstractions onto multicast groups
has been and is still a subject of a good deal of
research [6] [7] [8] [9] [10] [11] [12] [13].
Furthermore, each design used to implement this
mapping has its own limitations and drawbacks. There
are two general categories of such mappings, statically
assigned grids and dynamic assignment based on
connectivity requirements.
2.1 Statically Assigned Grids
The simplest way to map abstract regions onto
multicast groups is to divide the entire routing space
into an n-dimensional grid, and assign a multicast
group to each n-dimensional grid square. Therefore,
when a message is sent, the interests of the receivers
are implicitly taken into account by the simple
expedient of moving their multicast subscriptions to
cover the groups that are overlapped by their
subscriptions. This is the approach taken by the STOW
RTI-s prototype, as well as current versions of the
RTI-NG 1.3 [10] [14].
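As a rough illustration of this mapping, the sketch below is our own simplification (with made-up grid dimensions and group numbering, not the RTI-s or RTI-NG implementation): each grid cell owns one multicast group, updates are sent to the group of the cell they fall in, and a subscription region is expanded to every cell it overlaps.

```python
# Illustrative sketch of statically gridded DDM-to-multicast mapping.
# All constants are invented example values, not real RTI configuration.

BASE_GROUP = 0xE9000000            # 233.0.0.0: example base group number
CELLS_X, CELLS_Y = 64, 64          # grid resolution (example)
SPACE_X, SPACE_Y = 1000.0, 1000.0  # routing space extent (abstract units)

def cell_index(v: float, cells: int, size: float) -> int:
    """Grid cell index along one dimension, clamped to the space boundary."""
    return min(int(v / size * cells), cells - 1)

def group_for_update(x: float, y: float) -> int:
    """Multicast group an attribute update at point (x, y) is sent to."""
    return (BASE_GROUP
            + cell_index(y, CELLS_Y, SPACE_Y) * CELLS_X
            + cell_index(x, CELLS_X, SPACE_X))

def groups_for_region(x_lo, x_hi, y_lo, y_hi):
    """Groups a receiver joins for a rectangular subscription region.

    The region is implicitly expanded to cell boundaries, which is the
    source of the extra, unrequested data that must then be filtered on
    the receive side.
    """
    return {BASE_GROUP + cy * CELLS_X + cx
            for cy in range(cell_index(y_lo, CELLS_Y, SPACE_Y),
                            cell_index(y_hi, CELLS_Y, SPACE_Y) + 1)
            for cx in range(cell_index(x_lo, CELLS_X, SPACE_X),
                            cell_index(x_hi, CELLS_X, SPACE_X) + 1)}
```

Note that no communication between sender and receiver is needed: both sides compute the same cell-to-group assignment from static configuration, which is exactly why the approach scales so well.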
This design has the advantage of requiring very little
computation to do the mapping, as well as requiring no
extra communication between the senders and
receivers. Therefore, its scaling characteristics are very
good, which is a major advantage to joint
experimentation.
However, its disadvantages can be problematic. One
characteristic of a typical exercise is that the density of
objects across the space is very non-uniform, with rear
areas containing only logistics vehicles, and scattered
battles containing a large number of platforms in a
small space. Therefore, to get good coverage of
regions of high density, a large number of grid cells are
wasted on regions of low density, where they are much
less necessary. Since multicast groups are a limited
resource, this can be a critical problem. One way of
working around this limitation is to provide a
mechanism to specify areas of higher grid density
within the grid [15]. This allows more efficient use of
the limited multicast group resource, at the expense of significantly reduced dynamism in the simulation scenario.
Further, this design leads to scalability problems in the
presence of non-point publication regions. In order to
satisfy the receipt requirements, the data must be
duplicated and sent to each group overlapping the
region, which leads to obvious inefficiencies.
Finally, this design inherently results in federates receiving more data than they asked for. Since regions are implicitly expanded to grid cell boundaries when subscribing and publishing, each federate receives extraneous data, and receive-side filtering must be done to remove it. This effect can be reduced, but not eliminated, by careful selection of grid densities, which carries its own penalty of reduced dynamism.
2.2 Sender-side overlap calculation and receiver grouping
In this design, each federate lets each other federate
know about its subscription regions. This lets the
sending federate calculate exactly who wants to
receive the data being sent, and lets it send the data to a
multicast group that is joined only by the subset of
federates that should receive it. Each federate is
required to calculate overlaps when any region changes
so it can subscribe to groups in a way that possible
senders can reach it. This is the design implemented by
DMSO RTI 1.3 [16].
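The sender-side matching described here can be sketched roughly as follows (an illustration of the idea only, not DMSO RTI 1.3 code; federate names and region shapes are invented):

```python
# Illustrative sketch of sender-side overlap calculation: every federate has
# declared its subscription regions to all others, so the sender computes
# exactly which federates should receive a given update.

def overlaps(a, b):
    """Rectangular regions given as per-dimension (low, high) pairs.

    Two regions overlap iff their ranges intersect in every dimension.
    """
    return all(alo < bhi and blo < ahi
               for (alo, ahi), (blo, bhi) in zip(a, b))

def receiver_set(publication, subscriptions):
    """Federates whose declared subscription regions overlap the publication.

    `subscriptions` maps federate id -> list of regions. This full scan over
    every region of every federate is the per-update overhead that scales
    poorly as regions and federates multiply.
    """
    return {fed for fed, regions in subscriptions.items()
            if any(overlaps(publication, r) for r in regions)}

subs = {
    "display_A": [[(0, 50), (0, 50)]],      # watching the southwest quadrant
    "display_B": [[(60, 100), (60, 100)]],  # watching the northeast quadrant
}
# An update published in a small region near (10, 10) reaches only display_A.
assert receiver_set([(10, 11), (10, 11)], subs) == {"display_A"}
```

The result of this calculation then drives the assignment of a multicast group joined by exactly that receiver subset, giving the perfect filtering, and the bookkeeping cost, discussed below.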
This design minimizes the amount of data received by
each federate since it does perfect matching on the
source side. Furthermore, it minimizes the network
traffic for the updates, since it will only ever send each
message once.
However, the overhead incurred by this scheme is very
high. Each subscription must be sent to all the
federates in the federation, which negates the
bandwidth advantages of the actual data flow,
particularly in the joint experimentation setup, with its
highly dynamic interest changes. Further, calculation
of region overlap is an algorithm that scales poorly
with number of regions [17].
Additionally, this scheme scales poorly with the number of federates: in the worst case, the number of multicast groups required grows as N³ in the number of federates. Since multicast groups are a
limited resource, this leads to situations where either
perfect filtering is lost or the routers are overwhelmed.
3. Large-scale Joint Experiment use cases
In the Joint Experimentation world, there have been
three large-scale exercises that have heavily depended
on DDM to provide the scalability required by the
scenarios. The STOW97 ACTD, J9901, and AO00
have all scaled distributed platform-level simulation to
levels previously unattained. In particular, each system
produced an order of magnitude more objects than any
single federate could accept. See Figure 3.1. See also
[18] [19] [20] for more details.
Event     Federate Count   Vehicle Count   Object Count   Max Objects Per Federate
STOW97    300              3600            7000           400
J9901     100              20000           40000          5000
AO00      150              100000          160000         20000
Figure 3.1. Acceptable object counts vs. total counts
The only way these levels of performance were
attained was through the use of DDM. Since each of
these exercises was run using RTI-s, the
implementation used was a statically gridded layout,
with high-resolution inset grids. See [21].
Furthermore, each of these exercises was
geographically distributed across a number of sites,
with multicast filtering hardware performing a lot of
the DDM filtering before the traffic reached the
operating system. See Figure 3.2 and 3.3.
Figure 3.2. Sites for STOW 97
Figure 3.3. Sites for J9901 and AO00
The major differences between the three exercises were due to the differing scenarios in play. STOW97 was a ground-heavy force-on-force exercise with very high concentrations of ground forces that came together and intermixed during the course of the exercise. J9901 and AO00 were attack operations scenarios in which a ground war had not commenced: all ground units were OPFOR or neutral, while BLUEFOR units were aircraft and ships. A very large part of the network-level load in the latter two exercises was due to the presentation of the BLUEFOR sensors' perceived truth information across the network.
Due to these differences, the gridding assignments
were reconfigured heavily for each exercise. In
STOW97, a great deal of attention was paid to the
geographic layout of all the grids, with heavy tuning
over a period of several months. In J9901, it became
more important to differentiate force alignment as a
first filter, with geography becoming a secondary but
still important factor. In AO00, finer control over the
perceived truth data became the most important factor,
and more attention was paid to that.
One of the most important factors in each of these
exercises was the fact that we had knowledge of the
algorithm that the RTI used to map regions to multicast
groups. Using network monitoring tools, we were able
to easily identify areas that required tuning. Further,
since this assignment was controlled by the RTI's RID
file, we could quickly and simply try further
refinements simply by distributing a single data file to
the participating machines. Since we believe that
automatic tuning of multicast group assignment is not
feasible in a joint experimentation scenario due to
efficiency constraints, such manual tuning is very
important, and needs to be possible in a flexible,
convenient fashion. This sort of configurability and
visibility is an area that we believe needs to be
explored further by RTI developers.
The use of RTI-s and static inset grids has performed
extremely well for our purposes. The static assignment
means that the overhead of doing DDM is minimal. In
tests performed using the sender-side squelching
approach, we have not been able to replicate the
scalability that is provided by the static grids [22]. The
fact that inset grids are available has meant that we can
tune our implementation very carefully for each
exercise and provide very close to optimal filtering
without the overhead of the actual overlap calculations.
4. Future
We are currently preparing for the Millennium
Challenge 2002 (MC02) experiment. This exercise
provides even greater challenges than previous joint
experiments, due to two major factors.
For MC02, we will be running with dozens of new
federates, none of which have ever implemented DDM
before, and some of which are still implementing their
HLA interfaces. The amount of development required
to indicate to the RTI what the simulation's interests
are is not insignificant. Scheduling pressure may not
allow us to use DDM to its full capabilities.
Also, MC02 will be the first large-scale platform-level
joint experiment to be run with HLA v1.3-compliant
federates. Due to corner cases in the way non-DDM
and DDM federates interact, the 1.3 specification binds
each attribute to a single routing space, which does not
allow the use of separate spaces for a single platform
class, or even a set of related platform subclasses of a
single base class. To get around this limitation, we will be using the "hyperspace" concept detailed in [22], which uses a dimension of a common routing space to provide the routing differentiation that separate routing spaces previously gave us.
This does have major drawbacks, though. It leads to
incredible confusion, both on the part of the
implementers, and on the part of documenters. It
carries a great deal of complexity, most of which was previously contained within the RTI, and which the application must now implement.
Combine that with the fact that there are dozens of new
federates, each of which needs to implement the same
code, none of which share a common codebase, and
the problem becomes very difficult indeed.
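To make the hyperspace idea concrete, here is a rough sketch, our own illustration of the concept from [22] with invented channel names, not actual federation code: categories that would previously have lived in separate routing spaces are assigned disjoint bands of one extra dimension in a single common routing space.

```python
# Illustrative sketch of the "hyperspace" workaround: one extra dimension of
# a common routing space carries the differentiation formerly provided by
# separate routing spaces. Channel names here are made-up examples.

CHANNELS = ["ground_truth", "perceived_truth", "aggregate"]

def channel_extent(name: str):
    """The (low, high) band of the extra dimension reserved for a category."""
    i = CHANNELS.index(name)
    return (float(i), float(i + 1))

def make_region(x_lo, x_hi, y_lo, y_hi, channel):
    """A region in the common space: geographic extents plus a channel band.

    Because the channel bands are disjoint, a subscription in one band can
    never match a publication in another, mimicking separate routing spaces.
    """
    return [(x_lo, x_hi), (y_lo, y_hi), channel_extent(channel)]

# Subscribing to perceived-truth data over a 100x100 map window:
region = make_region(0.0, 100.0, 0.0, 100.0, "perceived_truth")
assert region[2] == (1.0, 2.0)
```

The cost noted above is visible even in this toy: every federate must agree on the channel assignment and construct its regions accordingly, logic the RTI handled implicitly when routing spaces were distinct.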
Furthermore, since we are assuming that RTI-NG will
be used as the RTI for MC02, and since it implements
uniform static grids without multiple density insets as
its multicast mapping scheme, we may not be able to
assign multicast groups in an efficient fashion.
Given these restrictions, the MC02 federation may not
be able to take full advantage of DDM. However, some
use of DDM will probably be required, since the
scalability requirements of the exercise likely cannot
be met by a static DM design.
5. Caveats regarding the use of DDM
When doing scenario design and configuration of a
federation, it should be possible to allocate federates to
physical sites in such a way to minimize the amount of
cross-site interaction required, and therefore minimize
the amount of data that is sent across a WAN.
However, it is often the case that the roles of the sites
are defined based on the organizational hierarchy of
the project running the experiment, or the
organizational hierarchy of the simulated units, rather
than the likely interaction patterns of the units. This
makes the exercise much easier to explain, control, and analyze; however, it also makes it impossible to minimize WAN communications. For example, if it
was decided that one of your sites would
simulate/publish all of the OPFOR, it is likely that
every site that simulates BLUEFOR will need to
receive all of the data from that site, and that site
would need to receive all of the data from all the
BLUEFOR sites. If it were possible to simulate
different regions of the battle at different sites, WAN
communications would become much less of a
bottleneck. For an example of this strategy, the STOW97 exercise included a separate area of operations, simulated in the United Kingdom, well to the south of most of the battle. Because there was relatively little interaction between the two areas of operations, the transatlantic communication was quite minimal.
For after action review purposes, it is often a
requirement that huge amounts of data be logged to
permanent storage devices for later processing.
Therefore, the approach used to log the data is
important. Optimally, a local logging approach could
be utilized that doesn’t affect the network at all, with
each federate sending data directly to a local disk.
However, this concept of local logging has a number
of practical difficulties, particularly in the mechanics
of collecting and integrating the logs from each
machine. Therefore, centralized logger federates are
often used, which results in a single site receiving all
the data in the federation, and a single machine having
to cope with all the data being produced.
Another situation that hampers effective DDM usage is the presence of data-hungry federates, which is similar to the logger problem described above. Often, as is the case for MC02, certain federates are one-box solutions that need to consume the entire simulated situation in order to accomplish their task. These federates receive little or no benefit from using DDM, and in fact could suffer negative effects depending upon how DDM is implemented in the RTI.
A major issue that we have not addressed directly is
that we have been enjoying state-of-the-art networking
hardware and networking design expertise. Even with
such support, we have had a great deal of difficulty
configuring the many pieces of hardware required to
handle the quantity of multicast groups we use and the
rapidity of change in subscriptions our systems require.
Newer technology is reducing the number of boxes in
use, which should help reduce some of the
configuration load. However, not all the configuration
is in the network. Some is in the operating system software, and some is in the NIC in each machine. We have used 3Com 3C905B and 3C905C network cards, which have the best hardware multicast implementation available in a COTS Fast Ethernet card, with a Linux kernel that uses a 3Com-supplied driver to activate the multicast support, as well as IP multicast hash tables similar to those described in [23].
The HLA 1.3 specification added a new requirement
that binds attributes to routing spaces. This means that
our previous habit of routing data to different spaces in
a human-understandable fashion is no longer possible
[22]. As noted above, we get around this through
methods that require much more complexity on the
part of the federate developer, and which are much less
intuitive to us. Further, the IEEE 1516.1 HLA
specification changes the way DDM is done again, but
in a fashion that lends itself to the “hyperspace”
implementation as well [24]. We therefore believe that
the federation being assembled for MC02 will be
relatively easily transitioned to 1516 when it becomes
feasible to do so.
6. Conclusion
Joint Experimentation requires the continued scaling
up of exercises until reality can be matched down to
the platform level. Since simulation requirements
continue to outpace advances in networking
technology and processor speed, DDM will continue to
play an important role in meeting the needs of the Joint
Experimentation community.
We have had great success in the use of statically
assigned grids to implement DDM, and we encourage
its further use and development. Additionally, if the
extra bandwidth and computation requirements of
source-side squelching can be reduced to more
reasonable levels, it could become a much more
powerful expression of the DDM capability, but its
current overhead renders it unsuitable for Joint
Experimentation.
We are working towards the Millennium Challenge
2002 experiment, which will lead into the Olympic
Challenge 2004 experiment. We believe that DDM
will be a very important piece of the success of these
experiments, although the challenges in implementing
it are quite daunting.
There are a number of important factors that need to be
taken into account when planning the use of DDM in
an exercise. Most importantly, the design of the
exercise scenario usually determines the dataflow
requirements, and the effectiveness of DDM can be
significantly reduced if care is not taken.
We are very interested in developments in the DDM services implemented by future RTIs. We are currently not particularly worried about the differences in DDM introduced by the IEEE 1516.1 spec, since it was taken into account in the implementation of DDM we are using.
Finally, DDM has been a primary factor in the success of the STOW program in particular, and of platform-level Joint Experimentation in general.
7. References
[1] M. Torpey, D. Wilbert, B. Helfinstine, W. Civinskas: “Experiences and Lessons Learned Using RTI-NG in a Large-Scale, Platform-Level Federation” Simulation Interoperability Workshop, 01S-SIW-046, March 2001.
[2] S. Deering: “Host Extensions for IP Multicasting” IETF Network Working Group, RFC 1112, August 1989.
[3] W. Fenner: “Internet Group Management Protocol, Version 2” IETF Network Working Group, RFC 2236, November 1997.
[4] K. Morse: “An Adaptive, Distributed Algorithm for Interest Management.” Ph.D. Dissertation, University of California, Irvine, May 2000.
[5] M. Petty, K. Morse: “Computational Complexity of HLA Data Distribution Management” Simulation Interoperability Workshop, 00F-SIW-143, September 2000.
[6] J. Calvin, C. Chiang, D. Van Hook: “Data Subscription” 12th Workshop on Standards for the Interoperability of Distributed Simulations, 95-12-060, March 1995.
[7] J. Calvin, D. Cebula, C. Chiang, S. Rak, D. Van Hook: “Data Subscription in Support of Multicast Group Allocation” 13th Workshop on Standards for the Interoperability of Distributed Simulations, 95-13-093, September 1995.
[8] S. Rak, D. Van Hook: “Evaluation of Grid-Based Relevance Filtering for Multicast Group Assignment” 14th Workshop on Standards for the Interoperability of Distributed Simulations, 96-14-106, March 1996.
[9] D. Van Hook, S. Rak, J. Calvin: “Approaches to RTI Implementation of HLA Data Distribution Management Services” 15th Workshop on Standards for the Interoperability of Distributed Simulations, 96-15-084, September 1996.
[10] J. Calvin, C. Chiang, S. McGarry, S. Rak, D. Van Hook, M. Salisbury: “Design, Implementation, and Performance of the STOW RTI Prototype (RTI-s)” Simulation Interoperability Workshop, 97S-SIW-019, March 1997.
[11] A. Berrached, M. Beheshti, O. Sirisaengtaksin, A. deKorvin: “Approaches to Multicast Group Allocation in HLA Data Distribution Management” Simulation Interoperability Workshop, 98S-SIW-184, March 1998.
[12] K. Morse, L. Bic, M. Dillencourt, K. Tsai: “Multicast Grouping for Dynamic Data Distribution Management” Society for Computer Simulation Conference, July 1999.
[13] K. Morse, M. Zyda: “Online Multicast Grouping for Dynamic Data Distribution Management” Simulation Interoperability Workshop, 00F-SIW-052, September 2000.
[14] S. Bachinsky, R. Noseworthy, F. Hodum: “Implementation of the Next Generation RTI” Simulation Interoperability Workshop, 99S-SIW-118, March 1999.
[15] S. Rak, M. Salisbury, R. MacDonald: “HLA/RTI Data Distribution Management in the Synthetic Theater of War” Simulation Interoperability Workshop, 97F-SIW-119, September 1997.
[16] D. Van Hook, J. Calvin: “Data Distribution Management in RTI 1.3” Simulation Interoperability Workshop, 98S-SIW-206, March 1998.
[17] M. Petty: “Geometric and Algorithmic Results Regarding HLA Data Distribution Management Matching” Simulation Interoperability Workshop, 00F-SIW-072, September 2000.
[18] L. Budge, R. Strini, R. Dehncke, J. Hunt: “Synthetic Theater of War (STOW) 97 Overview” Simulation Interoperability Workshop, 98S-SIW-086, March 1998.
[19] R. Dehncke, T. Morgan: “Virtual Simulation and Joint Experimentation: STOW and Joint Attack Operations” Simulation Interoperability Workshop, 99F-SIW-079, September 1999.
[20] A. Ceranowicz, P. Nielsen, F. Koss: “Behavioral Representation in JSAF” 9th Conference on Computer Generated Forces and Behavioral Representation, 9TH-CGF-058, May 2000.
[21] S. McGarry: “An Analysis of RTI-s Performance in the STOW 97 ACTD” Simulation Interoperability Workshop, 98S-SIW-229, March 1998.
[22] D. Coffin, M. Calef, D. Macannuco, W. Civinskas: “Experimentation with DDM schemes” Simulation Interoperability Workshop, 99S-SIW-053, March 1999.
[23] B. Helfinstine, D. Coffin, M. Torpey, W. Civinskas: “Low Cost Hardware for Large Scale Entity-Level Simulation” Simulation Interoperability Workshop, 98S-SIW-230, March 1998.
[24] S. Bachinsky, F. Hodum, R. Noseworthy: “Using the IEEE 1516.1 RTI C++ API” Simulation Interoperability Workshop, 00F-SIW-062, September 2000.
8. Author Biographies
BILL HELFINSTINE is a Staff Software Engineer at
Lockheed Martin Information Systems Advanced
Simulation Center (LMIS-ASC). Bill is the
implementer of the JSAF DDM scheme. Among his
many duties, Bill is currently the lead developer of the
RTI-s implementation.
DEBORAH WILBERT is a Senior Staff Software
Engineer at Lockheed Martin Information Systems
Advanced Simulation Center (LMIS-ASC). Deborah is
currently implementing tools that support FOM Agility
in HLA compliant federates. Her experience in
distributed simulation includes other HLA-related
design and development efforts, environmental
simulation, CGF development, scalability research,
protocol development, and C4I simulation.
MARK TORPEY is the lead developer and lead integrator of the Joint Semi-Automated Forces (JSAF) Computer Generated Forces (CGF) simulation system for USJFCOM’s JSAF Federation Joint Experimentation team. He is a Staff Software Engineer at Lockheed Martin Information Systems Advanced Simulation Center (LMIS-ASC).
WAYNE CIVINSKAS is a Senior Manager of
Lockheed Martin Information Systems Advanced
Simulation Center (LMIS-ASC). His responsibilities
include advanced concept development of simulation
system architectures, technologies and techniques that
support distributed simulation. Wayne's current focus
is on transition of distributed simulation technologies
from the prototype stage to full-scale engineering
development and production programs.