Applying System Execution Modeling Tools to Evaluate
Enterprise Distributed Real-time and Embedded System QoS
John M. Slaby, Raytheon, Portsmouth, RI, USA (john_m_slaby@raytheon.com)
Steve Baker, Raytheon, Portsmouth, RI, USA (steven_d_baker@raytheon.com)
James Hill, Vanderbilt University, Nashville, TN, USA (j.hill@vanderbilt.edu)
Douglas C. Schmidt, Vanderbilt University, Nashville, TN, USA (d.schmidt@vanderbilt.edu)
Abstract
Service-Oriented Architecture (SOA) middleware is becoming popular for enterprise distributed real-time and
embedded (DRE) systems because it provides effective
reuse of the core intellectual property (i.e., the “business
logic”) of an enterprise. SOA-based enterprise DRE systems, however, incur a new set of system integration problems associated with configuration and deployment of
components. As a result, the integration phase has become
more difficult and time-consuming, which motivates the
need for R&D to minimize the gap between the development and configuration/deployment of SOA components,
such that optimal configuration/deployment strategies can
be evaluated a priori to simplify the integration phase.
This paper illustrates qualitatively and quantitatively how
system execution modeling tools can help software and
system engineers emulate and evaluate the performance of
SOA-based enterprise DRE systems to minimize time and
effort in the integration phase.
1 Introduction
Meeting integration challenges of SOA-based enterprise DRE systems. Enterprise systems in many domains
are increasingly developed using applications composed of
loosely-coupled, coarse-grained, distributed components
running on feature-rich middleware frameworks, in what
is often termed a Service-Oriented Architecture (SOA) [].
Examples of SOA middleware platforms include J2EE,
.NET, Web Services, and the CORBA Component Model
(CCM). In SOA platforms, software components are assembled to provide various types of reusable services to a
range of application domains.
The transition to SOA middleware is gaining momentum
in the realm of enterprise systems because it helps address
problems of inflexibility and reinvention of core capabilities associated with prior generations of monolithic, functionally-designed, and “stove-piped” legacy applications.
Whereas legacy applications were developed with precisely the capabilities required for a specific set of requirements and operating conditions, SOA components are
designed to have a range of capabilities that enable their
reuse in other contexts. Enterprise systems are often developed in layers, e.g., layer(s) of infrastructure middleware
services (such as naming and discovery, event and notifi-
cation, security and fault tolerance) and a layer of applications that use these services in different compositions.
Certain types of SOA-based middleware, such as Real-time CCM [], are also increasingly being applied to the
domain of enterprise distributed real-time and embedded
(DRE) systems, such as total ship computing environments
[], which are intended to provide users with quality of service (QoS) support to process the right data in the right
place at the right time. QoS properties required by enterprise DRE systems include the low latency and jitter expected in conventional real-time and embedded systems, as
well as high throughput, scalability, and reliability expected in conventional enterprise distributed systems.
Achieving this combination of QoS capabilities, each of
which is hard enough on its own, is much harder.
SOA middleware can also complicate software lifecycle
processes by shifting responsibility from software development engineers to other types of software engineers,
such as software configuration and deployment engineers,
and systems engineers. Software development engineers
traditionally created entire applications in-house using top-down design methods that could be evaluated throughout
the lifecycle. In contrast, software configuration and deployment engineers and system engineers must increasingly assemble enterprise DRE systems by customizing and
composing reusable SOA components, which are commonly evaluated during the integration phase. Unfortunately, problems uncovered during integration are much more
costly to fix than if they were discovered earlier in the development lifecycle. A key R&D challenge is therefore to
expose these kinds of issues (which often have dependencies on components that are not available until the end of
development) earlier in the lifecycle than during the integration phase.
SOA-based enterprise DRE systems require design- and
run-time configuration steps to customize the behavior of
reusable components to meet QoS requirements in the
context where they run. Finding the right configurations of
components that meet application QoS requirements is
hard. For example, tuning the concurrency configuration
of a multi-hypothesis tracker to support both real-time and
fault-tolerant QoS is tricky. Moreover, since application
functionality is distributed over many components in a
SOA-based architecture, developers must interconnect and
integrate their components in a manner that is correct and
efficient, which is particularly tedious and error-prone using conventional handcrafted configuration processes.
In addition to being configured properly, the components
assembled to form an application must be deployed on the
appropriate nodes in an enterprise DRE system. This deployment process is tricky since characteristics of hosts
onto which components are deployed and the networks
over which they communicate can vary statically (e.g.,
due to different hardware/software platforms used in a
product-line architecture) and dynamically (e.g., due to
battle damage, changes in mission modes of the system, or
due to differences in the real vs. expected behavior of applications during actual operation). Evaluating the operational characteristics of these system deployments can
therefore be tedious and error-prone, particularly when
deployments are performed manually.
Another complexity of evaluating deployments of SOA-based enterprise DRE systems stems from the sharing of components by applications with differing QoS requirements, such as a system resource manager responsible for processing tasking requests from high-priority tactical applications and low-priority desktop applications. It is hard enough to assure that a stand-alone application can meet stringent QoS requirements using dedicated resources. It is even harder to assure these requirements with a set of loosely-coupled SOA components sharing resources with other applications in an enterprise DRE system.
Despite the flexibility offered by SOA middleware, there are often surprisingly few configuration and deployment designs that can satisfy the functional and QoS requirements of an enterprise DRE system. What is needed, therefore, is a new generation of system execution modeling tools [] that software engineers, systems engineers, and customers can use to explore design alternatives from multiple computational and valuation perspectives, at multiple lifecycle phases, using multiple quality criteria, with multiple stakeholders and suppliers. In addition to validating design rules and checking for design conformance, these tools facilitate “what if” analyses of alternative designs to quantify the impacts and costs of certain design choices on end-to-end system performance. In the context of enterprise DRE systems, system execution modeling tools help to discover, measure, and rectify integration and performance problems early in a system's lifecycle (e.g., in the architecture and design phases), rather than waiting until the integration phase, when mistakes are much harder to fix. This paper describes a system execution modeling toolchain that combines QoS-enabled SOA-based middleware, model-driven development (MDD) technologies, and the Component Workload Emulator (CoWorkEr) Utilization Test Suite (CUTS) to create emulations of SOA-based applications rapidly and then perform experiments that systematically evaluate end-to-end QoS in a representative enterprise DRE system environment.
Paper organization. The remainder of this paper is organized as follows: Section 2 describes how our system execution modeling toolchain combines our prior work on QoS-enabled SOA middleware and MDD tools with the new CUTS to reduce the time and effort spent in the integration phase due to configuration and deployment problems; Section 3 describes the architecture of CUTS and how we resolved its design challenges; Section 4 shows how we applied CUTS to quantitatively evaluate a representative enterprise DRE system in the domain of shipboard computing; Section 5 compares our R&D efforts with related work; and Section 6 presents concluding remarks and summarizes lessons learned.
2 Solution Approach: Combining QoS-enabled SOA Middleware with MDD Tools
Our initial solution approach. To address the configuration and deployment problems common to integrating
SOA components in enterprise DRE systems, our initial
work combined QoS-enabled SOA middleware platforms
with MDD tools. QoS-enabled SOA middleware supports
the provisioning of key QoS properties, such as
(pre)allocating CPU resources, reserving network bandwidth/connections, and monitoring/enforcing the proper
use of DRE system resources at runtime to meet end-to-end QoS requirements. MDD tools help combine (1) domain-specific modeling languages (DSMLs), which provide programming notations that formalize the process of
specifying application logic and QoS-related requirements,
(2) metamodeling, which helps to automate the definition
of type systems that precisely express key characteristics
and constraints associated with DSMLs for particular application domains, and (3) model transformations and code
generation, which automate and ensure the consistency of
software implementations via analysis information associated with functional and QoS requirements captured by
models of domain-specific structure and behavior.
In prior work, we developed a QoS-enabled SOA middleware platform called the Component-Integrated ACE ORB
(CIAO) [] that combines Lightweight CCM [] capabilities
(such as standards for specifying, implementing, packaging, assembling, and deploying components) with Real-time CORBA [] features (such as thread pools and priority preservation policies) to create a Real-time CCM SOA-based middleware platform. Likewise, we created an MDD
toolsuite called Component Synthesis using Model Integrated Computing (CoSMIC) [], which is an integrated set
of DSMLs that support the development, deployment, configuration, and evaluation of enterprise DRE systems
based on CIAO. CoSMIC is implemented using the Generic Modeling Environment (GME) [], which is an open-
source MDD toolkit for creating and using domain-specific
modeling languages (DSMLs).
By combining CIAO and CoSMIC, we tackled many integration challenges associated with configuration and deployment of enterprise DRE systems by leveraging the
power of MDD tools to enforce correct-by-construction
design. For example, we used (1) CoSMIC’s model interpreters to automatically generate the required XML configuration files [] and (2) CIAO’s Deployment And Configuration Engine (DAnCE) to deploy the resulting component assemblies on nodes in a DRE system [], as shown
in Figure 1.
Figure 1. Integrating CIAO, DAnCE, and CoSMIC
Limitations with our initial approach. Despite the benefits of combining CIAO, DAnCE, and CoSMIC, our experiences applying these technologies to a large-scale shipboard computing system indicated that they were insufficient to evaluate the QoS of applications in enterprise DRE
systems due to the following limitations:

• Modeling limitations. CIAO, DAnCE, and CoSMIC alone provided insufficient support for evaluating QoS-related characteristics (such as communication delay, temporal phasing, parallel execution, and synchronization) that arose in production-scale enterprise DRE system environments where many different applications ran concurrently across a network that included both shared and dedicated components.
• Phase ordering dependencies. Application components that exercise the infrastructure middleware services are often not developed until later in the system lifecycle, which means that infrastructure services are not tested adequately under the realistic workloads needed to validate their architecture and design.
We initially considered evaluating enterprise DRE system
QoS characteristics via simulation []. The complexity, interdependencies, and sheer number of variables involved
in SOA-based enterprise DRE systems, however, made the
development and maintenance of realistic models to simu-
late scenarios impractical. Moreover, while pure simulation can provide valuable information about system QoS
behavior, it is hard to leverage simulation results directly
in the production operational environment. We therefore
needed technologies to emulate and/or integrate key portions of an enterprise DRE system so we could evaluate
the end-to-end QoS characteristics of applications in a
production environment accurately, even before any actual
application components were developed. We wanted the
results of our evaluations to be reflected automatically in
the SOA-based application and middleware configurations
and deployments we were creating, so that our evaluations
would become more accurate as our understanding of the
characteristics of the applications increased.
Overcoming limitations via system execution modeling
tools. To overcome limitations with our initial approach –
and with pure simulation-based solutions – we created the
Component Workload Emulator (CoWorkEr) Utilization
Test Suite (CUTS), which enabled us to build a system
execution modeling toolchain for creating arbitrarily complex SOA-based applications rapidly and performing experiments that systematically evaluated interactions that
would otherwise have been hard to simulate. In particular,
CUTS provides data reduction and visualization tools to
manage and comparatively analyze results from various
alternate execution architectures, thereby helping to coevolve the static and dynamic aspects of our enterprise
DRE system design. It also allowed us to better evaluate
enterprise DRE system QoS by importing measured performance data from CUTS-based applications running
over actual configured and deployed infrastructure middleware services.
Combined with our prior efforts, CUTS allowed us to create a more robust and complete solution for emulating actual shipboard computing application components and
evaluating QoS earlier in the system lifecycle. For example, we used CoSMIC to create models of production applications composed of these components. Likewise, we
used CIAO and DAnCE to deploy these application components into a representative testbed and conduct systematic experiments that measured their performance against
production QoS specifications from the actual shipboard
computing system. The remainder of this paper describes
the capabilities of CUTS and evaluates it qualitatively and
quantitatively in the context of a representative enterprise
DRE shipboard computing system.
3 The Component Workload Emulator Utilization Test Suite
This section discusses the architecture and solutions to design challenges we encountered when developing CUTS.
3.1 CUTS Architecture
CUTS was designed to satisfy requirements for the flexible
emulation of enterprise DRE systems, and for the collection and analysis of performance statistics provided by that
emulation. At the heart of CUTS is a new assembly of
CCM components, called a CoWorkEr, which can be programmed to emulate an arbitrary application component.
For instance, in the context of total shipboard computing,
example application components include an environmental
detector, a planning and combat management application,
or a weapon control system (effector).
CoWorkErs can be connected together via their exposed
ports to create operational strings [], which capture the
partial ordering of a set of executing software components.
To simplify the configuration of CoWorkErs, we created
an MDD-based DSML to characterize the behavior of operational strings of components by specifying their processor, memory, database, and input/output usage profiles.
Figure 2 shows the key elements of the CUTS, which fall
into two broad categories: workload generation and test
control and analysis.
[Figure 2. A CoWorkEr Assembly: a Trigger, EventHandler, CPUWorker, MemoryWorker, DatabaseWorker, EventProducer, and BenchmarkAgent wired together, with event ports to/from other WLGs/CoWorkErs, a connection to the test database, and a connection to the BenchmarkDataCollector.]
Workload generation is implemented in CUTS as an assembly-based CCM component composed of the following monolithic CCM components:
• The EventHandler component can receive user-defined events. It records the number of events received for each type, along with data regarding the delay between original publication of the event and the onset of processing. An EventHandler also tracks the time required to process each event it receives. In addition, a workload may be associated with receiving combinations and numbers of events, which is performed by the worker components described next.
• The CPUWorker component performs CPU-intensive operations. As with all workers, the quantity of work to perform is specified as a number of repetitions, which represent an abstract unit of work.
• The MemoryWorker component performs allocation and deallocation of memory.
• The DatabaseWorker component performs a series of insert, update, and delete operations on a specified database.
• The EventProducer component (which is also a worker) publishes events that carry a data payload of the desired size. Events are time-stamped prior to transmission.
• The Trigger component is provided to represent external input to a simulated application, or regularly scheduled, time-driven processing not resulting from the receipt of an event. Triggers induce workers to perform a workload at a specified interval with a certain probability, thereby providing both periodic and pseudo-random behavior. A Trigger can also perform a startup workload during activation.
These monolithic CCM components can be programmed and configured into CCM component assemblies via an MDD tool called the Workload Modeling Language (WML), which is a GME-based DSML that permits the specification of workloads in a reliable and structured manner via a GUI. XML characterization files are generated from a WML model and subsequently parsed by the EventHandler and Trigger components to dictate their behavior.
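To make the worker and trigger abstractions concrete before turning to test control and analysis, the sketch below shows, in C++ (the implementation language of CIAO/CCM components), how a periodic, probabilistic trigger might drive a CPU workload expressed in abstract repetitions. The class names and the arithmetic performed per repetition are illustrative assumptions, not the actual CUTS code.

// Illustrative sketch only -- not the actual CUTS worker or trigger code.
#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>
#include <thread>

// A CPU worker whose workload is expressed in abstract "repetitions",
// mirroring how CUTS workers are parameterized.
class SimpleCpuWorker {
public:
  void perform(std::size_t repetitions) const {
    volatile double x = 1.0;              // volatile keeps the loop from being optimized away
    for (std::size_t r = 0; r < repetitions; ++r)
      for (int i = 0; i < 10000; ++i)     // one "repetition" = a fixed busy loop (assumed)
        x = x * 1.000001 + 0.000001;
  }
};

// A trigger that fires a workload at a fixed interval with a given
// probability, providing periodic and pseudo-random behavior.
class PeriodicTrigger {
public:
  PeriodicTrigger(std::chrono::milliseconds period, double probability)
    : period_(period), probability_(probability), gen_(std::random_device{}()) {}

  // Run for a bounded number of periods (a real trigger runs until undeployed).
  void run(const SimpleCpuWorker& worker, std::size_t repetitions, int periods) {
    std::bernoulli_distribution fire(probability_);
    for (int p = 0; p < periods; ++p) {
      std::this_thread::sleep_for(period_);
      if (fire(gen_)) {
        worker.perform(repetitions);
        std::cout << "period " << p << ": workload performed\n";
      }
    }
  }

private:
  std::chrono::milliseconds period_;
  double probability_;
  std::mt19937 gen_;
};

int main() {
  SimpleCpuWorker cpu;
  PeriodicTrigger trigger(std::chrono::milliseconds(100), /*probability=*/0.75);
  trigger.run(cpu, /*repetitions=*/15, /*periods=*/10);
}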
Test control and analysis in CUTS includes the following elements:
• The BenchmarkAgent component completes the CoWorkEr assembly shown in Figure 2. It requests test data collected by the EventHandler at a user-defined interval and transmits this data to the BenchmarkDataCollector.
• The BenchmarkDataCollector is a CCM component that uses ODBC to submit test data to a centralized BenchmarkDatabase implemented with MySQL.
• The BenchmarkManagerWeb-interface implements the bulk of the test control and analysis functionality via an ASP.NET application. This manager processes data captured in the BenchmarkDatabase and invokes DAnCE's ExecutionManager [] to start and end the deployment of test assemblies. In addition to the browser interface, a web-services interface allows the use of scripting languages, such as Perl and Python, for automated testing.
Figure 3 illustrates how CUTS is used to evaluate the QoS
of enterprise DRE systems. It shows how dedicated hosts,
or test nodes, exist inside the test network and the BenchmarkDataCollector,
BenchmarkManagerWeb-interface,
and web-service clients exist outside the test network. This
setup prevents outside interference on tests run using CUTS. It shows how user-specified “critical paths” (time-sensitive chains of event processing) may be defined to
verify that critical application functionality can be provided reliably using the allocated resources. Either during or
after emulation, users can analyze the results with the help
of a series of charts and timelines.
[Figure 3. Using CUTS to Evaluate QoS in an Enterprise DRE System: test nodes hosting WLGs inside the test network; the test database, the Benchmark Manager (BDC & BMW) with its database, browser clients, and web-service clients outside it.]
3.2 CUTS Design Challenges and Solutions
We next discuss the design challenges we encountered when designing and implementing CUTS, along with our solutions to them. Section 4 then illustrates how these solution
techniques were applied to an enterprise shipboard computing DRE system.
3.2.1 Emulating QoS Characteristics of Enterprise DRE Systems
Problem. Using conventional SOA-based development
processes, it is hard to evaluate the QoS characteristics of
enterprise DRE systems until late in development, i.e. at
integration time. In the case of very large systems, such as
those seen on major defense, aerospace, and commercial
programs, this means that inadequacies of system architectures and platforms may not be ascertained until years into
development. Significant changes made at this point can be
extremely costly, as they will likely affect the design, implementation, and (re)validation of many software and
hardware components.
Solution. Emulate application component behavior to evaluate the QoS of the enterprise architecture.
CUTS provides a straightforward way to emulate basic
application component operations, such as intensive computation, database activity, memory access, and event-based network communication. Using the WML DSML
described in Section 3.1, the expected behavior of individual components can be modeled based on initial requirements and refined easily as behavior is better understood.
The actual layout of operational strings of application
components can then be specified using CoSMIC, with
CoWorkErs initially used in place of the actual components that will ultimately comprise the applications.
Using output from WML and CoSMIC, CoWorkErs are
deployed by DAnCE in the same configuration envisioned
for the actual applications, and are characterized at runtime to behave like the application operational strings they
are emulating. Based on the data gathered from this emulation, it is possible to evaluate empirically how well the
hardware, middleware, and other supporting technologies
meet common QoS requirements of applications. If assumptions about the behavior of individual components
change during the course of development, the WML model
can be altered to reflect the new behavior. The emulation
can then be rerun to evaluate the effects of these changes
on overall system QoS. Finally, as actual components
reach maturity, the CoWorkEr placeholders in CoSMIC
models can be replaced with the genuine artifacts. Since
the CoSMIC models created to deploy and configure emulated applications eventually become the deployment plan
for the finished product, the effort invested in constructing
an accurate model for emulation is not wasted. In contrast,
in approaches based on pure simulation there is rarely a
direct link between simulations and the final product.
The remainder of this section describes key problems we
encountered when developing CUTS and explains the solutions we adopted to resolve these problems.
3.2.2 Non-intrusive Metrics Collection
Problem. An ad hoc metrics collection system might interfere with the emulation, skewing test results. Metrics
should therefore be collected in a minimally intrusive fashion.
Solution. Aggregate metrics locally in bounded, numeric form and offload their storage to dedicated low-priority threads. The EventHandler, BenchmarkAgent,
and BenchmarkDataCollector components described in
Section 3.1 work together to collect metrics efficiently.
Each EventHandler maintains a local in-memory record of
the number and type of events received, the maximum and
minimum transmission and processing times for each event
type, and running totals for transmission and processing
time.
The BenchmarkAgent obtains the data from the
EventHandler at a user-specified interval in a dedicated
thread, resetting the Handler’s running totals. This interval
is typically in the range of minutes, ensuring minimal impact on both the application and the network. The Agent
then transmits the data to the BenchmarkDataCollector,
which immediately queues the data and returns. The data is
then dequeued and entered in a MySQL database within a
dedicated thread. These dedicated threads run at a lower
priority than the threads running the CoWorkErs, thereby
minimizing the impact on application and middleware behavior.
All data stored and transmitted by the EventHandler and
the BenchmarkAgent is numerical; memory usage is minimal and is bounded by a constant factor. The aspects of
metrics collection most likely to cause variable memory
usage and delays (queueing of data and entry of data into a
database) have deliberately been placed in a separate component (i.e., the BenchmarkDataCollector), which can be
hosted by a machine that is not involved in the emulation.
Moreover, separate network interfaces can be used to decouple the transmission of metric data from the transmission of CoWorkEr operations.
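The sketch below is a minimal illustration of the bounded, numbers-only bookkeeping described above, assuming a hypothetical per-event-type metrics record and registry; the real EventHandler and BenchmarkAgent interfaces are not reproduced here.

// Illustrative sketch of bounded, in-memory metrics aggregation -- not the CUTS implementation.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <mutex>
#include <string>

// Per-event-type record: counts, min/max, and running totals only,
// so memory use stays constant no matter how many events arrive.
struct EventMetrics {
  std::uint64_t count = 0;
  std::uint64_t min_transmission_us = UINT64_MAX, max_transmission_us = 0, total_transmission_us = 0;
  std::uint64_t min_processing_us   = UINT64_MAX, max_processing_us   = 0, total_processing_us   = 0;
};

class MetricsRegistry {
public:
  // Called on the event-processing path: cheap numeric updates only.
  void record(const std::string& event_type, std::uint64_t transmission_us, std::uint64_t processing_us) {
    std::lock_guard<std::mutex> lock(mutex_);
    EventMetrics& m = metrics_[event_type];
    ++m.count;
    m.min_transmission_us = std::min(m.min_transmission_us, transmission_us);
    m.max_transmission_us = std::max(m.max_transmission_us, transmission_us);
    m.total_transmission_us += transmission_us;
    m.min_processing_us = std::min(m.min_processing_us, processing_us);
    m.max_processing_us = std::max(m.max_processing_us, processing_us);
    m.total_processing_us += processing_us;
  }

  // Called by a low-priority collector thread at a coarse interval:
  // takes a snapshot and resets the running totals, as the BenchmarkAgent does.
  std::map<std::string, EventMetrics> snapshot_and_reset() {
    std::lock_guard<std::mutex> lock(mutex_);
    std::map<std::string, EventMetrics> out;
    out.swap(metrics_);
    return out;
  }

private:
  std::mutex mutex_;
  std::map<std::string, EventMetrics> metrics_;
};

int main() {
  MetricsRegistry registry;
  registry.record("TRACK", /*transmission_us=*/850, /*processing_us=*/4200);
  registry.record("TRACK", 910, 3900);
  for (const auto& entry : registry.snapshot_and_reset())
    std::cout << entry.first << ": " << entry.second.count << " events, avg processing "
              << entry.second.total_processing_us / entry.second.count << " us\n";
}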
3.2.3 Simplify Characterization of Application
Workload
Problem. Some users of CoWorkErs are likely to be systems engineers or architects, who may not be accustomed
to programming with third-generation languages, such as
C++ or Java, or configuration languages, such as XML. It
is therefore important for CUTS to offer alternatives to
programmatic interfaces and dense configuration files so it
will be accessible to a key class of users.
Solution. Replace programmatic and file-based configuration with visual modeling. CUTS allows users to design simulated
applications entirely through user-friendly visual models.
In particular, the CoSMIC and WML DSMLs allow users
to create structural and behavioral models of their applications without manually editing configuration files or third-generation language code. Deployment and analysis of the
application is provided through the intuitive BenchmarkManagerWeb-interface.
3.2.4 Simplify Customization
Problem. The CoWorkEr is capable of emulating four categories of core application work, i.e., CPU, memory, database, and network resource utilization, but the need for more customized behavior may arise for particular types of enterprise DRE systems. The design of CUTS should therefore allow for easy expansion of this basic repertoire.
Solution. Use modular, interface-compatible components that can be replaced individually. In the spirit of CCM, CoWorkErs employ a modular design. In particular, any of the monolithic components making up the CoWorkEr assembly shown in Figure 2 can be replaced with a customized component implementing the same interface, without requiring modification or recompilation of other components. For example, it is straightforward to replace the default CPUWorker with an FCPUWorker that performs only floating-point arithmetic computations. In addition, GME's convenient inheritance support allows easy swapping of components. Within a CoSMIC model, therefore, components can be swapped globally or for individual CoWorkErs, as the sketch below illustrates.
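The following sketch illustrates the kind of interface-compatible substitution described above, assuming a hypothetical CpuWorkerInterface; the actual CCM interfaces and the FCPUWorker implementation in CUTS may differ.

// Illustrative sketch of swapping a worker behind a common interface -- not the actual CUTS components.
#include <cstddef>
#include <iostream>
#include <memory>

// Hypothetical interface that any CPU worker variant must implement.
class CpuWorkerInterface {
public:
  virtual ~CpuWorkerInterface() = default;
  virtual void perform(std::size_t repetitions) const = 0;
};

// Default worker: integer-oriented busy work.
class DefaultCpuWorker : public CpuWorkerInterface {
public:
  void perform(std::size_t repetitions) const override {
    volatile long acc = 0;
    for (std::size_t r = 0; r < repetitions; ++r)
      for (int i = 0; i < 10000; ++i) acc += i % 7;
  }
};

// Customized worker: floating-point-only arithmetic, in the spirit of the FCPUWorker example.
class FloatingPointCpuWorker : public CpuWorkerInterface {
public:
  void perform(std::size_t repetitions) const override {
    volatile double acc = 1.0;
    for (std::size_t r = 0; r < repetitions; ++r)
      for (int i = 0; i < 10000; ++i) acc = acc * 1.0000001 + 0.5;
  }
};

// A CoWorkEr-like harness depends only on the interface, so either worker
// can be plugged in without recompiling the harness.
void run_workload(const CpuWorkerInterface& worker, std::size_t repetitions) {
  worker.perform(repetitions);
  std::cout << "performed " << repetitions << " repetitions\n";
}

int main() {
  std::unique_ptr<CpuWorkerInterface> worker = std::make_unique<FloatingPointCpuWorker>();
  run_workload(*worker, 25);  // swap in DefaultCpuWorker here without touching run_workload
}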
3.2.5 Precise Analysis of Performance
Problem. If a particular emulation shows that a proposed configuration and deployment of enterprise DRE system components will not meet QoS expectations, CUTS users must be able to pinpoint the source of the problem to correct it.
Solution. Provide drill-down views from critical-path summaries to per-CoWorkEr and per-work-category statistics. In addition to providing a high-level, graphical representation of observed performance vs. deadlines along a critical path, the CUTS BenchmarkManagerWeb-interface allows users to view statistics for individual CoWorkErs. A tabular display allows users to view summary statistics for operational strings of CoWorkErs simultaneously, whereas detailed graphs support scrutiny of an individual CoWorkEr's performance over time. Statistics for processing time can also be subdivided to reflect the four categories of work, thereby allowing analysts to determine whether a CoWorkEr's QoS targets are being missed due to its reliance upon a sluggish database, paging due to excessive memory allocation, saturation of network bandwidth, etc.
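A small sketch of the kind of per-category breakdown this drill-down enables, assuming hypothetical per-CoWorkEr timing totals; in CUTS such summaries are computed by the BenchmarkManagerWeb-interface from the BenchmarkDatabase rather than in code like this.

// Illustrative sketch: attributing a CoWorkEr's processing time to its four work categories.
#include <iomanip>
#include <iostream>
#include <map>
#include <string>

int main() {
  // Hypothetical accumulated processing time (ms) for one CoWorkEr over a test run,
  // split across the four categories of work a CoWorkEr can perform.
  std::map<std::string, double> time_ms = {
    {"cpu", 310.0}, {"memory", 45.0}, {"database", 1240.0}, {"publication", 95.0}};

  double total = 0.0;
  for (const auto& kv : time_ms) total += kv.second;

  std::cout << std::fixed << std::setprecision(1);
  for (const auto& kv : time_ms)
    std::cout << std::setw(12) << kv.first << ": " << std::setw(7) << kv.second
              << " ms (" << 100.0 * kv.second / total << "%)\n";

  // A dominant share (here, database work) points the analyst at the likely
  // cause of a missed QoS target, e.g., a sluggish database or excessive paging.
}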
4 Applying CUTS to Evaluate Enterprise DRE Systems
This section describes the design and results of experiments that use CUTS to evaluate a representative enterprise DRE system.
4.1 Overview of the Experiment
ACME Defense Systems (ACMEDS) is designing their next sensor-to-shooter product line, called DAnCER. It consists of 2 sensor, 2 planner, 1 configuration, 1 error recovery, and 2 shooter components. The system requires that information detected by the sensors be transmitted to each planner in sequence, then to the configuration component, and lastly to the appropriate shooter to perform an action. Moreover, its deployment will be distributed across 3 nodes. Figure 4 shows the end-to-end layout of the system, with the vertical double lines illustrating how the components are deployed across the three nodes.
[image to be replaced]
Figure 4. CoSMIC Model of DAnCER Showing the Components and Their Interconnections
Based on the current development schedule for DAnCER, integration will not occur for at least another 6 months. The new system, however, will use artifacts similar to those of past product lines, so ACMEDS already has an understanding of each component's behavior in DAnCER. The downside is that they do not know how overall performance will be affected by the system's new infrastructure. In the past, ACMEDS waited until the integration phase to begin benchmarking the system, only to learn that none of their specifications were being met, which prolonged the expected completion date and became costly. To prevent the same scenario from happening again, ACMEDS decides to use CUTS to benchmark the system for the following reasons:
1. Determine if their current configuration and deployment strategies of the components in DAnCER will allow them to meet the specifications.
2. If test 1 fails, determine which configuration and deployment will allow them to meet the required specifications.
3. If test 1 passes, test the scalability of DAnCER, e.g., how many replicas of DAnCER can be hosted on their 3-node configuration and still meet the requirements.
ACMEDS believes that determining as much of this information as possible prior to the integration phase will increase their productivity by decreasing the amount of time spent integrating the real artifacts with the infrastructure.
The deadline for completing the critical path required by the specifications, from receiving data on sensor ed-2 to performing an action with shooter eff-3, is 400ms [will probably decrease]. Table 1 outlines the predicted behavior of each component in DAnCER, which was defined using WML.
Ed-1 WLG*:
  Track Event Spec; Periodic Spec; Track Event Spec
  ALLOC: 35 KB; CPU: 15 reps; DBASE: 30 operations; CPU: 30 reps; PUBLISH: TRACK – SIZE 486; DEALLOC: 35 KB
Ed-2 WLG:
  ALLOC: 35 KB; CPU: 15 reps; DBASE: 30 operations; CPU: 30 reps; PUBLISH: TRACK – SIZE 486; DEALLOC: 35 KB
Plan-1 WLG*:
  ALLOC: 30 KB; DBASE: 55 operations; CPU: 45 reps; PUBLISH: ASSESMENT – SIZE 100; DEALLOC: 30 KB
Plan-3 WLG*:
  PUBLISH: COMMAND – SIZE 24
  ALLOC: 30 KB; DBASE: 55 operations; CPU: 45 reps; PUBLISH: ASSESMENT – SIZE 132; DEALLOC: 30 KB
ConfigOptimization WLG*:
  ALLOC: 1 KB; DBASE: 25 operations; CPU: 1 rep; DBASE: 10 operations; DEALLOC: 1 KB
  Assessment Event Spec: ALLOC: 5 KB; DBASE: 40 operations; CPU: 1 rep; PUBLISH: COMMAND – SIZE 128; DEALLOC: 5 KB
  Status Event Spec: DBASE: 1 operation
  Startup Spec
Error Recovery (ER) WLG:
  ALLOC: 1 KB; DBASE: 25 operations; CPU: 1 rep; DBASE: 10 operations; DEALLOC: 1 KB
  Assessment Event Spec: ALLOC: 5 KB; DBASE: 40 operations; CPU: 1 rep; PUBLISH: COMMAND – SIZE 128; DEALLOC: 5 KB
  Status Event Spec: DBASE: 1 operation
  Startup Spec
Eff-1* & Eff-2 WLG:
  CPU: 25 reps; PUBLISH: STATUS – SIZE 32
  Command Event Spec: ALLOC: 30 KB; CPU: 25 reps
  Periodic Spec: PUBLISH: STATUS – SIZE 256; DEALLOC: 30 KB
Table 1. Expected Behavior of Each Component in DAnCER, with * Signifying a Critical Path Component
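The sketch below shows one way a Table 1-style behavior specification could be encoded as data and replayed by a CoWorkEr-like emulator. The Ed-1 values are taken from Table 1, but the encoding itself is an assumption for illustration, not the WML-generated characterization format.

// Illustrative sketch: a declarative workload sequence in the style of Table 1 (Ed-1 WLG).
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

enum class Action { Alloc, Cpu, Dbase, Publish, Dealloc };

struct WorkloadStep {
  Action action;
  std::size_t amount;      // KB for Alloc/Dealloc, reps for Cpu, operations for Dbase, bytes for Publish
  std::string event_type;  // only used for Publish
};

int main() {
  // Ed-1's predicted behavior from Table 1.
  std::vector<WorkloadStep> ed1 = {
    {Action::Alloc, 35, ""}, {Action::Cpu, 15, ""}, {Action::Dbase, 30, ""},
    {Action::Cpu, 30, ""},   {Action::Publish, 486, "TRACK"}, {Action::Dealloc, 35, ""}};

  // A CoWorkEr-like emulator would dispatch each step to the matching worker;
  // here we only print the plan.
  for (const WorkloadStep& step : ed1) {
    switch (step.action) {
      case Action::Alloc:   std::cout << "allocate " << step.amount << " KB\n"; break;
      case Action::Cpu:     std::cout << "cpu work: " << step.amount << " reps\n"; break;
      case Action::Dbase:   std::cout << "database: " << step.amount << " operations\n"; break;
      case Action::Publish: std::cout << "publish " << step.event_type << " (" << step.amount << " bytes)\n"; break;
      case Action::Dealloc: std::cout << "deallocate " << step.amount << " KB\n"; break;
    }
  }
}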
4.2 Empirical Results
[disregard this section; running test as we speak]
We now discuss the results of running the experiment in a 3-node environment for 15 minutes. In Figures 5 and 6, the x-axis represents the time in milliseconds to transmit a message along the critical path, and the time spent in each component and in transmitting each event can be distinguished by their respective colors. Figure 5 shows the worst-case critical-path time observed and Figure 6 the average critical-path time, both measured against the 500ms deadline.
Figure 5. Worst-case Critical Path with a Time of 380ms against a Deadline of 500ms
Figure 6. Average Critical Path with a Time of 142ms against a Deadline of 500ms
Figure 7 shows the performance of the plan-1 component and of an assessment message sent from the plan-1 to the ConfigOptimization component. The x-axis corresponds to the number of samples at the time of recording and the y-axis corresponds to the average time. By using this built-in feature of the test suite, each component and event can be examined in more depth to determine whether it is behaving as expected, or as a point of origin for investigating any unusual behavior.
Figure 7. Performance Graph for Component Plan-1 and Assessment Event Transmission
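To illustrate how critical-path numbers like those above can be derived, the sketch below computes worst-case and average end-to-end times from a set of samples and checks them against a deadline; the sample values are hypothetical and the computation is simplified relative to what the BenchmarkManagerWeb-interface does.

// Illustrative sketch: worst-case and average critical-path time from end-to-end samples.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  const double deadline_ms = 500.0;

  // Hypothetical end-to-end samples (ms) for the critical path from
  // sensor ed-2 through the planners and ConfigOptimization to shooter eff-3.
  std::vector<double> end_to_end_ms = {131.0, 142.5, 158.0, 380.0, 140.2, 149.7};

  double worst = *std::max_element(end_to_end_ms.begin(), end_to_end_ms.end());
  double avg = std::accumulate(end_to_end_ms.begin(), end_to_end_ms.end(), 0.0)
               / end_to_end_ms.size();

  std::cout << "worst-case: " << worst << " ms, average: " << avg << " ms\n";
  std::cout << (worst <= deadline_ms ? "deadline met in the worst case\n"
                                     : "deadline missed in the worst case\n");
}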
4.3 Viewing and Interpreting the Results
[disregard this section; running test as we speak]
Based on the experiment and the results illustrated in Section 4.2, ACMEDS can meet a deadline of 500ms with the current deployment configuration of DAnCER. In addition to viewing the results for the overall system with respect to end-to-end performance, the BMW also allows viewing the performance of individual components and of message transmission within the system in the average, best, and worst cases.
5 Related work
In the R&D area of SOAs, several technologies exist for
configuring and deploying components within enterprise
DRE systems. This section discusses reasons for choosing
DAnCE as the implementation framework for the workload generators with respect to other research efforts.
Proactive [ ] is one framework for the deployment and configuration of components, but it is designed for Java applications running on JVMs. Because we wanted to leverage open-source middleware, using Proactive instead of DAnCE would have reduced the amount of open-source middleware available and limited us to the J2EE environment.
Globus Toolkit [ ] is another framework proposed for handling deployment and configuration of components. Unlike
DAnCE, this framework does not provide a DSML for
modeling various concerns of an enterprise DRE system,
or for validating the system before attempting to deploy it.
Lastly, the DAnCE framework conforms to the OMG
D&C standards, which allows it to leverage other R&D
efforts based on the OMG D&C specifications, such as
OpenCCM.
6 Concluding Remarks
In this paper, we described the motivation for the Component Workload Emulator (CoWorkEr) Utilization Test
Suite (CUTS); the design and implementation of CUTS,
along with the challenges we encountered and our solutions; and a representative experiment for benchmarking a
system using CUTS. During our experience designing,
developing, and using CUTS, we learned the following:

• Using DAnCE's implementation of CCM allowed us to focus more on designing the "business" logic of the CCM components in CUTS, instead of focusing on configuration and deployment issues.
• Using CUTS allowed us to rapidly create and exercise deployment strategies and benchmarking tests, which would have taken extended periods of time if done manually, or without a majority of the functionality provided by CUTS.
• Combining other SOA middleware platforms, such as .NET and web services, with CCM allowed us to leverage other utilities to help with the deployment of tests and with the analysis and presentation of benchmarking data within CUTS.
CUTS also serves as a vehicle for exploring advanced
software platforms commonly found in enterprise computing environments. Although this presents interesting opportunities for the analysis of feature, performance, and
interoperability concerns when combining CCM applications with other SOA middleware platforms, such as
.NET and Web Services, we focused on the core emulation
functionality provided by CUTS.
Based on our experience, we believe CUTS will help decrease the
time system engineers spend in the integration phase. Instead of waiting until full integration to begin benchmarking and testing, CUTS allows system engineers to perform
benchmarking and testing on their target infrastructure
using CCM components, which can be replaced with “real” artifacts as they become available. Optimal configuration and deployment strategies can therefore be evaluated
prior to full integration, which will cut down on time
spent in the integration phase.
[James, can you please briefly summarize future work on
CUTS and related topics?]
References