A Lightweight Application Hosting Environment for Grid Computing

P. V. Coveney, S. K. Sadiq, R. S. Saksena, M. Thyveetil, and S. J. Zasada
Centre for Computational Science, Department of Chemistry,
University College London, Christopher Ingold Laboratories,
20 Gordon Street, London, WC1H 0AJ
M. Mc Keown, and S. Pickles
Manchester Computing, Kilburn Building,
The University of Manchester, Oxford Road,
Manchester, M13 9PL
Abstract
Current grid computing [1, 2] technologies have often been seen as being too heavyweight
and unwieldy from a client perspective, requiring complicated installation and configuration
steps to be taken that are beyond the ability of most end users. This has led many of the
people who would benefit most from grid technology, namely application scientists, to avoid
using it. In response to this we have developed the Application Hosting Environment, a
lightweight, easily deployable environment designed to allow the scientist to quickly and easily
run unmodified applications on remote grid resources. We do this by building a layer of
middleware on top of existing technologies such as Globus, and expose the functionality as web
services using the WSRF::Lite toolkit to manage the running application’s state. The scientist
can start and manage the application he wants to use via these services, with the extra layer
of middleware abstracting the details of the particular underlying grid middleware in use. The
resulting system provides a great deal of flexibility, allowing clients to be developed for a range
of devices from PDAs to desktop machines, and command line clients which can be scripted
to produce complicated application workflows.
I Introduction
We define grid computing as distributed computing conducted transparently by disparate organisations across multiple administrative domains. Fundamental to the inter-institutional sharing of resources in a grid is the grid middleware, that is, the software that allows an institution to share its resources in a seamless and uniform way.

While many strides have been made in the field of grid middleware technology, such as [3, 4], the prospect of a heterogeneous, on-demand computational grid as ubiquitous as the electrical power grid is still a long way off. Part of the problem has been the difficulty for the end user of deploying and using many of the current middleware solutions, which has led to reluctance amongst some researchers to actively embrace grid technology [5].

Many of the current problematic grid middleware solutions can be characterised as what we define as ‘heavyweight’, that is they display some or all of the following features:

i. the client software is difficult to configure or install, very often requiring an experienced system administrator to do so.

ii. they are dependent on lots of supporting software being installed, particularly libraries that are not likely to already be installed on the resource, or modified versions of common libraries.

iii. they require non-standard ports to be opened on the firewall, requiring the intervention of a network administrator.

iv. they have a high barrier to entry, meaning that potential users have to develop a new skill set before they are able to use the technology productively.

To address these deficiencies there is now much attention focused on ‘lightweight’ middleware solutions such as [6] which attempt to lower the barrier of entry for users of the grid.

II The Application Hosting Environment

In response to the issues raised above we have developed the Application Hosting Environment (AHE), a lightweight, WSRF [7] compliant, web services based environment for hosting scientific applications on the grid. The AHE
allows scientists to quickly and easily run unmodified, legacy applications on grid resources,
managing the transfer of files to and from the
grid resource and allowing the user to monitor
the status of the application. The philosophy of
the AHE is based on the fact that very often a
group of researchers will all want to access the
same application, but not all of them will possess
the skill or inclination to install the application
on a remote grid resource. In the AHE, an expert user installs the application and configures
the AHE server, so that all participating users
can share the same application. This draws a
parallel with many different communities that
use parallel applications on high performance
compute resources, such as the UK Collaborative Computational Projects (CCPs) [8], where a
group of expert users/developers develop a code,
which they then share with the end user community. In the AHE model, once the expert user
has configured the AHE to share their application, end users can use clients installed on their
desktop workstations to launch and monitor the
application across a variety of different computational resources.
The AHE focuses on applications, not jobs,
with the application instance being the central
entity. We define an application as an entity
that can be composed of multiple computational
jobs; examples of applications are (a) a simulation that consists of two coupled models which
may require two jobs to instantiate it and (b) a
steerable simulation that requires both the simulation code itself and a steering web service to
be instantiated. Currently the AHE has a one to
one relationship between applications and jobs,
but this restriction will be removed in a future
release once we have more experience in applying
these concepts to scenarios (a) and (b) detailed
above.
III Design Considerations
The problems associated with ‘heavyweight’
middleware solutions described above have
greatly influenced the design of the Application
Hosting Environment. Specifically, they have
led to the following constraints on the AHE design:
• the user’s machine does not have to have
client software installed to talk directly to
the middleware on the target grid resource.
Instead the AHE client provides a uniform
interface to multiple grid middlewares.
• the client machine is behind a firewall that uses network address translation (NAT) [27]. The client cannot therefore accept inbound network connections, and has to poll the AHE server to find the status of an application instance; a minimal polling sketch in the style of our Java clients is given after this list.
• the client machine needs to be able to upload input files to and download output
files from a grid resource, but does not
have GridFTP client software installed.
An intermediate file staging area is therefore used to stage files between the client
and the target grid resource.
• the client has no knowledge of the location
of the application it wants to run on the
target grid resource, and it maintains no
information on specific environment variables that must be set to run the application. All information about an application
and its environment is maintained on the
AHE server.
• the client should not be affected by
changes to a remote grid resource, such as
if its underlying middleware changes from
GT2 to GT4. Since GridSAM is used to
provide an interface to the target grid resource, a change to the underlying middleware used on the resource doesn’t matter,
as long as it is supported by GridSAM.
• the client doesn’t have to be installed on a
single machine; the user can move between
clients on different machines and access
the applications that they have launched.
The user can even use a combination of
different clients, for example using a command line client to launch an application
and a GUI client to monitor it. The client
therefore must maintain no information
about a running application’s state. All
state information is maintained by a central service that is queried by the client.
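To make the consequences of the polling constraint concrete, the following minimal sketch, written in the style of the Java/Apache Axis clients described later, shows a client pulling status from an App WS-Resource until the application finishes. The endpoint, namespace, operation name and status strings are illustrative assumptions rather than the AHE’s actual WSDL contract.

import java.net.URL;
import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

// Polls an App WS-Resource for status. The endpoint, namespace, operation
// name ("Monitor") and status strings are assumptions for illustration;
// the real AHE WSDL defines its own names and types.
public class StatusPoller {
    public static void main(String[] args) throws Exception {
        Call call = (Call) new Service().createCall();
        call.setTargetEndpointAddress(
            new URL("https://ahe.example.org/cgi-bin/AppResource")); // hypothetical
        call.setOperationName(new QName("urn:ahe", "Monitor"));      // assumed name

        while (true) {
            // Outbound HTTPS only, so a client behind NAT needs no open ports
            String status = (String) call.invoke(new Object[] {});
            System.out.println("Application status: " + status);
            if (status.equals("Done") || status.equals("Failed")) break;
            Thread.sleep(30000); // wait between polls
        }
    }
}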
These constraints have led to the design of a
lightweight client for the AHE, which is simple to
install and doesn’t require the user to install any
extra libraries or software. The client is required
to launch and monitor application instances,
and stage files to and from a grid resource. The
AHE server must provide an interface for the
client to launch applications, and must store the
state of application instances centrally. It should
be noted that this design doesn’t remove the
need for middleware solutions such as Globus
on the target grid resource; indeed we provide
an interface to run applications on several different underlying grid middlewares so it is essential that grid resource providers maintain a
supported middleware installation on their machines. What the design does do is simplify the
experience of the end user.
Communication in the AHE is secured using Transport Layer Security (TLS) [28]; our
initial analysis showed that we did not need to
use SOAP Message Level Security (MLS) as our
SOAP messages would not need to pass through
intermediate message processing steps. TLS
is provided by mutually authenticated SSL between the client and the AHE, and between the
AHE and GridSAM. This requires that the AHE
server and associated GridSAM instances have
X.509 certificates supplied by a trusted certificate authority (CA), as do any users connecting to the AHE. When using the AHE to access a computational grid, typically both user
and server certificates will be supplied by the
grid CA that the user is submitting to. Where
proxy certificates are required, for example when
using GridSAM to submit jobs via Globus, a
MyProxy server is used to store proxy certificates uploaded by the user, which are retrieved
by GridSAM in order to submit the job on the
user’s behalf.
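On the Java client side, mutually authenticated SSL of this kind is conventionally configured through the standard JSSE system properties, as in the minimal sketch below; the keystore is assumed to be the one created from the user’s X.509 certificate, and the file names and passwords are placeholders.

// Configures mutually authenticated SSL for the Java client via standard
// JSSE system properties. File names and passwords are placeholders.
public class TlsSetup {
    public static void configure() {
        // Client credential presented to the AHE server during the handshake
        System.setProperty("javax.net.ssl.keyStore", "user-cert.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
        // Trust anchors: the CA certificates of the AHE and GridSAM hosts
        System.setProperty("javax.net.ssl.trustStore", "grid-ca.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }
}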
IV Architecture of the AHE
The AHE represents an application instance
as a stateful WS-Resource [7], the properties of
which include the application instance’s name,
status, input and output files and the target grid
resource that the application has been launched
on. Details of how to launch the application are
maintained on a central service, in order to reduce the complexity of the AHE client.
The design of the AHE has been greatly influenced by WEDS (WSRF-based Environment
for Distributed Simulations) [9], a hosting environment designed for operation primarily within
a single administrative domain. The AHE differs
in that it is designed to operate across multiple
administrative domains seamlessly, but can also
be used to provide a uniform interface to applications deployed on both local HPC machines,
and remote grid resources.
The AHE is based on a number of preexisting grid technologies, principally GridSAM
[10] and WSRF::Lite [11]. WSRF::Lite is a Perl
implementation of the OASIS Web Services Resource Framework specification. It is built using
the Perl SOAP::Lite [12] web services toolkit,
from which it derives its name. WSRF::Lite
provides support for WS-Addressing [13], WS-ResourceProperties [14], WS-ResourceLifetime [15], WS-ServiceGroup [16] and WS-BaseFaults [17]. It also provides support for digitally signing SOAP [18] messages using X.509 digital certificates in accordance with the OASIS WS-Security [19] standard as described in [20].
GridSAM provides a web services interface
for submitting and monitoring computational
jobs managed by a variety of Distributed Resource Managers (DRMs), including Globus [3],
Condor [21] and Sun Grid Engine [22], and runs
in an OMII [23] web services container. Jobs
submitted to GridSAM are described using Job
Submission Description Language (JSDL) [24].
GridSAM uses this description to submit a job
to a local resource, and has a plug-in architecture that allows adapters to be written for different types of resource manager. In contrast
to WEDS, which represents jobs co-located on
the hosting resource, the AHE can submit jobs
to any resource manager for which a GridSAM
plug-in exists.
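For illustration, the following sketch embeds a minimal JSDL-like job description of the kind GridSAM consumes, following the element structure of the JSDL specification [24]; the executable path, argument and staging URI are invented placeholders of the sort the AHE fills in from its per-application template.

// A minimal JSDL-like job description embedded as a Java text block.
// Element names follow the GGF JSDL specification; the concrete values
// are invented placeholders filled in from a per-application template.
public class JsdlExample {
    static final String JSDL = """
        <jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
            xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
          <jsdl:JobDescription>
            <jsdl:Application>
              <jsdl-posix:POSIXApplication>
                <jsdl-posix:Executable>/usr/local/bin/namd2</jsdl-posix:Executable>
                <jsdl-posix:Argument>input.conf</jsdl-posix:Argument>
              </jsdl-posix:POSIXApplication>
            </jsdl:Application>
            <jsdl:DataStaging>
              <jsdl:FileName>input.conf</jsdl:FileName>
              <jsdl:Source>
                <jsdl:URI>https://staging.example.org/run42/input.conf</jsdl:URI>
              </jsdl:Source>
            </jsdl:DataStaging>
          </jsdl:JobDescription>
        </jsdl:JobDefinition>
        """;
}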
Reflecting the flexible philosophy and nature
of Perl, WSRF::Lite allows the developer to host
WS-Resources in a variety of ways, for instance
using the Apache web server or using a standalone WSRF::Lite Container. The AHE has
been designed to run in the Apache [25] container, and has also been successfully deployed
in a modified Tomcat [26] container.
Figure 1 shows the architecture and workflow of the AHE. Briefly, the core components
of the AHE are: the App Server Registry, a
registry of applications hosted in the AHE; the
App Server Factory, a “factory” according to
the Factory pattern [29] used to produce a WS-Resource (the App WS-Resource) that acts as
a representation of the instance of the executing application. The App Server Factory is itself a WSRF WS-Resource that supports the
WS-ResourceProperties operations. The Application Registry is a registry of previously created
App WS-Resources, which the user can query to
find previously launched application instances.
The File Staging Service is a WebDAV [30] file
server which acts as an intermediate staging step
for application input files from the user’s machine to the remote grid resource. We define
the staging of files to the File Staging Service
as “pass by value”, where the file is transferred
from the user’s local machine to the File Stage
Service. The AHE also supports “pass by reference”, where the client supplies a URI to file required by the application. The MyProxy Server
is used to store proxy credentials required by
GridSAM to submit to Globus job queues. As
described above we use GridSAM to provide a
web services compliant front end to remote grid
resources.
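Because WebDAV transfers are carried over plain HTTP, “pass by value” staging amounts to an HTTP PUT against the File Staging Service. The following is a minimal sketch, assuming an illustrative staging URL and that suitable TLS credentials have already been configured.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

// Stages a local input file to the WebDAV File Staging Service with an
// HTTP PUT. The staging URL is an illustrative placeholder.
public class StageFile {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Paths.get("input.conf"));
        URL url = new URL("https://staging.example.org/run42/input.conf"); // hypothetical
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT"); // WebDAV file upload is an HTTP PUT
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(data);
        }
        System.out.println("Staged, HTTP status " + conn.getResponseCode());
    }
}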
Figure 1: The architecture of the Application Hosting Environment
All user interaction is via a client that communicates with the AHE using SOAP messages.
The workflow of launching an application on a
grid resource running the Globus middleware
(shown in figure 1) is as follows: the user retrieves
a list of App Server Factory URIs from the AHE
(1). There is an application server for each application configured in the AHE. This step is
optional as the user may have already cached
the URI of the App Server Factories he wants to
use. The user issues a “Prepare” message (2);
this causes an App WS-Resource to be created
(3) which represents this instance of the application’s execution. To start an application instance the user goes through the sequence: Prepare →Upload Input Files →Start, where Start
actually causes the application to start executing. Next the user uploads the input files to the
intermediate file staging service using the WebDAV protocol (4).
The user generates and uploads a proxy credential to the MyProxy server (5). The proxy
credential is generated from the X.509 certificate
issued by the user’s grid certificate authority.
This step is optional, as the user may have previously uploaded a credential that is still valid.
Once the user has uploaded all of the input files
he sends the “Start” message to the App WS-Resource to start the application running (6).
The Start message contains the locations of the
files to be staged in to and out from the target
grid resource, along with details of the user’s
proxy credential and any arguments that the
user wishes to pass to the application. The App
WS-Resource maintains a registry of instantiated applications. Issuing a prepare message
causes a new entry to be added to the registry
(7). A “Destroy” command sent to the App WS-Resource causes the corresponding entry to be
removed from the registry.
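Seen from the client, the launch sequence is a pair of SOAP calls bracketing the file upload. The sketch below, again in the style of an Apache Axis client, assumes illustrative operation names, namespaces and endpoints, and flattens the real message payloads to plain strings.

import java.net.URL;
import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

// Sketch of the Prepare -> upload input files -> Start sequence. Operation
// names follow the messages described in the text; endpoints, namespace and
// payload types are assumptions, not the real AHE WSDL.
public class LaunchApp {
    public static void main(String[] args) throws Exception {
        Service service = new Service();

        // (2) "Prepare": ask the App Server Factory for a new App WS-Resource
        Call prepare = (Call) service.createCall();
        prepare.setTargetEndpointAddress(
            new URL("https://ahe.example.org/cgi-bin/AppServerFactory")); // hypothetical
        prepare.setOperationName(new QName("urn:ahe", "Prepare"));
        String resourceEndpoint = (String) prepare.invoke(new Object[] {});

        // (4) input files are uploaded to the WebDAV staging area here
        //     (see the HTTP PUT sketch earlier)

        // (6) "Start": tell the App WS-Resource to submit the job, passing
        //     staging locations and proxy details as the text describes
        Call start = (Call) service.createCall();
        start.setTargetEndpointAddress(new URL(resourceEndpoint));
        start.setOperationName(new QName("urn:ahe", "Start"));
        start.invoke(new Object[] { "https://staging.example.org/run42/" });
    }
}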
The App WS-Resource creates a JSDL document for a specific application instance, using its
configuration file to determine where the application is located on the resource. The JSDL is
sent to the GridSAM instance acting as an interface
to the grid resource (8), and GridSAM handles
authentication using the user’s proxy certificate.
GridSAM retrieves the user’s proxy credential
from the MyProxy server (9) which it uses to
transfer any input files required to run the application from the intermediate File Staging Service to the grid resource (10), and to actually
submit the job to a Globus back-end.
The user can send Command messages to
the App WS-Resource to monitor the application instance’s progress (11); for example the
user can send a “Monitor” message to check on
the application’s status. The App WS-Resource
queries the GridSAM instance on behalf of the
user to update state information. The user can
also send “Terminate” and “Destroy” messages
to halt the application’s execution and destroy
the App WS-Resource respectively. GridSAM
submits the job to the target grid resource and
the job completes. GridSAM then moves the
output files back to the file staging locations
that were specified in the JSDL document (12).
Once the job is complete the user can retrieve
any output files from the application from the
File Staging Service to their local machine. The
user can also query the Application Registry to
find the end point references of jobs that have
been previously prepared (14).
The AHE is designed to support interaction with a variety of different clients. The clients we
have developed are implemented in Java using
the Apache Axis [32] web services toolkit. We
have developed both GUI and command line
clients from the same Java codebase. The GUI
client uses a wizard to guide a user through the
steps of starting their application instance. The
wizard allows users to specify constraints for the
application, such as the number of processors to
use, choose a target grid resource to run their
application on, stage all required input files to
the grid resource, specify any extra arguments
for the simulation, and set it running.
V AHE Deployment

As described above the AHE is implemented as a client/server model. The client is designed to be easily deployed by an end user, without having to install any supporting software. The server is designed to be deployed and configured by an expert user, who installs and configures applications on behalf of other users.

Due to the reliance on WSRF::Lite, the AHE server is developed in Perl, and is hosted in a container such as Apache or Tomcat. The actual AHE services are an ensemble of Perl scripts that are deployed as CGI scripts in the hosting container. To install the AHE server, the expert user must download the AHE package and configure their container appropriately. The AHE server uses a PostgreSQL [31] database to store the state information of the App WS-Resources, which must also be configured by the expert user. We assume that a GridSAM instance has been configured for each resource that the AHE can submit to.

To host an application in the AHE, the expert user must first install and configure it on the target grid resource. The expert user then configures the location and settings of the application on the AHE server and creates a JSDL template document for the application and the resource. This can be done by cloning a pre-existing JSDL template. To complete the installation the expert user runs a script to repopulate the Application Server Registry; the AHE can be updated dynamically and doesn’t require restarting when a new application is added.

To install the AHE clients all an end user need do is download and extract the client, load their X.509 certificate into a Java keystore using a provided script and set an environment variable to point to the location of the clients. The user also has to configure their client with the endpoints of the App Server Registry and Application Registry, and the URL of their file staging service, all supplied by their AHE server administrator.

The AHE client attempts to discover which files need to be staged to and from the resource by parsing the application’s configuration file. It features a plug-in architecture which allows new configuration file parsers to be developed for any application that is to be hosted in the AHE. The parser will also rewrite the user’s application configuration file, removing any relative paths, so that the application can be run on the target grid resource. If no plug-in is available for a certain application, then the user can specify input and output files manually.

Once an application instance has been prepared and submitted, the AHE GUI client allows the user to monitor the state of the application by polling its associated App WS-Resource. After the application has finished, the user can stage the application’s output files back to their local machine using the GUI client. The client also gives the user the ability to terminate an application while it is running on a grid resource, and to destroy an application instance, removing it from the AHE’s application registry. In addition to the GUI client a set of command line clients are available which provide the same functionality as the GUI. The command line clients have the advantage that they can be called from a script to produce complex workflows with multiple application executions.
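As an illustration of such scripting, the following sketch chains two runs so that the second starts only when the first has finished. The client names and arguments (ahe-prepare, ahe-start, ahe-wait) are invented stand-ins; the real command line clients’ names and flags may differ.

import java.util.List;

// Drives hypothetical AHE command line clients from Java to build a simple
// chained workflow: each stage must finish before the next is submitted.
public class ChainedRuns {
    static int run(String... cmd) throws Exception {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        for (String conf : List.of("equil1.conf", "equil2.conf")) {
            run("ahe-prepare", "namd"); // create an App WS-Resource (assumed name)
            run("ahe-start", conf);     // stage files and submit (assumed name)
            run("ahe-wait");            // block until this stage ends (assumed name)
            // output of this stage becomes the input of the next
        }
    }
}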
VI User Experience
We have successfully used the AHE to deploy two parallel molecular dynamics codes,
LAMMPS [33] and NAMD [34]. These applications have been used to conduct production
simulations on both the UK National Grid Service (NGS) and the US TeraGrid. There follows
a discussion of two different use cases where the
AHE has been used to quickly and easily run
simulations using the grid.
A NAMD
Users often require the ability to launch multiple instances of the same or similar simulations
that vary in particular attributes that affect the
outcome of the simulation. An example of this
is ‘ensemble’ molecular dynamics simulations of
biological molecules in which the starting energies of various atoms are randomized to allow for
conformational sampling of the biological structure through multiple simulations. Another example is Thermodynamic Integration (TI) techniques that calculate binding affinities between
biological molecules. Given that enough grid resources are available, multiple jobs each utilizing
a slightly different configuration can be launched
and executed simultaneously to provide the necessary results. Prior to the AHE, the problems
with implementing such techniques have been
the tediousness of repetitive job submission coupled with the monitoring of job status across
multiple grid resources, as well as the time-consuming act of shepherding input and output files
around from resource to resource.
The AHE circumvents these problems by presenting a uniform interface to multiple resources: multiple jobs can be submitted by scripting the AHE command line clients, and each job can be monitored through the same interface. Furthermore, all files required for a job can be automatically staged to a set of desired resources, and output files retrieved upon job completion.
Some molecular dynamics simulations also
require complex equilibration protocols that
evolve a biological molecule from an available
starting structure to an equilibrium state at
which relevant data can be collated. Such protocols usually involve a series of chained simulations where the output of one simulation is
fed into the input of the next. Whilst some
conventional methods such as ssh can be employed to afford some automation of chained job
submission, scripting the AHE command line
clients provides a simpler and quicker mechanism through which chaining can be distributed
seamlessly across multiple grid resources.
B LAMMPS
The microscopic and macroscopic behaviour
of large-scale anionic and cationic clay nanocomposite systems can be modeled using molecular
dynamics (MD) techniques. The use of computer simulations to model these sorts of systems has proved to be an essential adjunct to
experimental techniques [35]. The clay systems
which we simulate are those of the smectite clay,
montmorillonite and the layered double hydroxide, hydrotalcite. Clays such as these form a
sheet-like (layered) structure, which can intercalate molecules within their layers. Whilst useful
information about the intercalated species can
be obtained by running small-scale simulations,
finite size effects can be explored by increasing
the model size.
LAMMPS is an example of a widely used MD code which does not provide steering or visualization functionality. We have integrated the RealityGrid Steering system [36, 37] into LAMMPS in order to introduce these features. The RealityGrid Steering system was designed to allow such legacy codes to be fully grid enabled.
This means that the steering system allows applications to be deployed on a computational
grid using the RealityGrid launcher GUI, which
then can be steered using a steering client. Further integration of the steering library into a visualizer means that the application can transmit
its data to a visualization service. The visualizer
itself can be launched on a separate machine to
that of the application, and communication is
carried out over sockets.
The RealityGrid launcher was built to manage steerable applications in the RealityGrid
computational steering framework. To enable a
scientist to launch, steer and visualize a simulation on separate grid resources, the launcher has
to submit jobs for simulation and visualization,
start a variety of supporting services, and put
all these loosely coupled components in communication with each other. In doing this it relied
on the presence of grid client software (actually
Globus commands invoked through customized
scripts) on the end-user’s machine. This approach possesses several of the drawbacks discussed in this paper, all of which increase the
barrier to uptake. These include:
• deep software dependencies make the
launcher heavyweight.
• the placement in the client of (customizable) logic to orchestrate the distributed components implicates the end-user in ongoing maintenance of the client’s configuration (consider the difficulty of adding a new application or new resource, especially one operating a different middleware, to the set that the user can access).
• the client needs to be “attached” to the
grid in order to launch and monitor jobs
and retrieve results, which decreases client
mobility.
The AHE approach alleviates these difficulties
by moving as much of the complexity as possible into the service layer. The AHE decomposes
the target audience into expert and end-users,
where the expert user installs, configures and
maintains the AHE server, and the end-users
need simply to download the ready-to-go AHE
client. The client itself becomes thinner, and
with a reduced set of software dependencies is
easier to install. All state persistence occurs at
the service layer, which increases client mobility. Architecturally, the AHE is akin to a portal, but one where the client is not constrained
to be a Web browser, increasing the flexibility
of what the client can do, and permitting programmatic access, which allows power users to
construct lightweight workflows through scripting languages.
VII Summary
By narrowing the focus of the AHE middleware to a small set of applications that a group
of scientists will typically want to use, the task of
launching and managing applications on a grid is
greatly simplified. This translates to a smoother
end user experience, removing many of the barriers that have previously deterred scientists from
getting involved in grid computing. In a production environment we have found the AHE to be a
useful way of providing a single interface to disparate grid resources, such as machines hosted
on the NGS and TeraGrid.
By representing the execution of an application as a stateful web service, the AHE can easily
be built on top of to form systems of arbitrary
complexity, beyond its original design. For example, a BPEL engine could be developed to allow users to orchestrate the workflow of applications using the Business Process Execution Language. Employing a graphical BPEL workflow
designer would ease the creation of workflows
by users not comfortable with creating scripts
to call the command line clients, which is something we hope to look at in the future.
In future we also hope to be able to use a
GridSAM connector to the Unicore middleware
to allow the AHE to submit jobs to the DEISA
grid. By providing a uniform interface to these
different back end middlewares, the AHE will
provide a truly interoperable grid from the user’s
perspective. We also plan to integrate support
for the RealityGrid steering framework into the
AHE, so that starting an application which is
marked as steerable automatically starts all the
necessary steering services, and also to extend
the AHE to support multi-part applications such
as coupled models. The end-user still deals with
a single application, while the complexity of
managing multiple constituent jobs is delegated
to the service layer.
VIII Acknowledgements
The development of the AHE is funded by
the projects “RealityGrid” (GR/R67699) and
“Rapid Prototyping of Usable Grid Middleware”
(GR/T27488/01), and also by OMII under the
Managed Programme RAHWL project. The
AHE can be obtained from the RealityGrid website: http://www.realitygrid.org/AHE.
References
[1] P. V. Coveney, editor. Scientific Grid Computing. Phil. Trans. R. Soc. A, 2005.
[2] I. Foster, C. Kesselman, and S. Tuecke. The
anatomy of the grid: Enabling scalable virtual organizations. Intl J. Supercomputer
Applications, 15:3–23, 2001.
[3] http://www.globus.org.
[4] http://www.unicore.org.
[5] J. Chin and P. V. Coveney. Towards tractable toolkits for the grid: a plea for lightweight, useable middleware. Technical report, UK e-Science Technical Report UKeS-2004-01, 2004. http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.

[6] J. Kewley, R. Allen, R. Crouchley, D. Grose, T. van Ark, M. Hayes, and L. Morris. GROWL: A lightweight grid services toolkit and applications. 4th UK e-Science All Hands Meeting, 2005.

[7] S. Graham, A. Karmarkar, J. Mischkinsky, I. Robinson, and I. Sedukhin. Web Services Resource Framework. Technical report, OASIS, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource-1.2-spec-os.pdf.

[8] http://www.ccp.ac.uk/.

[9] P. Coveney, J. Vicary, J. Chin, and M. Harvey. Introducing WEDS: a Web services-based environment for distributed simulation. In P. V. Coveney, editor, Scientific Grid Computing, volume 363, pages 1807–1816. Phil. Trans. R. Soc. A, 2005.

[10] http://gridsam.sourceforge.net.

[11] http://www.sve.man.ac.uk/research/AtoZ/ILCT.

[12] http://www.soaplite.com.

[13] M. Gudgin and M. Hadley. Web Services Addressing, 2005. http://www.w3c.org/TR/2005/WD-ws-addr-core-20050331.

[14] J. Treadwell and S. Graham. Web Services Resource Properties, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource_properties-1.2-spec-os.pdf.

[15] L. Srinivasan and T. Banks. Web Services Resource Lifetime, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource_lifetime-1.2-spec-os.pdf.

[16] T. Maguire, D. Snelling, and T. Banks. Web Services Service Group, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_service_group-1.2-spec-os.pdf.

[17] L. Liu and S. Meder. Web Services Base Faults, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_base_faults-1.2-spec-os.pdf.

[18] M. Gudgin, M. Hadley, N. Mendelsohn, J. Moreau, and H. Frystyk. SOAP version 1.2 part 1: Messaging framework. Technical report, W3C, June 2003. http://www.w3.org/TR/soap12-part1.

[19] A. Nadalin, C. Kaler, P. Hallam-Baker, and R. Monzillo. Web Services Security: SOAP Message Security 1.0, 2006. http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf.

[20] J. Brooke, M. Mc Keown, S. Pickles, and S. Zasada. Implementing WS-Security in Perl. 4th UK e-Science All Hands Meeting, 2005.

[21] http://www.cs.wisc.edu/condor.

[22] http://gridengine.sunsource.net.

[23] http://www.omii.ac.uk.

[24] Job Submission Description Language Specification. GGF. http://forge.gridforum.org/projects/jsdl-wg/document/draft-ggf-jsdl-spec/en/21.

[25] http://www.apache.org.

[26] http://tomcat.apache.org.

[27] IETF. The IP Network Address Translator (NAT). http://www.faqs.org/rfcs/rfc1631.html.

[28] IETF. The TLS Protocol Version 1.0. http://www.faqs.org/rfcs/rfc2246.html.

[29] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[30] IETF. HTTP Extensions for Distributed Authoring – WEBDAV. http://www.faqs.org/rfcs/rfc2518.html.

[31] http://www.postgresql.org/.

[32] http://ws.apache.org/axis.

[33] S. J. Plimpton. Fast parallel algorithms for short-range molecular dynamics. J. Comp. Phys., 117:1–19, 1995.

[34] L. Kale, R. Skeel, M. Bhandarkar, R. Brunner, A. Gursoy, N. Krawetz, J. Phillips, A. Shinozaki, K. Varadarajan, and K. Schulten. NAMD2: Greater scalability for parallel molecular dynamics. J. Comp. Phys., pages 283–312, 1999.

[35] H. C. Greenwell, W. Jones, P. V. Coveney, and S. Stackhouse. On the application of computer simulation techniques to anionic and cationic clays: A materials chemistry perspective. Journal of Materials Chemistry, 16(8):706–723, 2006.

[36] S. M. Pickles, R. Haines, R. L. Pinning, and A. R. Porter. Practical Tools for Computational Steering. 4th UK e-Science All Hands Meeting, 2004.

[37] S. M. Pickles, R. Haines, R. L. Pinning, and A. R. Porter. A practical toolkit for computational steering. In P. V. Coveney, editor, Scientific Grid Computing, volume 363, pages 1843–1853. Phil. Trans. R. Soc. A, 2005.