Grid Enabled Optimisation and Design Search for Engineering (GEODISE)
Professor Simon Cox - University of Southampton
Abstract
Engineering design search and optimisation is the process whereby engineering
modelling and analysis methods are exploited to yield improved designs.
GEODISE will provide grid-based seamless access to an intelligent knowledge
repository, a state-of-the-art collection of optimisation and search tools, industrial
strength analysis codes, and distributed computing and data resources. We will
demonstrate our current grid-based system (Geodise v2.1), and discuss our
experiences of the development and deployment of grid technologies and web
services (including authentication mechanisms, knowledge technologies, grid-based
computing and data access) for this important industrial problem.
Whilst our main focus is on the use of computational fluid dynamics with
BAE SYSTEMS, Rolls Royce, and Fluent, an important aspect of our grid-based
framework is the ability to couple in new modules to enable advanced optimisation
capabilities to be exploited easily in other fields.
GEODISE is being developed by the Universities of Southampton, Oxford and
Manchester in collaboration with other industrial partners working in the domains of
hardware (Intel), software (Microsoft), systems integration (Compusys), knowledge
technologies (Epistemics), and grid-middleware (Condor).
Taming of the Grid: Lessons Learned and Solutions Found in the National Fusion
Collaboratory
Dr. Kate Keahey - Argonne National Laboratory
Abstract
The National Fusion Collaboratory project was created to advance scientific
understanding and innovation in magnetic fusion research by enabling more efficient
use of existing experimental facilities through more effective integration of
experiment, theory, and modeling. Magnetic fusion experiments operate in a pulsed
mode producing plasmas of up to 10 seconds duration every 10 to 20 minutes, with
multiple pulses per experiment. Decisions for changes to the next plasma pulse are
made by analyzing measurements from the previous plasma pulse (hundreds of
megabytes of data) within roughly 15 minutes between pulses. The goal of the
collaboratory is to make this mode of operation more efficient by enabling more
analysis to be done in a short time through access to remote hardware and software
resources.
The underlying vision of the National Fusion Collaboratory is to provide this access by
creating a pool of "network services" running on shared resources on the Fusion Grid.
The specific characteristics of the problem require real-time access to those services
and emphasize quality of service guarantees in interaction with resources. Another
strongly targeted aspect is authorization and enforcement of usage policies on
services and resources, in particular as related to handling sharing constraints and
priority of execution.
The National Fusion Collaboratory was created last year. In the course of our
first year our focus has been on creating a production environment implementing
some of the capabilities outlined above, and on laying a foundation for research in
subsequent years. We are currently planning a release of Grid-based software giving
Fusion scientists remote access to Fusion codes. During this first year we
gave multiple demonstrations of the evolving infrastructure to the Computer Science
as well as Fusion communities and would like to share the lessons we learned in the
process as well as the problems we identified and our ideas on solving those
problems. Some of the issues we would like to discuss include firewalls and other
security issues, authorization and usage policies, and monitoring in the Grid
environment.
Chemical Reactor Performance Simulation - A Grid-Enabled Application
Kenneth A. Bishop, PhD - The University of Kansas
Abstract
Professor Bishop's research group in Chemical Engineering at The University of
Kansas has been simulating the performance (temperature and chemical
composition) of tubular, packed bed, chemical reactors for several years. The fidelity
of the simulations and the computational capability required to achieve the results
have risen steadily with improved computing power and access to it. The combined
demands for heterogeneous reaction modeling and plant scale reactor simulation
drive the effort to use grid-enabled assets in our research.
In an effort to guarantee availability of the computational infrastructure that will be
required during the next five years, we have undertaken to follow and participate (as
early adopters and evaluators) in the grid infrastructure developments being made
under the auspices of the National Computational Science Alliance. A project is
underway that has as its purpose the evaluation of the current set of Grid enabling
tools. The tools are being applied to the execution of small to useful-size problems
using both shared-memory and message-passing architectures.
Our test problem suite involves both pseudo-homogeneous and heterogeneous
reaction mechanism models for the non-adiabatic, non-isothermal partial vapor
phase oxidation of ortho-xylene to produce phthalic anhydride. The governing
equations for the simulation consist of a set of six, simultaneous, non-linear,
parabolic partial differential equations with boundary conditions that are
combinations of constants and algebraic flux expressions.
Both finite-difference-based (Cactus environment) and method-of-lines-based
approaches are implemented in the codes being tested. Base case solutions were
available at the outset of the project. Those early results were run on non-grid-enabled machine configurations that severely limited the spatial resolution of the
solutions and, in some cases, called the transient solutions into question (stiff
equation issues).
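To make the method-of-lines formulation concrete, here is a minimal sketch (not the project's actual code) that discretizes a single one-dimensional parabolic reaction-diffusion equation in space and hands the resulting ODE system to a stiff integrator; the coefficients and boundary conditions are assumptions, and the actual reactor model couples six such equations with flux boundary conditions.

```python
# Minimal method-of-lines sketch for a single parabolic PDE,
#   du/dt = D * d2u/dz2 - k * u,
# with assumed coefficients; the actual reactor model couples six such
# equations with flux boundary conditions.
import numpy as np
from scipy.integrate import solve_ivp

D, k = 1.0e-3, 0.5                 # assumed diffusivity and rate constant
N = 101                            # axial grid points
z = np.linspace(0.0, 1.0, N)
dz = z[1] - z[0]

def rhs(t, u):
    """Spatially discretized right-hand side (central differences)."""
    dudt = np.empty_like(u)
    dudt[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dz**2 - k * u[1:-1]
    dudt[0] = 0.0                  # fixed-value boundaries, for illustration only
    dudt[-1] = 0.0
    return dudt

u0 = np.exp(-100.0 * (z - 0.5) ** 2)                 # initial profile
sol = solve_ivp(rhs, (0.0, 1.0), u0, method="BDF")   # BDF handles stiff systems
print(sol.y[:, -1].max())
```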
The presentation will be a status report on the project which is currently underway.
Special attention will be paid to:
1. Experience in converting our existing application to run on the grid
2. Experience in using grid-enabled applications
3. Thoughts on features that the grid should provide to effectively support grid-enabled applications
Experiences with Applications on the Grid using PACX-MPI
Dr. Matthias Mueller - High Performance Computing Center Stuttgart (HLRS)
Abstract
Although the Grid concept has been widely embraced by science and industry,
the number of applications that are Grid aware is still limited. In this presentation we
will not only show a number of success stories, but also the problems and pitfalls
encountered, together with possible solutions where they exist. Examples of success
stories include participation in various HPC Challenges during Supercomputing conferences,
where resources in Europe, Asia and the U.S.A. have been combined to solve various
scientific problems. Recent experiments include applications running distributed over
the European research network Geant.
The experiences include a broad range of applications from the participating partners
and HLRS. Among them are CFD, DSMC, first-principles electronic structure
simulation, coupled multi-physics simulations, and others.
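Since a Grid-enabled MPI library such as PACX-MPI presents the standard MPI interface to the application, codes of this kind need no Grid-specific calls. Below is a hedged sketch of an unmodified MPI program (using mpi4py purely for brevity; it is not one of the applications mentioned above) of the sort that such a library can distribute across sites.

```python
# Hedged sketch of an ordinary MPI program (mpi4py used only for brevity);
# a Grid-enabled MPI library such as PACX-MPI aims to run programs written
# against the standard MPI interface across machines at different sites.
from mpi4py import MPI

comm = MPI.COMM_WORLD              # in a coupled run this spans all sites
rank = comm.Get_rank()
size = comm.Get_size()

partial = float(rank)              # each rank contributes a partial value
total = comm.allreduce(partial, op=MPI.SUM)   # reduction may cross site links

if rank == 0:
    print(f"{size} processes, global sum = {total}")
```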
One problem for the development of applications for Grid environments is the lack of
tools that end users are familiar with from their regular working environment. This
talk analyzes the requirements for developing, porting and optimizing scientific
applications for Grid environments. A toolbox designed and implemented in the
frame of the DAMIEN project, which closes some of these gaps and supports the end
user during the development of the application and its day-to-day usage in Grid
environments, is presented.
Overview of the Distributed Aircraft Maintenance Environment (DAME) Project
Martyn Fletcher - Distributed Aircraft Maintenance Environment Project
Abstract
DAME is one of six e-Science projects funded by the EPSRC under the current UK e-Science initiative. DAME will develop a generic test bed for distributed diagnostics
that will be built upon grid-enabled technologies and web services. The generic
framework will be deployed in a proof of concept demonstrator in the context of
maintenance applications for civil aerospace engines. A brief overview of the
environment is presented, including a description of the proposed operation using
use cases.
The project will draw together a number of advanced core technologies, within an
integrated web services system (based on Globus), including:
AURA - Advanced Uncertain Reasoning Architecture for pattern matching.
QUOTE - neural network based techniques for real-time aero engine monitoring applications.
CBR - Case-Based Reasoning systems for intelligent decision support.
DAME will address a number of problems associated with the design and
implementation of GRID based on-line decision support systems. The most
significant of these are access to remote resources (experts, computing, knowledge
bases etc), communications between key personnel and actors in the system, control
of information flow and data quality, and the integration of data from diverse global
sources within a strategic decision support system. The web services model for
information brokerage on the Internet offers an inherently pragmatic framework
within which to address these issues.
The partners in the project are the University of York (lead partner), University of
Leeds, University of Oxford, University of Sheffield, Rolls-Royce (Aeroengines),
Data Systems & Solutions and Cybula Ltd.
CFD Grid Research in N*Grid Project
Kum Won Cho
Abstract
The N*Grid project is an initiative of grid research in Korea funded by the Korean
Ministry of Information and Communication (MIC). CFD Grid is a key application research
area in N*Grid. CFD problems involve large sets of partial differential equations,
require a great deal of computing time, and already have good algorithms for distributed
and/or parallel computing. Thus, CFD is a very popular application in many grid projects.
The primary goal of CFD Grid in the N*Grid project is to construct a virtual laboratory for
CFD analysis and design optimization in the framework of the grid. The virtual wind
tunnel, which combines CAD, a mesh generation system, a flow analysis code and
visualization software, provides a high-fidelity performance analysis tool.
A prototype of CFD Grid is built on top of the grid infrastructure. At the bottom of
CFD Grid, Globus provides necessary services such as security and
authentication for access to the resources. A global job scheduler allocates the
requested resources and submits jobs to them through GRAM.
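As a hedged illustration of this submission path (not the project's actual scheduler code), the sketch below builds a Globus RSL request for a hypothetical flow solver and passes it to the standard globusrun client; the host name, executable path and parameters are all assumptions.

```python
# Hedged sketch: submit a hypothetical flow-solver job through GRAM using the
# standard globusrun client. Host name, executable and arguments are
# illustrative assumptions, not the project's actual configuration.
import subprocess

host = "testbed.example.kr"                       # assumed GRAM gatekeeper
rsl = (
    "&(executable=/opt/cfd/flow_solver)"          # hypothetical solver binary
    "(arguments=wing.msh)"                        # hypothetical mesh file
    "(count=16)"                                  # request 16 processes
    "(jobtype=mpi)"
)

# globusrun -o streams the job's stdout/stderr back to the submitting client
result = subprocess.run(["globusrun", "-o", "-r", host, rsl],
                        capture_output=True, text=True)
print(result.stdout)
```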
Several CFD analyses have been computed on the grid testbed in Korea. The results show
the feasibility of the present grid infrastructure. Distributed computing on the testbed was
very successful and performance was satisfactory for practical use. The results of
high-throughput computing on the grid testbed will be included in the final
manuscript.
GridTools: "Customizable command line tools for using Grids"
Ian Kelley - Max Planck Institute for Gravitational Physics
Abstract
One of the promises of Grid computing is to provide easy access, management and
use of distributed heterogeneous computational resources. Advanced user
environments, fronted by Grid portals with user-friendly GUIs, are being developed
to supply such functionality. With a single-login, users will be able to discover and
access their resources, build and run their applications, manage the data they
require and create, and collaborate with colleagues around the world.
However, in the short term, Grid tools and infrastructure are still hard to use for
many users. This is not necessarily due to the Grid software itself, but can also be
caused by variations in installations and support provided at individual sites, as
well as by local differences in security policies such as firewalls. Additionally, there are
currently no standard tools for testbed administrators to manage and test the quality
and reliability of their resources.
In this talk we will describe a simple package of Grid tools which have been
developed to provide users and testbed administrators with straightforward methods
to test and perform basic tasks on their resources. We use the command line tools
distributed with Globus, which access some of the functionality of the Globus Toolkit,
embedded in Perl scripts, along with a very simple database structure. The command
line use provides a familiar environment for application users and developers, and
the straightforward structure also allows them to be easily customized and changed.
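The following is a hedged sketch of this style of wrapper (shown in Python rather than the project's Perl, with assumed host names): it shells out to the standard Globus clients grid-proxy-info and globus-job-run to check for a valid proxy and then run a trivial command on each testbed machine.

```python
# Hedged sketch (Python instead of the project's Perl) of wrapping the Globus
# command-line clients to test basic functionality across a list of resources.
# Host names are assumptions; the real GridTools keep them in a small database.
import subprocess

HOSTS = ["grid1.example.org", "grid2.example.org"]   # hypothetical testbed

def proxy_ok() -> bool:
    """Return True if a Grid proxy credential currently exists."""
    return subprocess.run(["grid-proxy-info", "-exists"]).returncode == 0

def check_host(host: str) -> bool:
    """Run a trivial remote command via globus-job-run to test the gatekeeper."""
    try:
        result = subprocess.run(["globus-job-run", host, "/bin/hostname"],
                                capture_output=True, text=True, timeout=60)
    except subprocess.TimeoutExpired:
        print(f"{host}: TIMED OUT")
        return False
    ok = result.returncode == 0
    print(f"{host}: {'OK ' + result.stdout.strip() if ok else 'FAILED'}")
    return ok

if __name__ == "__main__":
    if not proxy_ok():
        raise SystemExit("No valid proxy; run grid-proxy-init first.")
    for h in HOSTS:
        check_host(h)
```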
We hope that such a simple mechanism to run and test Globus functionality at this
early stage will also make it easier to prototype and evaluate capabilities which
could be built in to further applications, see what problems need to be resolved on
the way, and find what features user interfaces will require.
Ian Kelley graduated from the University of Washington in 1999. Soon thereafter,
he moved to Berlin where he was hired at the Max Planck Institute for Gravitational
Physics to work as a programmer for the Cactus project and Living Reviews in
Relativity (an online reviewed physics journal). In January 2002, he began work in
the GridLab project, developing user, application and administrative Grid portals. He
is particularly interested in creating reusable software components to aid in the
development of user-friendly distributed computing environments.
Using pyGlobus to Expose Legacy Applications as OGSA Components
Keith Jackson - Lawrence Berkeley National Laboratory
Abstract
By exposing legacy applications as OGSA compliant components, we can enable the
easy usage of these applications in a Grid environment. I will describe current work
on developing an OGSA compliant hosting environment in Python, and using it to
expose legacy applications. OGSA provides a hosting framework that supports
lifecycle management, security, SOAP parsing, Notification, etc., and allows the easy
development and deployment of Grid Web Services. I will discuss techniques for
wrapping existing applications in Python, and then using pyGlobus to provide a Grid
Web Services interface to the code. This talk will describe both currently existing
tools, and upcoming plans for the next year.
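As a hedged illustration of the first step, wrapping a legacy code in Python (the pyGlobus and OGSA service plumbing itself is omitted, since its exact interfaces are not given in this abstract), a legacy command-line application can be exposed as an ordinary Python callable that a hosting environment could then publish as a Grid Web Service operation. The executable name and option flags below are assumptions.

```python
# Hedged sketch: wrap a legacy command-line application as a Python function.
# A hosting environment (such as the Python OGSA container described above)
# could then expose this callable as a Grid Web Service operation; the
# executable name and option flags here are purely illustrative.
import os
import subprocess
import tempfile

LEGACY_BINARY = "/usr/local/bin/legacy_solver"   # hypothetical legacy code

def run_legacy(input_text: str) -> str:
    """Run the legacy application on the given input and return its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
        f.write(input_text)
        input_path = f.name
    try:
        result = subprocess.run([LEGACY_BINARY, "-i", input_path],
                                capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        os.unlink(input_path)
```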
An Overview of the GAT API
Tom Goodale
Abstract
One of the aims of the GridLab project is to produce an API which application
developers can use to access Grid operations without being tied to any particular
middleware such as Globus or Condor, or to the specific details of services deployed
and accessible in their computing environment. Such an API allows applications to
be developed independently of the final environment(s) they will be deployed in.
This talk gives an overview of the API and examples of how it can be used in real-life
scenarios to Grid-enable an application.
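To illustrate the idea only (this is not the actual GAT interface, whose bindings are not given in this abstract), such a middleware-neutral API can be sketched as a thin facade that dispatches to whichever adaptor, for example Globus, Condor, or a plain local implementation, happens to be available at run time; every name below is hypothetical.

```python
# Hypothetical sketch of a middleware-neutral file-copy call in the spirit of
# the GAT: the application calls one function, and an adaptor chosen at run
# time does the work. None of these names are the real GAT API.
import shutil
from typing import Callable, Dict

def _globus_copy(src: str, dst: str) -> None:
    raise NotImplementedError("would call a GridFTP client here")

def _local_copy(src: str, dst: str) -> None:
    shutil.copy(src, dst)                      # fallback: plain local copy

# adaptors tried in order of preference; unavailable ones simply fail over
ADAPTORS: Dict[str, Callable[[str, str], None]] = {
    "globus": _globus_copy,
    "local": _local_copy,
}

def file_copy(src: str, dst: str) -> None:
    """Middleware-independent copy: try each adaptor until one succeeds."""
    for name, copy in ADAPTORS.items():
        try:
            copy(src, dst)
            return
        except Exception:
            continue
    raise RuntimeError("no adaptor could perform the copy")
```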
Tom Goodale originally trained in theoretical physics. After his degree he went into
industry to work as a software engineer, primarily in the field of numerical
modelling. After a few years in industry he returned to academia to do a master's in
computational fluid dynamics, and in 1997 went on to start a PhD in Numerical
Relativity at the Albert-Einstein-Institute in Golm, Germany. During his time at the
AEI he has been one of the prime developers of the Cactus Computational Toolkit,
and is now employed by the GridLab project to develop the Grid Application
Toolkit.
Collaborative Tools for Grid Support
Laura McGinnis - Pittsburgh Supercomputing Center
Abstract
The Software Tools Collaboratory, an NIH funded project at the Pittsburgh
Supercomputing Center, has recently completed a protein folding simulation, utilizing
4 major systems at 3 geographically distributed sites, connected via Legion
middleware. This presentation will discuss the issues related to setting up and
running the simulation, from the perspective of establishing and maintaining the
infrastructure and communication among participants before, during, and after the
simulation.
As many sites have found, coordinating the resources for grid computing is more
than just a matter of synchronizing batch schedulers. This presentation will share the
collaboratory's experience in supporting and managing communication among
participants, especially in the back channels, using common, publicly available
tools.
Application Web Service Tool Kit
Geoffrey Fox - Indiana University
Abstract
We describe a set of tools to allow "general" (existing) applications to be presented
as Web Services with some experience from Solid Earth, Structures and other areas.
In particular we describe the interfaces between the grid infrastructure community
and the scientific application developer and user communities. Our usage of the
word interface has multiple meanings. First, there is the need to define an interface
description language for scientific applications, which describes how to invoke a
particular application and how to bind it to particular hardware resources and grid
services, effectively defining a way for adding applications to a grid. We refer to the
resulting entity as an application web service. Second, the interface definition
defines how to build a client user interface for a particular application service. These
service user interfaces (both for application web services and lower level services
such as file transfer and host monitoring) can be placed into portlets, which can be
aggregated as components into a single portal interface. The web portal then
becomes a management environment for user interfaces, both remote and local.
Service interfaces can be plugged into the portal in a well-defined way. We describe
our efforts in designing and building both types of interfaces.
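As a hedged sketch of what such an application interface description might contain (the toolkit's actual schema is not given in this abstract, so every field name below is an assumption), a registration could pair invocation details with resource-binding hints and user-interface hints, from which both an application web service and its portlet interface could be generated.

```python
# Hypothetical application descriptor in the spirit described above; the real
# toolkit's interface description language and field names may differ.
app_descriptor = {
    "name": "earthquake-simulator",            # assumed application name
    "version": "1.0",
    "invocation": {
        "executable": "/opt/apps/quakesim",    # how to invoke the code
        "arguments": ["--input", "{input_file}"],
    },
    "bindings": {                              # binding to hardware and grid services
        "hosts": ["grid.example.edu"],
        "file_transfer": "gridftp",
        "monitoring": True,
    },
    "ui": {                                    # hints for generating a portlet UI
        "inputs": [{"label": "Input file", "type": "file", "param": "input_file"}],
        "outputs": [{"label": "Displacement field", "type": "file"}],
    },
}
```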
Grid Portals: Bridging the gap between Grids and application scientists
Michael Russell - Max Planck Institute for Gravitational Physics
Abstract
While Grids may provide infrastructures for managing distributed heterogeneous
computational resources, many of the promises of Grid computing have yet to be
realized for its would-be users. Even as Grid technologies continue to evolve,
application scientists have few tools for effectively utilizing Grids in their everyday
research. Even Grid portals have failed to deliver. Grid portals, in building upon the
successful Web portal paradigm, are supposed to provide customizable, single points
of access to a wide-variety of information, data, and computational services. Grid
portals are supposed to make it easy for non-Grid experts to utilize Grids. Grid
portals are supposed to serve communities of distributed researchers and to enable
researchers to collaborate on large-scale problems. Up until now, however, Grid
portals have been difficult to develop and support. This is due to a variety of reasons
ranging from poorly supported Grid infrastructures to the lack of a suitable model in
the Grid community for collaborating on the development of Grid portals.
In this talk we will describe our experiences in developing the Astrophysics
Simulation Collaboratory (ASC) Portal. The ASC is a project aimed at enabling a
Virtual Organization (VO) of astrophysicists to collaborate on the development and
execution of simulation codes on Grids for studying complex astrophysical
phenomena. We'll discuss the problems we faced and how we are learning to
overcome these problems in both the ASC and GridLab.
The GridLab project was founded on the principle that it takes more than a portal: it
takes an entire community of experienced researchers and scientists to collaborate
on the development of higher-level Grid services to support Grid portals, including a
toolkit for developing Grid-enabled applications. In addition, we are able to benefit
from recent advances in Grid and Web computing that we feel provide the right kinds
of models for enabling GridLab to collaborate with the ASC and other groups to build
a better Grid portal.
A Data Miner for the Information Power Grid
Thomas H. Hinke, Ph.D. - NASA Advanced Supercomputing
Abstract
This talk will present the design and implementation of the Grid Miner, a data mining system
that operates on NASA’s Information Power Grid. The Grid Miner is an agent-based mining
system that uses the grid to stage a mining agent and a mining plan (which describes the
sequence of mining operations that are to be applied to the data) to a grid computer that is
to serve as the mining site. Initially, a “thin” mining agent is staged to the mining site. Once
staged, the “thin” agent then grows in capability, based on the requirements of the mining
plan, by using the grid to acquire the necessary mining operations from a grid-accessible
operations repository. The fully configured mining agent then uses the grid to acquire the
data to be mined. The Grid Miner is the result of work to convert a stand-alone, object-oriented C++ data mining system called ADaM (developed at the University of Alabama in
Huntsville under a NASA grant) into a grid-based mining system. The original stand-alone
ADaM data mining system had a total of 459 C++ classes. The transformation of ADaM into the Grid
Miner required the addition of only three new classes and slight modifications to 5 original
classes.
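To make the staged, self-extending agent concrete, here is a hedged sketch (in Python, not the C++ Grid Miner itself) of the control flow described above: a thin agent reads a mining plan, pulls each required operation from an operations repository, and applies it to the data in sequence. The repository URL and operation names are assumptions.

```python
# Hedged sketch of the thin-agent control flow described above; the real Grid
# Miner is C++ and uses grid services for staging. URLs and names are made up.
import types
import urllib.request

OPS_REPOSITORY = "https://repository.example.org/mining-ops"   # hypothetical URL

def fetch_operation(name: str) -> types.ModuleType:
    """Download a mining-operation module from the repository and load it."""
    source = urllib.request.urlopen(f"{OPS_REPOSITORY}/{name}.py").read().decode()
    module = types.ModuleType(name)
    exec(source, module.__dict__)          # the thin agent grows in capability
    return module

def run_plan(plan, data):
    """Apply the operations named in the mining plan to the data in sequence."""
    for step in plan:                      # e.g. ["cloud_mask", "cluster"]
        op = fetch_operation(step)
        data = op.apply(data)              # assume each operation exposes apply()
    return data
```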
Background:
A number of Problem Solving Environments (PSE) targeted for creating distributed
applications on heterogeneous networks are under development. Many of these PSEs have a
similar component-based design. In general, the various PSEs do not support portability
(during development) and interoperability (during execution) of applications between
different PSEs. Interoperability includes both user data exchanges and common remote
method protocols.
A range of target applications need to be considered. In the simplest case, applications entail
a local interface to manage the execution of remote codes including file transfer assistance.
Other applications will require more complex work-flow programming along with the
development of remote servers. Other applications will need a full range of services including
execution monitoring and steering, debugging, resource acquisition, security, and others.
Applications will need a range of communication granularity and performance.
Presentation:
The distributed programming research at ICASE is focused on finding solutions that address
the issues mentioned above. This presentation will address the following topics.
1. Background issues will be discussed, including the conclusions of an "ICASE Workshop on Programming Computational Grids".
2. A component-based programming framework model will be outlined. This model has been previously implemented in prototype form.
In addition, ICASE has developed a small regional grid project, the Tidewater Research Grid
Partnership (TRGP). TRGP is actively seeking to assist researchers in developing Grid
applications. A brief overview of TRGP will be given.