Grid Monitoring Futures with Globus
Jennifer M. Schopf
Argonne National Lab
April 2003
My Definitions
• Grid:
  – Shared resources
  – Coordinated problem solving
  – Multiple sites (multiple institutions)
• Monitoring:
  – Discovery
    > Registry service
    > Contains descriptions of data that is available
  – Expression of data
    > Access to sensors, archives, etc.
What do *I* mean by Grid monitoring?
• Different levels of monitoring needed:
  – Application specific
  – Node level
  – Cluster/site level
  – Grid level
• Grid-level monitoring concerns data
  – Shared between administrative domains
  – For use by multiple people
  – (think scalability)
Grid Monitoring Does Not Include…
• All the data about every node of every site
• Years of utilization logs to use for planning the next hardware purchase
• Low-level application progress details for a single user
• Application debugging data (except perhaps notification of a failure of a heartbeat)
• Point-to-point sharing of all data over all sites
Overview of This Talk
• Evaluation of information infrastructures
  – Globus Toolkit MDS2, R-GMA, Hawkeye
  – Insights into performance issues
  – (publication at HPDC 2003)
• What monitoring and discovery could be
• Next-generation information architecture
  – Open Grid Services Architecture mechanisms
  – Integrated monitoring & discovery architecture for GT3
Performance and the Grid
• It's not enough to use the Grid; it has to perform – otherwise, why bother?
• First prototypes rarely consider performance (a tradeoff with development time)
  – MDS1 – centralized LDAP
  – MDS2 – decentralized LDAP
  – MDS3 – decentralized Grid service
• Often performance is simply not known
Globus Monitoring and Discovery Service (MDS2)
• Part of the Globus Toolkit, compatible with other elements
• Used most often for resource selection
  – aid a user/agent to identify host(s) on which to run an application
• Standard mechanism for publishing and discovery
• Decentralized, hierarchical structure
• Soft-state protocols
• Caching
• Grid Security Infrastructure credentials
MDS2 Architecture
[Architecture diagram] Resources A and B each run information providers (IP) feeding a local GRIS. The GRISes register with a GIIS; the GIIS requests info from the GRIS services, and its cache contains info from A and B. Client 1 searches a GRIS directly; Client 2 uses the GIIS for searching the collective information.
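To make the query path concrete, here is a minimal sketch of searching a GRIS or GIIS over LDAP with the Python ldap3 library (roughly what the GT2 grid-info-search client does). The host name is a placeholder, anonymous read access is assumed, and the object class and attribute name are illustrative; only the port (2135) and suffix (mds-vo-name=local, o=grid) follow MDS2's usual defaults.

```python
# Sketch only: query an MDS2 GRIS/GIIS over LDAP with the ldap3 library.
# "gris.example.org" is a placeholder; the object class and attribute name
# below are illustrative and may differ from a given deployment's schema.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("gris.example.org", port=2135, get_info=ALL)   # default MDS2 port
conn = Connection(server, auto_bind=True)                      # anonymous bind

conn.search(
    search_base="mds-vo-name=local, o=grid",      # default MDS2 suffix
    search_filter="(objectclass=MdsHost)",        # illustrative object class
    search_scope=SUBTREE,
    attributes=["Mds-Host-hn"],                   # illustrative attribute
)
for entry in conn.entries:
    print(entry.entry_dn)

conn.unbind()
```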
Relational Grid Monitoring Architecture (R-GMA)
• Implementation of the Grid Monitoring Architecture (GMA) defined within the Global Grid Forum (GGF)
• Three components
  – Consumers
  – Producers
  – Registry
• GMA as defined currently does not specify the protocols or the underlying data model to be used.
GGF Grid Monitoring Architecture
[Diagram] Both the Producer and the Consumer exchange event-publication information with the directory service; information events then flow directly from Producer to Consumer.
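A toy Python sketch of this pattern, with an in-memory dictionary standing in for the directory service; all class and attribute names are invented for illustration.

```python
# Toy sketch of the GMA pattern: a producer registers its event-publication
# information with a directory service, a consumer looks the producer up
# there, and events then flow directly from producer to consumer.
# All names here are invented for illustration.

class DirectoryService:
    def __init__(self):
        self.producers = {}                        # event type -> producer

    def register(self, event_type, producer):
        self.producers[event_type] = producer      # event-publication information

    def lookup(self, event_type):
        return self.producers[event_type]

class Producer:
    def __init__(self, events):
        self.events = events

    def stream(self):
        yield from self.events                     # events go straight to the consumer

directory = DirectoryService()
directory.register("cpu-load", Producer([0.10, 0.42, 0.97]))

# Consumer side: find the producer via the directory, then read from it directly.
producer = directory.lookup("cpu-load")
for event in producer.stream():
    print("event:", event)
```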
R-GMA
• Monitoring system used in the EU DataGrid Project
  – Steve Fisher, RAL, and James Magowan, IBM-UK
• Based on the relational data model
• Uses Java Servlet technologies
• Focus on notification of events
• A user can subscribe to a flow of data with specific properties directly from a data source
R-GMA Architecture
[Architecture diagram]
Hawkeye
• Developed by the Condor group
• Focus – automatic problem detection
• Underlying infrastructure builds on the Condor and ClassAd technologies
  – Condor ClassAd language to identify resources in a pool
  – ClassAd matchmaking on the attribute values of resources to execute jobs and to identify problems in a pool (see the rough analogue below)
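A rough Python analogue of attribute-based matchmaking over a pool of machine ads (this is not ClassAd syntax, and the machine names and attribute values are invented):

```python
# Rough analogue of ClassAd-style matchmaking: evaluate a requirement
# expressed over resource attributes against every "ad" in the pool.
# Not ClassAd syntax; all names and attribute values are invented.

pool = [
    {"Name": "node01", "Memory": 512, "LoadAvg": 0.3, "FreeDiskMB": 9000},
    {"Name": "node02", "Memory": 256, "LoadAvg": 2.4, "FreeDiskMB": 300},
]

def looks_unhealthy(ad):
    # "Problem detection" requirement: high load or nearly full disk.
    return ad["LoadAvg"] > 2.0 or ad["FreeDiskMB"] < 500

for ad in pool:
    if looks_unhealthy(ad):
        print("problem detected on", ad["Name"])
```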
Hawkeye Architecture
[Architecture diagram]
Comparing Information Systems
                       MDS2                  R-GMA             Hawkeye
Info Collector         Information Provider  Producer          Module
Info Server            GRIS                  Producer Servlet  Agent
Aggregate Info Server  GIIS                  (Archiver)        Manager
Directory Server       GIIS                  Registry          Manager
Some Architecture Considerations
• Similar functional components
  – Grid-wide for MDS2, R-GMA; pool-wide for Hawkeye
  – Global schema
• Different use cases will lead to different strengths
  – GIIS for a decentralized registry; no standard protocol to distribute multiple R-GMA registries
  – R-GMA meant for streaming data – currently used for network data; Hawkeye and MDS2 for single queries
• Push vs. pull
  – MDS2 is pull only
  – R-GMA allows push and pull
  – Hawkeye allows triggers – a push model
Experiments
• How many users can query an information server at a time?
• How many users can query a directory server?
• How does an information server scale with the amount of data in it?
• How does an aggregator scale with the number of information servers registered to it?
Testbed
• Lucky cluster at Argonne
  – 7 nodes, each with two 1133 MHz Intel PIII CPUs (512 KB cache) and 512 MB main memory
• Users simulated at the UC nodes
  – 20 P3 Linux nodes, mostly 1.1 GHz
  – R-GMA has an issue with the shared file system, so we also simulated users on Lucky nodes
• All figures are 10-minute averages
• Queries issued with a one-second wait between each query (think synchronous send with a 1-second wait)
Metrics
• Throughput
  – Number of requests processed per second (see the sketch below)
• Response time
  – Average amount of time (in seconds) to handle a request
• Load
  – Percentage of CPU cycles spent in user mode and system mode, recorded by Ganglia
  – High when running a small number of compute-intensive applications
• Load1
  – Average number of processes in the ready queue waiting to run, 1-minute average, from Ganglia
  – High when a large number of applications are blocking on I/O
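A minimal sketch of how one simulated user drives these metrics: it issues a synchronous query, records the response time, waits one second, and repeats for the measurement window; throughput and mean response time fall out of the recorded timings. The query function is a stand-in for a real query against a GRIS, Producer Servlet, or Hawkeye Agent.

```python
import time

def query_server():
    # Stand-in for a synchronous query to a GRIS, Producer Servlet, or Agent.
    time.sleep(0.05)

def simulated_user(duration_sec=600):
    """One user: query, record response time, wait 1 second, repeat."""
    response_times = []
    start = time.time()
    while time.time() - start < duration_sec:
        t0 = time.time()
        query_server()
        response_times.append(time.time() - t0)
        time.sleep(1.0)                                # 1-second wait between queries
    elapsed = time.time() - start
    throughput = len(response_times) / elapsed         # requests per second
    avg_response = sum(response_times) / len(response_times)
    return throughput, avg_response

print(simulated_user(duration_sec=10))                 # short run for illustration
```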
Performance of Information Servers vs. Number of Users
[Figure: throughput (queries/sec), 0–160, vs. number of users, 0–600, for MDS GRIS (cache and no cache), R-GMA Producer Servlet (lucky and UC), and Hawkeye Agent]
Experiment 1 Summary
• Caching can significantly improve performance of the information server (see the sketch below)
  – Particularly desirable if one wishes the server to scale well with an increasing number of users
• When setting up an information server, care should be taken to make sure the server is on a well-connected machine
  – Network behavior plays a larger role than expected
  – If this is not an option, thought should be given to duplicating the server if more than 200 users are expected to query it
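A minimal sketch of why caching helps an information server: results from the (slow) information providers are reused until a time-to-live expires, so most user queries hit the fast path. The TTL value and provider function are illustrative, not MDS2's actual implementation.

```python
import time

CACHE_TTL = 30.0                       # seconds; illustrative, not an MDS2 default
_cache = {"data": None, "stamp": 0.0}

def run_information_providers():
    # Stand-in for the expensive step: invoke providers, parse their output, etc.
    time.sleep(0.5)
    return {"host": "node01", "load1": 0.3}

def handle_query():
    now = time.time()
    if _cache["data"] is None or now - _cache["stamp"] > CACHE_TTL:
        _cache["data"] = run_information_providers()   # slow path, taken rarely
        _cache["stamp"] = now
    return _cache["data"]                               # fast path, taken by most queries

for _ in range(3):
    print(handle_query())              # only the first call pays the provider cost
```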
Directory Server Scalability
[Figure: throughput (queries/sec), 0–120, vs. number of users, 1–600, for MDS GIIS, R-GMA Registry (lucky and UC), and Hawkeye Manager]
Experiment 2 Summary
• Because of the network contention issues, the placement of a directory server on a highly connected machine will play a large role in scalability as the number of users grows
• Significant loads are seen even with only a few users; it will be important that this service be run on a dedicated machine, or that it be duplicated as the number of users grows
Information Service Throughput vs. Number of Information Collectors
[Figure: throughput (queries/sec), 0–8, vs. number of information collectors, 0–100, for MDS GRIS (cache and no cache), R-GMA Producer Servlet, and Hawkeye Agent]
Experiment 3 Summary
• Too many information collectors become a performance bottleneck
• Caching data helps
• Alternatively, register with more instances of information servers, with each handling a subset of the collectors
Overall Results
• Performance can be a matter of deployment
  – Effect of background load
  – Effect of network bandwidth
• Performance can be affected by the underlying infrastructure
  – LDAP/Java strengths and weaknesses
• Performance can be improved using standard techniques
  – Caching, multi-threading, etc.
So what could monitoring be?
• Basic functionality
  – Push and pull (subscription and notification)
  – Aggregation and caching
• More information available
• More higher-level services
  – Triggers, like Hawkeye
  – Visualization of archive data, like Ganglia
• Plug and play
  – Well-defined protocols, interfaces, and schemas
• Performance considerations
  – Easy searching
  – Keep load off of clients
Topics
• Evaluation of information infrastructures
  – Globus Toolkit MDS2, R-GMA, Hawkeye
  – Throughput, response time, load
  – Insights into performance issues
• What monitoring and discovery could be
• Next-generation information architecture
  – Open Grid Services Architecture mechanisms
  – Integrated monitoring & discovery architecture for GT3
Open Grid Services Architecture (OGSA)
• Defines standard interfaces and behaviors for distributed system integration, especially:
  – Standard XML-based service information model
  – Standard interfaces for push- and pull-mode access to service data
    > Notification and subscription
Key OGSI concept - serviceData
• Every service has its own service data
  – OGSA has a common mechanism to expose a service instance's state data to service requestors for query, update, and change notification
  – Monitoring data is "baked right in"
  – Service-level concept, not a host-level concept
serviceData
• Every Grid service can expose internal state as serviceData elements
  – An XML element of arbitrary complexity
• Each service has a serviceData set
  – The collection of serviceData Elements (SDEs)
• Example: the state of a host is exposed as an SDE by GRAM
• Similar to MDS2 GRIS functionality, but in each service (rather than once per host)
Example: Reliable File Transfer Service
[Diagram] Clients request and manage file transfer operations through the Grid service's interfaces, and query and/or subscribe to its service data. The service exposes service data elements (Pending, Performance, Policy, Faults) backed by internal state, with a fault monitor, a performance monitor, a notification policy, and the file transfer source; the service carries out the underlying data transfer operations.
MDS3: Monitoring and Discovery System
• Consists of various components
  – Core functionality
  – Information providers
  – Higher-level services
  – Clients
Core Functionality
• XPath support (see the sketch below)
  – XPath is a language that describes a way to locate and process items in XML documents by using an addressing syntax based on a path through the document's logical structure or hierarchy
• Xindice support – a native XML database
• Registry support
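A small illustration of the idea using Python's standard ElementTree, which supports a limited XPath subset; the service-data-like document and element names below are made up for illustration and do not reflect an actual MDS3 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical service-data-like document; element names are invented.
doc = ET.fromstring("""
<serviceDataSet>
  <host name="node01"><load1>0.30</load1></host>
  <host name="node02"><load1>2.10</load1></host>
</serviceDataSet>
""")

# Limited XPath (as supported by ElementTree): walk the document hierarchy
# and select the load1 value of a particular host by attribute.
for elem in doc.findall("./host[@name='node02']/load1"):
    print(elem.text)        # -> 2.10
```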
Schema Issues
• Need to keep track of service data schemas
  – Avoid conflicts
  – Find the data more easily
• Should really have a unified naming approach
• All of the tools are schema-agnostic, but interoperability needs a well-understood common language
MDS3 Information Providers in June Release
• All the data currently in core MDS2
• Full data in the GLUE schema for compute elements (CE)
  – Ganglia information provider for cluster data will also be available from the Ganglia folks (with luck)
• Service data from RFT, RLS, GRAM
• GT2-to-GT3 work
  – GridFTP server data
  – Software version and path data
  – Documentation for translating your GT2 information provider to a GT3 information provider
MDS3 Higher Level Products
• Higher-level services can perform actions on service data collected from other services
• Part of this functionality can be provided by a set of building blocks:
  – Provider interface: GRIS-style API for writing information providers
  – Service Data Aggregator: sets up subscriptions to data from other services and publishes it as a single data stream
  – Hierarchy Builder: allows for a hierarchy of aggregators
MDS3 Index Server
• The simplest higher-level service is the caching index service
• Much like the GIIS in MDS2
• Will have configurability like a GIIS hierarchy
• Will also have PHP-style scripts, much as available today
Clients currently in GT3
• findServiceData command-line client
  – Same functionality as grid-info-search
• C bindings
  – Core C bindings provide a findServiceData C function
  – The findServiceData command-line client gives an example of using it to parse out information (in this case, registry contents)
Service Data Browser
• GUI client to display service data from any service
• Extensible for data-specific visualization
• A version was released with GT3 alpha
• http://www.globus.org/ogsa/releases/alpha/docs/infosvcs/sdbquickstart.html
Comparing Information Systems
                       MDS2            R-GMA             Hawkeye  MDS3
Info Collector         Info. Provider  Producer          Module   Service (SDEs)
Info Server            GRIS            Producer Servlet  Agent    Service (service data set)
Agg. Info Server       GIIS            (Archiver)        Manager  Index Service
Directory Server       GIIS            Registry          Manager  Index Service
Is this enough?
• No!
• There are many places where additional help developing MDS3 is needed
We Need More Basic Information
• Interfaces to other sources of data
  – GPT data
  – Other monitoring systems
  – Others?
• Service data from other components
  – Every service has service data
  – OGSA-DAI
• Will need to interface on schema
We Will Need More GUIs and Clients
• Additional GUI visualizers may be implemented to display service data specific to a particular port type (as part of the service data browser)
• Possibly additional client interfaces
  – Integration into current portals, brokers
We Need More Higher-Level Services
• We have a couple planned
  – Archiving service
  – Trigger template
Post-3.0 release: Archiving Service
• Will allow subscription to service data
• Logging in a flexible way
• Well-defined interfaces for mining
• Open questions:
  – Best way to store a time series of arbitrary XML?
  – Best way to query this archive?
  – Link to OGSA-DAI?
  – Link to other archivers?
Post-3.0 release: Trigger Template
• Will provide a template to allow subscription to data, reasoning about that data, and a course of action to take place
• Essentially, a gateway service between OGSA notifications and some other notification framework, with filtering of notifications
• Example: subscribe to disk space information, send mail to the sysadmin when it reaches 90% full (see the sketch below)
• Needed: a trigger template and several small examples of common triggers, plus documentation for how users could extend them or write new ones
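A hedged illustration of the disk-space example (a stand-alone sketch, not the GT3 trigger service or its interfaces): poll local disk usage and email an administrator once it crosses 90%. The SMTP host and addresses are placeholders.

```python
import shutil
import smtplib
import time
from email.message import EmailMessage

THRESHOLD = 0.90                                   # trigger when the disk is 90% full
ADMIN = "sysadmin@example.org"                     # placeholder address

def disk_fraction_used(path="/"):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def notify(fraction):
    msg = EmailMessage()
    msg["Subject"] = f"Disk {fraction:.0%} full"
    msg["From"] = "monitor@example.org"            # placeholder address
    msg["To"] = ADMIN
    msg.set_content(f"Disk usage has reached {fraction:.0%}.")
    with smtplib.SMTP("localhost") as smtp:        # placeholder SMTP host
        smtp.send_message(msg)

while True:
    fraction = disk_fraction_used()
    if fraction >= THRESHOLD:
        notify(fraction)
    time.sleep(60)                                 # poll once a minute
```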
Other Possible Higher-Level Services
• Site Validation Service
• Job Tracking Service
• Interfacing to NetLogger?
We Need Security
• Need I say more?
Summary
• Current monitoring systems
  – Insights into performance issues
• What we really want for monitoring and discovery is a combination of all the current systems
• Next-generation information architecture
  – Open Grid Services Architecture mechanisms
  – MDS3 plans
• Additional work needed!
Thanks
• Testbed/experiment support and comments
  – John McGee, ISI; James Magowan, IBM-UK; Alain Roy and Nick LeRoy, University of Wisconsin, Madison; Scott Gose and Charles Bacon, ANL; Steve Fisher, RAL; Brian Tierney and Dan Gunter, LBNL
• This work was supported in part by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under contract W-31-109-Eng-38. This work was also supported by a DOE SciDAC grant, iVDGL from NSF, and others.
Additional Information
• MDS3 technology coordinators
  – Ben Clifford (benc@isi.edu)
  – Jennifer Schopf (jms@mcs.anl.gov)
• Zhang, Freschl, and Schopf, "A Performance Study of Monitoring and Information Services for Distributed Systems", to appear in HPDC 2003
  – http://people.cs.uchicago.edu/~hai/hpdcv25.doc
• MDS3 information
  – Soon at www.globus.org/mds
Extra Slides
Why Information Infrastructure?
• Distributed, often complex, performance-critical nature of Grids & apps demands tools for
  – Discovering available resources
  – Discovering available sensors
  – Integrating information from multiple sources
  – Archiving and replaying historical information
• These and other functions are provided by an information infrastructure
• Many projects are concerned with design, deployment, evaluation, and application
Performance of GIS Information Servers vs. Number of Users
[Figure: throughput (queries/sec), 0–160, vs. number of users, 0–600, for MDS GRIS (cache and no cache), R-GMA Producer Servlet (lucky and UC), and Hawkeye Agent]
Performance of GIS Information Servers vs. Number of Users
[Figure: response time (sec), 0–120, vs. number of users, 0–600, for the same information servers]
Performance of GIS Information Servers vs. Number of Users
[Figure: load1, 0–1.6, vs. number of users, 0–600, for the same information servers]
Performance of GIS Information Servers vs. Number of Users
[Figure: CPU load, 0–70, vs. number of users, 0–600, for the same information servers]
Directory Server Scalability
[Figure: throughput (queries/sec), 0–120, vs. number of users, 1–600, for MDS GIIS, R-GMA Registry (lucky and UC), and Hawkeye Manager]
Directory Server Scalability
[Figure: response time (sec), 0–90, vs. number of users, 1–600, for the same directory servers]
Directory Server Scalability
[Figure: load1, 0–4.5, vs. number of users, 1–600, for the same directory servers]
Directory Server Scalability
[Figure: CPU load, 0–90, vs. number of users, 1–600, for the same directory servers]