The State of Computational Ecology

National Center for Ecological Analysis and Synthesis

John Helly

San Diego Supercomputer Center

Ted Case

University of California, San Diego

Frank Davis

University of California, Santa Barbara

Simon Levin

Princeton University

William Michener

Jones Ecological Research Center

Citation: Helly, John, Ted Case, Frank Davis, Simon Levin, and William Michener. 1995. The State of Computational Ecology. Research Paper No. 1. National Center for Ecological Analysis and Synthesis, Santa Barbara, California. Available at http://www.nceas.ucsb.edu/nceas-web/projects/2295/nceas-paper1/.

This report is the result of a September 1995 workshop jointly sponsored by the San Diego Supercomputer Center, the National Center for Ecological Analysis and Synthesis, and the National Center for Supercomputing Applications, with funding provided by the National Science Foundation.

ABSTRACT

Computational ecology is a field devoted to the quantitative description and analysis of ecological systems using empirical data, mathematical models (including statistical models), and computational technology. While the components of this field are not new, there is a new emphasis on their integrated treatment. To a large degree, this emphasis is precipitated by the expansion of our local, national, and international computational infrastructure, coupled with heightened social awareness of ecological and environmental issues and its effects on research funding. In an attempt to consolidate what is known about the state of practice in computational ecology, a workshop gathering ecologists and computer scientists was held at the San Diego Supercomputer Center in September 1995 to identify the technology issues that impede the progress of ecological research.

The results of the workshop are expressed in three areas: data management, visualization, and modeling. Common to these areas is the need to develop standards to facilitate the sharing of data, the comparison of results, and the integration of model components. Unique to the problem of data sharing are the proprietary nature of research data and the lack of institutional incentives for data sharing, along with the usual issues of intellectual property. The use of visualization continues to be inhibited by the highly specialized knowledge still required to use current visualization packages effectively, along with fundamental questions such as how statistical error should be represented in visual presentations. The modeling community is challenged by questions such as how multi-scale, multi-resolution models may be integrated across disciplines, as well as by the desire for collections of standard realizations of model components to minimize redundant software development and facilitate the comparison of modeling analyses.

1 FOCAL ISSUES IN ECOLOGY

Global Climate Change, Biodiversity, and Sustainability

Since before the introduction of the term ecosystem in 1935 by Tansley [10], ecologists have struggled with conceptual questions such as: do communities of species coalesce into functionally integrated higher-order units or do species respond more individually across space and time? Do population sizes exhibit fluctuations in response to apparently stochastic processes in the physical environment or does density-dependence regulate and stabilize population trajectories? If such regulation occurs, how does it work? Does it come from trophic interactions above, below, or within a community? So far it often seems that the answers vary across species, across places, and across time.

Increasingly, ecologists must make their knowledge useful to the practical necessities of conservation and the management of natural resources while continuing to address fundamental questions about how nature works. Lubchenco et al. [21] identified three preeminent focal areas of ecology for the coming decades: global climate change, biodiversity, and sustainability of ecosystem resources. All three issues share a need for mechanistic explanations and understanding. Ecologists continue to search for measurable properties of individual organisms, affected by their local environment, that result in aggregated, large-scale phenomena.

Increasingly, these studies are driven by the need to understand the effect of human impacts prospectively, requiring an improved predictive capability.

1.1 THE NEED FOR IMPROVED PREDICTIVE CAPABILITIES

The answers to ecological questions have rarely, if ever, been general laws. Since ecologists deal with in situ, living systems, the study of ecology requires analytical methods and experiments which often cannot be controlled in the precise way physical experiments usually are. Consider what it would be like to conduct a physics or chemistry experiment if the scientist were only a few angstroms in size and lived for only a few nanoseconds. Could such a scientist control or decipher the macroscopic course of chemical reactions from the random collisions of molecules or the relationships between subatomic particles? The phenomena that ecologists study frequently operate on spatial and temporal scales larger and longer than any one individual can effectively study.

The problem of linking processes at the scale of the individual organism to processes at the community and regional scale raises issues regarding the sufficient characterization of the interactions between individuals, populations, and the physico-chemical environment. The obvious difficulty of manipulating and replicating experiments at the landscape scale raises issues about experimental design, regularity and longevity of sampling, and the appropriate integration of empirical data. Furthermore, the difficulty of translating theoretical models to real landscapes, with potentially huge numbers of parameters to be fit from data collected at varying spatial and temporal scales, poses both theoretical and practical problems.

1.2 CULTURAL INHIBITIONS AND THE ROLE OF COMPUTATIONAL TECHNOLOGY

With a shift to mechanistic and large-scale practical questions has come a need for ecologists to think deeply about issues that present bottlenecks to solving their problems. Increasingly large interdisciplinary groups of scientists must collaborate and share data to address questions that span large ranges of physical and temporal scale. This creates both cultural and technical difficulties.

Ecology arose and flourished in a tradition of independent investigators studying relatively small, well-defined systems with a hope that the results could be generalized to larger, more complex systems. Attempts to understand comprehensive processes in ecology are now seen to require increasingly larger groups of collaborators, forcing consideration of issues of priority, data ownership, communication, coordination of effort, and data consistency. For example, the Long Term Ecological Research (LTER) concept developed by the National Science Foundation (NSF) created a network of sites intended to measure a core set of ecological processes to develop a regional and global understanding of the existence and nature of global change. Burdened by its cultural legacy of independent, individual investigators, the program has tended toward the old mold, with autonomous research programs capitalizing on the unique characteristics of each particular site. Improving collaboration, to make it possible to answer larger-scale questions, requires extensive changes in the cultural milieu of academic incentives and the research grant system that rewards individual accomplishments preferentially to collaborative accomplishments [12]. This shortcoming has been recognized in a recent national review [33] and is being addressed by the new generation of ecologists inside and outside the LTER system.

Technological needs for interdisciplinary research teams must be met. For example, the Internet (and its progeny) is making an immense impact on the way ecologists conduct research, not only by enhancing the speed and facility of communication in general, but also by allowing the parallel posting and retrieval of data, creating dynamic, multi-authored data sets rather than the static data sets characteristic of a typical journal article. Increasingly faster computers, at increasingly lower costs, allow a level of realism in modeling efforts that could not even be imagined 20 years ago. Higher-resolution satellite imagery and other forms of remotely sensed data can be processed and interpreted by new types of software to produce maps that for the first time allow us to visualize ecological variables across huge spatial scales.

Underlying all these issues is the need to link dynamic processes operating across differing spatial domains, and with different rates. How can we link the natural and man-made forces that influence demand for biological resources with the population dynamics of these resources? How can we link large-scale atmospheric models with the behavior of individual organisms that may even influence weather patterns? How much averaging and smoothing of very fine-scale biological data must be made in order to match the coarser scale of the geophysical data while preserving the essence of the phenomenon being modeled? The answers clearly depend on what we want to predict. It may be quite reasonable to predict the species composition of a forest, and even the statistical properties of the spacing patterns of individual trees, but it may be fruitless to ask whether the position of a particular individual at a particular time can be predicted.
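
To make the question of averaging concrete, the following is a minimal sketch (in Python with NumPy; the grid sizes and the simulated field are hypothetical) of aggregating a fine-scale biological field onto a coarser geophysical grid, together with one simple measure of what the aggregation discards:

```python
import numpy as np

# Hedged sketch: aggregate a hypothetical 1 km biological field (512 x 512
# cells) to a 64 km geophysical grid by block averaging, and keep one simple
# record of what the averaging discards.
rng = np.random.default_rng(0)
fine = rng.gamma(shape=2.0, scale=1.0, size=(512, 512))

def block_average(field, factor):
    """Average factor x factor blocks to produce a coarser grid."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

coarse = block_average(fine, 64)                      # shape (8, 8)
lost = fine.reshape(8, 64, 8, 64).std(axis=(1, 3))    # within-cell variability
print(coarse.shape, float(lost.mean()))
```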

2 COMPUTATIONAL ECOLOGY

In the preface of her book, Mathematical Ecology [32], E. C. Pielou described the integrative nature of ecology in terms of three different approaches to the analysis of ecological questions. These are:

1. Ecological model building, including explanatory and predictive models;

2. Statistical study of populations and communities, including both laboratory systems and large-scale, in situ systems; and

3. Quantitative descriptive ecology, consisting of "...processing large bodies of observational data in such a way that interesting regularities, hitherto buried from sight, become apparent."

These categories of endeavor that Pielou envisioned as mathematical-plus-statistical ecology have been recast here into what we now call computational ecology. As such it is an interdisciplinary field devoted to the quantitative description and analysis of ecological systems using empirical data, mathematical models (including statistical models), and computational technology. The measure of success and progress in this field is the increasing ability to reliably explain and predict the behavior of the systems under study. The long-term scientific goal is the development of methods to predict the response of ecosystems to changes in their physical, biological and chemical components. The computational aspects arise from the need to express and manipulate the data and theories that we have about these complex, expansive and poorly understood systems. We endeavor to develop ways to discern patterns and principles from empirical data and mathematics through the machinery of computation. This leads to three general areas of research in computer science relevant to computational ecology: mathematical modeling, data management, and visualization.

Each has its own organizing principles and basic problems but they are related to each other by their relevance across a wide range of ecological research problems.

2.1 DATA MANAGEMENT

Ecological data management differs in many ways from other types of data management as a result of the irregular character and relative sparseness of the data, confounded by widely ranging scales of measurement in time and space. In addition, the information describing the data and its proper use (the "metadata" or "codata") is as diverse as the data itself [4]. This poses research questions regarding how this diversity can be accommodated in the smallest possible number of representations while still supporting the wide range of software and hardware combinations used by researchers around the world. There are also significant issues regarding intellectual property rights and ownership of data that are becoming increasingly pressing [28].

2.2 MATHEMATICAL MODELING

Mathematical models provide a means of expressing our understanding of the mechanisms governing the structure and function of natural populations, communities, and ecosystems in a testable manner through the use of computer simulations. The results of a computer simulation are usually compared with empirical data to evaluate the predictive power of a given mathematical model and the limits of our understanding of the thing being modeled. Because many ecological processes occur over long time scales, ecologists often use models of underlying processes to explain observed patterns. Inference from process to pattern has been limited to relatively simple models because of the high computational burden of relaxing assumptions and incorporating greater detail. While simple models lend some insight, they are typically very sensitive to often unrealistic assumptions. Until recently, ecologists had to use these models because there was insufficient computational power to run stochastic, individually-based, spatially explicit models. We now have sufficient power to investigate some of these models. One cannot be satisfied, however, simply to make models more complicated; it is essential to simplify, to explore, and to address the questions raised earlier about the transfer of information across scales.
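
As an illustration of what such models involve computationally, here is a minimal, hypothetical sketch of a stochastic, individually-based, spatially explicit population model; the grid size, dispersal rule, and vital rates are illustrative stand-ins, not a model of any particular system:

```python
import numpy as np

# Hedged sketch of a stochastic, individually-based, spatially explicit model:
# individuals on a grid disperse locally, reproduce, and die at random each
# step. Grid size and vital rates are illustrative stand-ins.
rng = np.random.default_rng(42)
GRID, BIRTH, DEATH, STEPS = 100, 0.10, 0.08, 200
pos = rng.integers(0, GRID, size=(500, 2))            # 500 founding individuals

for _ in range(STEPS):
    pos = (pos + rng.integers(-1, 2, size=pos.shape)) % GRID  # local dispersal
    offspring = pos[rng.random(len(pos)) < BIRTH]             # births near parents
    survivors = pos[rng.random(len(pos)) >= DEATH]            # stochastic mortality
    pos = np.concatenate([survivors, offspring])

print(f"population after {STEPS} steps: {len(pos)}")
```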

2.3 VISUALIZATION

Visual modeling uses techniques of scientific visualization to compose multi-dimensional, computer-generated scenes that can be used to express empirical data and modeling results in an intuitive and viscerally-appealing presentation. This can be done not only in the true spatiotemporal context but also in other parameterizations that reveal critical features of data or model sensitivities. Visual models can be presented with an emphasis on particular themes. For example, the spatial distribution of vegetation by altitude and season, or the variation in salinity around an ice floe and the spatial distribution of sea life can be depicted as snapshots in time or through animated sequences. Generalized capabilities now exist to manipulate visualizations to allow the observer to alter the point of view at will and to color-code parametric values to emphasize particular features in data.

3 DATA

Global change, biodiversity, and sustainability represent core research issues for the next century [21] that present numerous scientific, sociological, and technological challenges [11]. Wise management and decision-making related to sustainability and global change will entail integration and timely analysis of data from the physical, chemical, biological, and social sciences. The need for seamless integration of data within and among disparate disciplines, and for rapid transformation of those data into the information and knowledge bases required by resource managers, decision-makers, and the scientific community, requires that social and natural scientists revolutionize the ways that data are collected, managed, and analyzed [11], [26], [24], [5], [31].

3.1 CHALLENGES

Specific challenges related to managing the data and information required for addressing global change, biodiversity, and sustainability include: timely identification and acquisition of relevant data; determining data "fitness-for-use" for meeting specific objectives; development of powerful, flexible, and user-friendly data and database management systems; mechanisms and repositories for data archival; and new mechanisms for transforming data into usable information and knowledge. Identification and acquisition of data relevant to a specific problem is frequently hindered by both sociological and technological obstacles.

Despite the continual improvements in hardware and software, technological impediments still exist with respect to identifying data resources and communicating large volumes of data. This is typified by the arbitrary manner in which data are organized during acquisition. Despite the existence of quasi-standard data export capabilities such as comma-separated values (CSV), rich-text format (RTF), and their ilk, the essential definition of data content is arbitrary and subject to the experience and inclinations of the originator of the data. This is, not surprisingly, the result of the expedients adopted by individual research groups in balancing project-specific versus long-term data needs. The problem is exacerbated by the lack of guidelines for standardization and the limited experience of the researchers.
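
One modest expedient, sketched below with hypothetical field names, is to make each exported file self-describing by prepending an explicit content definition (variable, definition, units, method) to the CSV itself:

```python
import csv

# Hedged sketch: the content definition travels with the data instead of
# living in the originator's memory. All field names and values are
# hypothetical.
content_definition = [
    ("# variable", "definition", "units", "method"),
    ("# site_id", "permanent plot identifier", "none", "project register"),
    ("# date", "sampling date", "ISO 8601", "field notebook"),
    ("# biomass", "above-ground dry biomass", "g/m^2", "clip plot, 0.25 m^2"),
]
rows = [("A-01", "1995-06-14", 212.5), ("A-02", "1995-06-14", 187.0)]

with open("biomass.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(content_definition)   # commented header: the data dictionary
    writer.writerow(["site_id", "date", "biomass"])
    writer.writerows(rows)
```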

Ecological data are typically collected to meet needs specific to a single project. Despite the fact that these data frequently are useful for meeting broader needs of the scientific community (e.g., scaling site-specific studies up to broader spatial and temporal scales), these benefits accrue only when scientists adhere to common standards for data models, quality assurance and quality control (QA/QC), archival, and so forth. Agencies that perform or support environmental research and management could benefit from identifying key gaps in available databases, and from seeking to fill such gaps by creating new research initiatives, expanding existing research projects to encompass broader spatial and temporal scales or, in many cases, simply providing the additional support that would be necessary to convert a project-specific database into a high quality, well-documented, and archived database that is easily accessible to the scientific community. Development of cost-share arrangements among agencies to produce and distribute common data sets could mitigate many of these problems.

Standardization efforts are generally treated with reluctance by the scientific community when they are initially proposed. Nonetheless, much of our progress in science can be directly related to the development and adoption of standards. Several ongoing activities in the ecological community provide relevant examples of progress that is being made in standardizing data collection, processing and management, and analysis. Significant progress in understanding biodiversity will require unprecedented collaboration, database development, and data sharing among the ecology and systematics communities. Addressing issues related to biodiversity will require access to voucher and museum specimens. Progress is being made in capturing specimen data in electronic form. For example, 2 million specimen records were available free on the Internet internationally in 1993 [27], and more than 3 million specimen records are currently available (Miller, unpublished data). However, much work remains, as only 5% of the 400 million museum specimens in the US are available in electronic form, and only 2% of the specimens are geocoded. Whereas it may cost only $5 to capture specimen data in electronic form, recapturing the "data" in the field may exceed $100 per specimen [1]. Transferring the wealth of museum specimen and associated data into electronic form (images and text) represents a significant challenge. Furthermore, despite numerous nascent efforts to develop databases of described species, the lack of a master database for the more than 1 million of them remains a major handicap to all who use species names as keys to information about organisms and ecosystems. Developments in library information science may serve as an evolutionary model for how standardization, data sharing, cataloging, and archival efforts can develop and, ultimately, benefit science and society.

One of the most difficult limitations to improved data sharing is the result of sociological constraints. Data providers are heavily concerned about piracy of their data, the potential for misuse or misinterpretation of their data and, in general, the larger legal issues surrounding intellectual property rights and liability. Liability issues frequently impede or prevent the free flow of data. Scientists and agencies may have real and justified concerns about releasing data associated with endangered or threatened species and communities. Data quality can easily become a legal issue requiring concerted attention to quality assurance and quality control that may exceed resource availability and result in lengthy time delays. Inaccurate mappings of locations of endangered species or flood zones represent just two examples of data quality problems that may result in economic harm to landowners and subsequent recourse through the legal system.

In some cases, individual scientists and agencies may equate data with power, thereby attempting to hold and protect data, as opposed to sharing data [34]. Although this notion may be naive or even counter to agency mandates, it is also true that frequently there are few incentives for data sharing. It is clear that the challenging research questions that are facing ecologists will demand unprecedented access to data that are collected by others. Significant attention has been focused on developing mechanisms for disseminating scientific results. Consequently, hundreds of journals now publish findings of ecological studies. Perhaps it is time for the scientific community to devote similar attention to developing and promoting the incentives necessary to facilitate data sharing. Electronic journals that specialize in publishing data and metadata, ascribing value and prestige to data submission to national databases, and other mechanisms may warrant consideration by agencies and scientific societies.

FIGURE 1: Example of a data repository to facilitate the sharing and integration of environmental and ecological data for San Diego Bay (http://www.sdsc.edu/~sdbay)

FIGURE 2: Example of precipitation, bathymetry, copper concentration in sediments, and coastline data integrated into an image produced by scientific visualization methods

Data sharing represents a real cost burden to the data provider that is not often or easily factored into project budgets. The National Science Foundation, for example,

...expects investigators to share with other researchers, at no more than incremental cost and within a reasonable time, the data, samples, physical collections and other supporting materials created or gathered in the course of the work. [9]

Such data sharing may reasonably be expected to occur during the course of a funded project, but continued Congressional and Agency cost-cutting efforts may limit the extent to which high quality data and associated metadata can be developed and shared. Furthermore, after a project has been completed and funding terminated, who bears the responsibility for maintaining, updating, and sharing data?

Historically, data sets relevant to a specific problem were identified via word-of-mouth, newsletters, and, more recently, by data catalogs such as the Long-Term Ecological Research core data set catalog [23], NASA's Global Change Master Directory, and the Carbon Dioxide Information Analysis Center's compendium of data on global change [2]. Computer bulletin boards, list servers, and World Wide Web browsers have further enhanced our ability to identify pertinent data. For example, the ability of Web browsers and various Internet search engines to deal with 2D and 3D data sets and interface with relational databases has generated many new data cataloging applications. Despite these rapid advances, World Wide Web tools do not really represent interoperable databases, and the need for a comprehensive master catalog of ecologically-relevant data still remains. Ideally, such a catalog would support searches for data by thematic content and by temporal and spatial attributes. Communication of large data sets represents a technological limitation for many research efforts. For example, new communication methods that permit rapid transfer of data under conditions of limited bandwidth, and new data compression methods, may be necessary for communicating large remotely sensed data sets.
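
As a sketch of the kind of catalog search envisioned here, the following filters hypothetical data-set records by thematic, temporal, and spatial attributes:

```python
# Hedged sketch of searching a master catalog by thematic, temporal, and
# spatial attributes. The records and field names are hypothetical.
catalog = [
    {"theme": "vegetation", "begin": 1980, "end": 1995,
     "bbox": (-120.0, 34.0, -118.0, 36.0)},           # (west, south, east, north)
    {"theme": "precipitation", "begin": 1950, "end": 1990,
     "bbox": (-125.0, 32.0, -114.0, 42.0)},
]

def search(catalog, theme=None, year=None, point=None):
    def matches(rec):
        if theme is not None and rec["theme"] != theme:
            return False
        if year is not None and not (rec["begin"] <= year <= rec["end"]):
            return False
        if point is not None:
            west, south, east, north = rec["bbox"]
            if not (west <= point[0] <= east and south <= point[1] <= north):
                return False
        return True
    return [rec for rec in catalog if matches(rec)]

print(search(catalog, theme="vegetation", year=1990, point=(-119.0, 35.0)))
```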

Adequate metadata (documentation) are required by ecologists in order to acquire data necessary for addressing specific research objectives, to determine the suitability of the data for meeting specific objectives, and to facilitate data processing, analysis, and modeling. Ideally, metadata provide all relevant information required by scientists to understand the content, context, quality, structure, and accessibility of a data set. Significant effort in the geographical science community recently led to the development of a comprehensive set of Content Standards for Digital Geospatial Metadata (Federal Geographic Data Committee 1994). Ecological metadata typically do not conform to standards, except perhaps for those adopted within the context of a single project, although a set of generic descriptors for non-geospatial metadata in the ecological sciences has recently been proposed [25]. Societies, state and Federal agencies, and relevant scientific groups can benefit by developing, adopting, and endorsing metadata standards.
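
As a sketch of what such generic descriptors might look like in practice (the field names below are illustrative, not those of any adopted standard):

```python
# Hedged sketch of a generic metadata record in the spirit of the proposed
# descriptors [25]; every field name here is illustrative, not a standard.
dataset_metadata = {
    "title": "Long-term small-mammal trapping, hypothetical site",
    "originator": {"name": "J. Doe", "contact": "jdoe@example.edu"},
    "abstract": "Monthly live-trapping on permanent grids, 1985-1995.",
    "temporal_coverage": {"begin": "1985-01", "end": "1995-12"},
    "spatial_coverage": {"bounding_box": [-120.1, 34.4, -119.9, 34.6]},
    "taxonomic_coverage": ["Peromyscus maniculatus", "Microtus californicus"],
    "methods": "Sherman live traps, two consecutive nights per month",
    "quality_assurance": "double data entry; range checks on mass and location",
    "access_constraints": "contact originator; cite the source publication",
    "entities": [
        {"name": "captures",
         "attributes": [
             {"name": "mass", "units": "g", "type": "float"},
             {"name": "species", "units": None, "type": "text"},
         ]},
    ],
}
```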

The increasing complexity of environmental data requires that we assess the suitability of existing data models. For example, will newer object-oriented data models be better suited than the widely used relational data model to represent the complexity of environmental data? Or is an "associative" data model required that can store and retrieve data based on the association among observations (e.g., spatial, temporal, thematic, and taxonomic domains) instead of explicitly defined relationships? Irrespective of the form of the data model, ecological data acquisition, management, and integration require that there be a common, logical representation of the entities (objects) that comprise an ecological observation, and the relationships among those entities. In addition to the primary quantitative observations (raw data), data models should represent supporting information such as location, date, methods, QA/QC, audit trails, and other information that is generally associated with the metadata. Data models can serve as the basis for creating data reporting/transfer standards, database definitions, and distributed database architectures. Unfortunately, there has been little attention devoted by the ecological community to developing generic data models.
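
A minimal sketch of such a common, logical representation is given below, with illustrative class and field names; the point is that each observation carries its context (where, when, how, and with what quality checks) as first-class structure:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hedged sketch of a generic observation data model; class and field names
# are illustrative, not a proposed standard.

@dataclass
class Location:
    latitude: float
    longitude: float
    datum: str = "WGS84"

@dataclass
class Method:
    protocol: str                       # e.g., "fixed-radius plot"
    instrument: Optional[str] = None

@dataclass
class Observation:
    taxon: str                          # key into a taxonomic authority
    value: float
    units: str
    location: Location
    timestamp: str                      # ISO 8601
    method: Method
    qa_flags: list = field(default_factory=list)      # QA/QC outcomes
    audit_trail: list = field(default_factory=list)   # who changed what, when

obs = Observation("Pinus ponderosa", 34.2, "cm dbh",
                  Location(36.5, -118.7), "1995-07-02T10:30",
                  Method(protocol="fixed-radius plot"))
print(obs.taxon, obs.value, obs.units)
```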

Future database management systems will necessarily include stronger analytical capabilities. For example, establishing correlations across multiple data sets and multiple databases is an essential feature. However, the ability to analyze the massive amounts of information often involved (data assimilation) poses unique technological challenges. Data assimilation encompasses: searches for correlations in large datasets, incorporation of remotely sensed data to improve the predictive capability of environmental models, and mining of data sets to derive better input conditions for new calculations [29]. For these types of applications to be feasible, the integration of archival storage system technologies with database technologies and Internet browsing tools needs to become a reality. Computational ecologists can play an important role in defining and steering the technology integration that needs to take place.
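
As a small illustration of the first of these tasks, a correlation scan across variables pooled from two hypothetical data sets might look like this:

```python
import numpy as np

# Hedged sketch of one data-assimilation task named above: scanning for
# correlations across variables pooled from multiple data sets. The two
# sets below are hypothetical random stand-ins.
rng = np.random.default_rng(1)
set_a = rng.normal(size=(120, 5))   # e.g., 120 months x 5 climate variables
set_b = rng.normal(size=(120, 8))   # e.g., 120 months x 8 ecological variables

# Cross-correlation block between the two sets (columns are variables)
r = np.corrcoef(set_a, set_b, rowvar=False)[:5, 5:]
candidates = np.argwhere(np.abs(r) > 0.3)             # pairs worth inspecting
print(r.shape, len(candidates))
```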

Addressing questions related to global change, biodiversity, and sustainability requires that scientists have timely access to current and historic data, as well as gels, samples, specimens, maps, and other relevant material. Whereas the library and museum communities have devoted significant attention to paper and specimen archival procedures, less attention has been paid by the ecological community to long-term archival of data, maps, and non-traditional biological materials. Recently, the National Institute for the Conservation of Cultural Property reported on many of the materials conservation issues associated with collections [7]. Long-term data archival requires active management to preserve data integrity, direct linkage of data to high quality metadata, and flexible archival systems that are capable of adjusting to new technologies. Data storage formats should ideally be platform independent and easily convertible to standard interchange formats. Economies of scale may justify development and implementation of centralized data archival facilities. Regardless of whether archival storage is centralized or distributed, some redundancy (i.e., storage at multiple sites) is important to guard against losses.

3.2 EDUCATION

Effective scientific information management has become a central and critical function for projects, initiatives, and entire research communities. There are no existing university programs focused on elucidating the semantic and structural complexity of ecological data and their management. As a result of a general naiveté regarding these and related technological issues, the informatics infrastructure for ecology is poorly developed, resulting in significant constraints on management, archiving, integration, and interoperability. To achieve the standardization and integration that future large-scale efforts will demand, ecologically-oriented informatics specialists are required to build and maintain the computing infrastructure. Until ecological informatics is incorporated into the academic infrastructure, existing resources may be able to bridge the gap by providing relevant training to students and practicing ecologists via workshops, graduate apprenticeships, and other mechanisms. A wide gap currently exists between the information that managers and decision-makers require and the data that can be supplied by ecologists to address global change, biodiversity, and sustainability issues. Our ability to collect data is far outdistancing our capacity for managing, processing, and transforming those data into relevant information. The present situation may be analogous to wading through oceans of data and rivers of information in order to extract the small streams of knowledge and drops of wisdom that are required for wise management and decision-making.

Supercomputer centers may be able to help bridge these gaps by supporting workshops where ecologists, computer scientists, and managers and policy-makers examine approaches for better integrating data and research, supportive computational technologies, and management and policy needs [5], [31], [27], [13], [16], [7], [30].

3.3 PRIORITIES

Priority research areas include: standardization of data models and metadata to facilitate sharing, management, integration, archival, and understanding of data; data archival, including non-traditional data such as specimens, gels, maps, and other relevant materials; and improved data and database management systems. Adherence to common data models can significantly facilitate the integration of data from diverse sources as ecologists and other scientists attempt to scale their research to broader spatial, temporal, and thematic scales. Research initiatives leading to "standard" ecological data models are required. The development of these ecological data models can benefit significantly from past efforts to develop data models in systematics and other disciplines. Once available, generic data models can benefit:

1. The ecological research community, by providing a checklist of parameters to include in observations in order to facilitate data management, analysis, and integration efforts;

2. The computational science community, by providing real-world data structures that need to be accommodated in database management systems; and

3. The broader scientific community, by demonstrating how ecological data structures may be mapped and integrated with other discipline-specific data models.

A pilot project could be undertaken at supercomputer centers to establish an ecological data archive for a subset of data (e.g., valuable long-term data sets currently stored at single field stations or by single investigators, etc.). As part of the pilot study, archival policies and procedures could be developed (including curation activities), costs could be assessed, and mechanisms for commercializing or making the archives self-sustaining could be proposed and implemented. Museum and specimen data provide the basis for examining past and current biodiversity patterns. However, the majority of these data are not yet accessible in electronic form. New techniques and databases that support archival, accessibility, and analyses of these non-traditional data are needed. Supercomputer centers could play a vital role in linking the systematics and ecology communities with appropriate technologies via workshops and pilot projects.

With improved data availability and appropriate supporting technologies, it is likely that both systematists and ecologists will be able to address questions that extend far beyond traditional spatial, temporal, and thematic domains. Even the simplest of questions (e.g., what is where, and how much?) cannot be effectively addressed until we develop better mechanisms for providing access to the relevant museum, climate, remotely sensed, and monitoring and field data. For example, linking physical voucher specimens to literature records, and tracing the intellectual lineage of a species concept or identification through time, has been required in dealing with various exotic species in Hawaii.

Many existing and developing databases are simply too large to explore with existing technology. New tools are needed that will allow ecologists to browse these large databases, such as 2D and 3D browsers. Integration of database management systems, large mass storage systems, and two-dimensional and three-dimensional exploration tools will be required.

Research into filters that interpolate, extrapolate, aggregate, and disaggregate data into forms that can be used in analysis and integration activities is also needed. Similarly, there is a significant need for research into mechanisms that allow errors to be propagated through these filters, enabling the scientist, manager, or decision-maker to understand the uncertainty that is attached to an outcome. Quality control tools, such as those developed by ERIN, should be considered for the future.
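
For the simplest aggregation filter, the mean of independent measurements, the propagation is straightforward; the sketch below (with hypothetical plot-level values) shows the aggregated value arriving with its uncertainty attached:

```python
import math

# Hedged sketch of propagating error through a simple aggregation filter: for
# the mean of n independent measurements with standard errors sigma_i,
#     se(mean) = sqrt(sum(sigma_i^2)) / n,
# so the aggregated value reaches the analyst with its uncertainty attached.
# The plot-level values below are hypothetical.

def aggregate(values, sigmas):
    n = len(values)
    mean = sum(values) / n
    se = math.sqrt(sum(s * s for s in sigmas)) / n
    return mean, se

mean, se = aggregate([210.0, 195.0, 230.0], [12.0, 9.0, 15.0])
print(f"aggregate = {mean:.1f} +/- {se:.1f}")
```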

CHALLENGES

How should standards be developed and advocated within the ecological community?

What incentives can and should be provided for data sharing?

How can intellectual property rights be protected?

How can long-term funding for archives be provided?

What are appropriate methods for the publication of data?

What kinds of repositories are reasonable for long-term archival of data?

EDUCATION

Develop training programs in informatics relevant to ecology.

Establish curricula for use in workshops and seminars.

PRIORITIES

Standardize data models and metadata.

Develop long-term data archival methods, including those for physical specimens.

Improve data and database management systems.

TABLE I: Summary of Data Workgroup Results

4 MODELING

The broad ecological issues to be addressed relate to the larger issues of global change, biodiversity, and sustainability, identified as the most pressing scientific problems [21]. Specifically, these issues involve the need to incorporate variation among components that are interacting non-linearly; examine the interactions among processes operating at varying scales of time, space, and organizational complexity; and relate patterns to the processes that generate and maintain them [21],[17]. These research topics hold the key to developing the mechanistic understanding fundamental to predictive ecology and to the wise management of our natural resources.

Closely related to the above is the need to move from small scale to large scale, and from individuals to aggregations. The problem of aggregation and simplification is central and involves the interplay between experimentation in real systems, experimentation in computer models, and analytic approximation techniques. Moment closure methods, Lagrangian-Eulerian translations, and other methods to scale from small scales or individuals to landscapes or aggregations can be carried out only with the help of high-speed computation, which provides a powerful and essential set of new tools [18]. Statistical considerations link the challenges of parameter estimation, error propagation, sensitivity analysis, and model reduction, each an issue of fundamental importance [22].

The limitations to predictability, well understood since Lorenz's seminal paper [20], make clear the importance of characterizing the degree of uncertainty and of advancing decision making given that uncertainty. In this, adaptive management, monitoring, modeling, and optimization techniques are interlocked. Cross-cutting issues underscore the importance of model flexibility, so that questions not yet imagined can be addressed. Transients, ecotones, phase transitions, and nonlinear phenomena, such as chaos and multiple domains of attraction, need to be accommodated. The latter, known to be important, for example, in understanding evolutionary processes [14],[15], will fundamentally constrain responses to global change, and are central to our ability to manage economic systems and environmental change.

The need exists, from general circulation models to models of organism responses, to enhance our capabilities to achieve greater spatial and temporal resolution. However, a trade-off exists between increased resolution and increased understanding of how information is transferred across scales. For example, models with too much detail may be much less reliable than those that incorporate only "relevant" detail. Computer simulation is a powerful tool to determine the appropriate level of detail.

Although it is natural to focus on the applied challenges, the key to addressing those challenges [21] lies in understanding the basic mechanisms governing the structure and functioning of natural populations, communities, and ecosystems. These mechanisms raise ecological and evolutionary issues worth examining in their own right and provide the foundation for applied decision making. How are communities organized in space and time? What maintains biodiversity, and what are the implications for ecosystem function and ultimately for how biodiversity should be measured? Related challenges also exist in epidemiology and population genetics, and hold the key to the development of principles for moving ecology from a disorganized collection of case studies to an applied science.

Because many ecological processes occur on slow time scales, ecologists often have to use models of underlying processes to explain observed patterns. Inference from pattern to process has been limited to analytical models because of the high computational burden posed by relaxing assumptions. One important example in conservation biology involves the use of genetic data to make inferences about dispersal rates. This sounds simple to do: models of selectively neutral genes predict that, if dispersal rates are low, genetic relatedness will be correlated spatially. Available analytical models lend some insight but suffer from crucial assumptions known to be false (e.g., equal population sizes) in real systems. Until recently, ecologists had to use these models because sufficient computational power did not exist to run stochastic, individually based, spatially explicit, evolutionary models [6]. We now have sufficient power to evaluate some of these models. However, to obtain general insight (and improve our computational efficiency), we need to use these complex models to develop simpler models that capture their essential dynamics and provide robustness.

FIGURE 3: Depiction of Yellow Pine distribution throughout the California Sierra Nevada Bioregion using the Jaccard similarity index. The images were produced using a GIS (geographic information system) by Professor Frank Davis, University of California, Santa Barbara

FIGURE 4: Visual model of solar heating in the Midwest derived from National Geophysical Data Center data, courtesy of Professor Stuart Gage, Michigan State University

4.1 CHALLENGES

Specific challenges relate to the ability to interface models that operate on different scales, describe different systems, or address different issues. Examples include physical/biological or ecological/socioeconomic linkages, the integration of subgrid scale processes, models for management and control, trans-boundary problems, and the dialogues between dynamic simulation models and geographic information systems.

Geographic information systems (GIS) have emerged as powerful tools to manage, analyze, and display spatially structured data. The current generation of GIS software, however, is limited to storing and manipulating static, two-dimensional data. An objective of both the research and vendor communities is to make GIS more dynamic by creating linkages between simulation models and GIS databases. The benefits of doing this are clear, but the methods by which to link them are less so. The linkage can be approached in two ways. The first is to develop simulation engines within the GIS software and use them to develop models. The second is to create tools to access the GIS data layers from outside the GIS. Such tools range from simple ways to import and export data to more sophisticated tools to access the GIS data structures directly from the external application. Each approach has its benefits and problems, and it is likely that both approaches will be promoted.

Ecological models are developed for specific reasons and to use specific data sets. How can such models be used in an integrated simulation? The models must be able to communicate with each other, either directly through interprocess communication or indirectly through a database. This implies some form of standardized I/O structure. In addition, the data requirements may differ from one model to another. If a spatial model is coupled with a non-spatial model, for example, the representation of the data structure needs to be established.

Further, model design carries implicit assumptions as to spatial and temporal resolution. If a model were designed for high-resolution spatial data, problems might result if it is given low-resolution, aggregate data as input.
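
The following sketch illustrates one such coupling under wholly hypothetical dynamics: a spatial vegetation model and a non-spatial herbivore model exchange data through standardized output dictionaries, with the spatial field aggregated to the scalar the non-spatial model expects:

```python
import numpy as np

# Hedged sketch: a spatial model and a non-spatial model coupled through
# standardized outputs() dictionaries. All dynamics below are hypothetical.

class SpatialVegetation:
    def __init__(self, shape=(50, 50)):
        self.biomass = np.full(shape, 100.0)          # g/m^2 in each grid cell
    def step(self, dt, grazing_rate):
        self.biomass *= 1.0 + (0.02 - grazing_rate) * dt
    def outputs(self):
        # Aggregation: the spatial field is reduced to the scalar that the
        # non-spatial model expects as input.
        return {"mean_biomass": float(self.biomass.mean())}

class NonSpatialHerbivores:
    def __init__(self, n=10.0):
        self.n = n                                    # population size
    def step(self, dt, mean_biomass):
        self.n *= 1.0 + 0.001 * (mean_biomass - 100.0) * dt
    def outputs(self):
        return {"grazing_rate": 0.0001 * self.n}

veg, herb = SpatialVegetation(), NonSpatialHerbivores()
for _ in range(100):                                  # daily steps, for example
    veg.step(1.0, herb.outputs()["grazing_rate"])
    herb.step(1.0, veg.outputs()["mean_biomass"])
print(veg.outputs(), herb.outputs())
```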

Moreover, models that run on different time advance intervals require a synchronization procedure to assure that the data are not corrupted during the simulation. One method to synchronize disparate sub-models and procedures that should be explored is the use of persistent data objects derived from object-oriented design techniques. An ongoing project using object-oriented approaches to link simulation models of various trophic levels within an ecosystem to dynamic GIS data is ATLSS (Across Trophic Level System Simulation), being developed for the Everglades of South Florida.
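
A minimal sketch of such a synchronization procedure (illustrative only, not the ATLSS design): the coordinator advances simulated time in increments of the smallest step and calls each sub-model only when its own interval has elapsed:

```python
# Hedged sketch of synchronizing sub-models with different time steps.
# The sub-models here are empty stand-ins; see the coupling sketch above
# for what a step function might actually do.

def run(models, t_end):
    """models: list of (step_interval, callable(t)) pairs."""
    dt = min(interval for interval, _ in models)
    next_due = {i: 0.0 for i in range(len(models))}
    t = 0.0
    while t < t_end:
        for i, (interval, step) in enumerate(models):
            if t >= next_due[i]:          # this model's interval has elapsed
                step(t)
                next_due[i] = t + interval
        t += dt

# e.g., a daily weather driver and a weekly vegetation update
run([(1.0, lambda t: None), (7.0, lambda t: None)], t_end=365.0)
```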

Models that were designed and tested as stand-alone applications may produce unpredictable results when put into multi-model dynamic simulations. This situation requires additional testing in the new environment so that the overall simulation provides reasonable output.

Meta-techniques for relating different modeling efforts should be explored. In some cases (e.g., global change, estuarine prediction), multiple models have been developed to address the same issue with the same data. For these, it is vital to develop reference data sets for validation. For example, the VEMAP project compares different versions of terrestrial ecosystem models using the same set of input data. In other situations, different systems with similar features are being studied, for example to examine the effects of fragmentation. For these, it would be valuable to explore which aspects, whether model components or techniques for analysis, could be modularized. For teaching and information transfer, such modularity offers great advantages.
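
A sketch of the comparison step, with hypothetical model outputs standing in for, say, annual net primary production, scored against a common reference data set:

```python
import numpy as np

# Hedged sketch of the comparison step: several models run on the same inputs,
# each scored against a common reference data set. All values are hypothetical
# stand-ins for, e.g., annual net primary production at five sites.
reference = np.array([520.0, 610.0, 480.0, 700.0, 560.0])
model_runs = {
    "model_A": np.array([500.0, 640.0, 470.0, 690.0, 555.0]),
    "model_B": np.array([560.0, 580.0, 510.0, 650.0, 600.0]),
}

for name, predicted in model_runs.items():
    rmse = float(np.sqrt(np.mean((predicted - reference) ** 2)))
    bias = float(np.mean(predicted - reference))
    print(f"{name}: RMSE = {rmse:.1f}, bias = {bias:+.1f}")
```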

One aspect of ecological modeling that demands further study involves how to modularize or encapsulate model components, which would improve code maintenance, flexibility, and documentation. This is not to imply that models should be plug-and-play, that is, generalized or universal software that can be inserted readily into other models or simulation systems. On the other hand, the use of modules and templates may promote improved understanding of model structure and performance.

One way to foster the design and implementation of this modular view would be to adopt the principles developed to support the implementation of model-database management systems. These systems would provide the basis upon which researchers could engage in visual, interactive modeling exercises.

4.2 EDUCATION

The effective use of models in ecology demands attention at several levels of the educational system, particularly as regards computational methodology. At the K-12 through undergraduate levels, we encourage the application of existing technology, such as providing a comprehensive guide to existing resources through the Internet. The Mathematics Archive site, with its focus on quantitative training across the life sciences, has taken a first step. At the same time, particularly at the undergraduate level, modules that may be incorporated easily within life science courses should be developed to enhance students' appreciation for the utility of computational methods. The World Wide Web, in particular, offers opportunities to train educators by providing sites for ecological models and potentially interactive simulation.

In graduate education, one particular goal should be to foster collaborations between students in ecology and computational science. In addition, collaborations between theoreticians and field ecologists will stimulate research in ecology. These collaborations could be supported in a number of ways, including workshops at national centers and training courses that bring together teams of students and faculty from the same institution. Similarly, multidisciplinary courses that focus on particular problems can provide opportunities for ecology students to collaborate with computational science students. Advisors should encourage students to broaden their backgrounds by taking courses in areas that complement their ecological training, such as computer programming, GIS, and other computational science courses. A convenient, state-of-the-art modeling environment would ease the application of modern computational methods in such courses.

At the postdoctoral and faculty research level, further development of a software environment for ecologists without extensive modeling background is appropriate. To make such environments accessible, enhancements to existing visual, interactive modeling languages, such as STELLA, continue to be needed. At the same time, these individuals need to be educated in the different purposes to which models can be applied and in the potential misuse of models.

As a means to foster collaborations, funding agencies should support interdisciplinary work and professional societies should sponsor symposia at national meetings to illustrate the success of such collaborations. Finally, because decision makers involved in environmental issues often do not have a strong scientific background, models that highlight the consequences of their actions should serve as excellent educational tools.

4.3 PRIORITIES

The barriers between field and theoretical ecology need to be eliminated by enhancing the ability of ecologists to create complex models without needing to learn to program in lower-level languages or to depend on extensive collaboration with a computer scientist. One way to achieve this goal is to create a visual (graphical interface), interactive modeling environment based on decomposing models into components or modules. Each module should include information on what it does, how it does it, recommended use, and precluded use. Such environments are well suited to developing and using object-based approaches to programming. But what parts of ecosystem models can be modular? One starting point is to create a set of Ecological Recipes (in the Numerical Recipes style). Examples are demographic computations, trend analysis, genetic statistics and distance measures, and intrinsic rate-of-growth calculations. These modules are available in packages, but incorporating them into new models will require recoding. The insight gained through use of these straightforward examples can guide the choice of other components that might be modularized effectively. Demonstration projects that integrate models at different temporal and spatial scales, such as interfacing social and ecological system models, should also be undertaken.
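
As a concrete example of one such recipe, the following hypothetical module computes the intrinsic rate of increase from a life table by solving the Euler-Lotka equation with bisection; the life-table values are illustrative:

```python
import math

# Hedged sketch of one "Ecological Recipe": the intrinsic rate of increase r
# from a life table, found by solving the Euler-Lotka equation
#     sum_x exp(-r * x) * l(x) * m(x) = 1
# by bisection (the left-hand side is strictly decreasing in r).

def intrinsic_rate(ages, survivorship, fecundity, lo=-2.0, hi=2.0, tol=1e-10):
    def euler_lotka_minus_one(r):
        return sum(math.exp(-r * x) * l * m
                   for x, l, m in zip(ages, survivorship, fecundity)) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if euler_lotka_minus_one(mid) > 0.0:
            lo = mid            # root lies above mid
        else:
            hi = mid            # root lies at or below mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative life table: survivorship l(x) and fecundity m(x) by age x
ages = [1, 2, 3, 4]
lx = [0.8, 0.5, 0.3, 0.1]
mx = [0.0, 1.2, 1.5, 1.0]
print(f"r = {intrinsic_rate(ages, lx, mx):.4f}")
```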

CHALLENGES

How can we interface independent models (e.g., biological and economic) and scales (e.g., subgrid processes)?

How should dynamic linkages with GIS be developed?

Will encapsulation or modularization of modeling components assist a broader and consistent application of modeling?

EDUCATION

Foster collaboration between computer science and ecology students.

Improve software for ecologists with limited modeling background.

PRIORITIES

Enhance the ability of ecologists to create complex models.

Create a collection of examples to be used as exemplars.

TABLE 2: Summary of Modeling Results

5 VISUALIZATION

Visualization can be broadly defined as any method that presents information in a manner that facilitates the conceptualization of scientific phenomena. This includes both exploratory visualization and visual communication. It is used in many scientific disciplines and is an enabling technology concerned with facilitating scientific discovery and communication. Visualizations are commonly used to explore the nature of data, to reveal patterns and anomalies in data and associated analytical methods, and to document results. Visualization is a powerful and effective adjunct to numerical and statistical methods of investigating empirical and theoretical experiments. The sensitivity of the eye to patterns provides an effective means of communicating scientific ideas and allows the expression of valuable mental constructs that may be entirely incommunicable via language.

Through continuing advances in computational technology, the ability to create multidimensional graphical representations of empirical data and modeling results is readily and affordably available, albeit usually difficult to learn. Visualization repertoires routinely include portraying three and four dimensions (including time), the use of color, animation, interaction with data, and so forth. A major advantage of this richness is the capability to compellingly represent concepts, trends, and patterns that are inherently difficult to express linguistically or through other representational means. Although software for visualization and animation is available across the full spectrum of computer platforms, ecologists make limited use of these systems compared, for example, with chemists and physicists. This disparity is believed to be the combined result of behavioral and cultural inertia, the difficulty of using the more powerful software packages, and the lack of conventions for multi-dimensional graphical representations.

In ecology, traditional methods of scientific analysis and communication rely on text and two-dimensional, monochromatic representations of data and concepts. To a non-trivial degree, the cost of publishing color figures also discourages the effective use of color in many journal publications. This limits the breadth, depth, and efficiency of information transfer. Visualization scientists want to encourage the ecological community to capitalize on the concepts of visual thinking and to apply these concepts to the study of ecological processes. This must be done by making the best visual technology available and useful to the individual scientist. Toward this end, ecologists and visualization experts need stronger relationships to develop visualization methods relevant to ecological problems and accessible to the practicing ecologist.

5.1 CHALLENGES

The most important challenges to visualization technology, in this context, are to provide the means to increase understanding of ecological processes through pictures and to decrease the difficulty of producing the visualizations. In order to do this, it is necessary to develop standard definitions of visualization concepts and tools in the context of ecology. For example, what are the appropriate methods of visualization for different types of analyses, analogous to the variety of statistical methods for qualitative versus quantitative data, or categorical versus continuous data? What classes of errors can and should be addressed prior to visualization, e.g., changes in sampling methods? How does a researcher determine what types of visualizations are useful for the studies at hand?

Accepted standards and protocols exist within the field of statistics; however, in this newly emerging field of ecological visualization, there is much disparity amongst researchers, further exacerbated by inconsistent use of terms amongst computer specialists, both academically and commercially. Important examples of general efforts in this direction can be found in [35], [8], [3], [19]. There are many others. A system of protocols and standards addressing this issue is warranted to enhance diffusion of the technology throughout the ecological arena. This suggests the need to develop and implement protocols and standards to allow ready interpretation and consistency across applications within ecological domains. Similarly, this effort should also address communication across disciplines, for example, preserving the understanding of the visualization for audiences with diverse backgrounds, including managers. The visualization should promote understanding of and trust in the analysis by appropriately representing error or sensitivity as well as space and time. Users must be able to change quickly between temporal and spatial scales of analysis depending on the context of the analysis. In certain circumstances the landscape-level view might be appropriate, yet this view may need to be changed quickly to an individual view as a dynamic process unfolds.

Ecological models are often built on datasets that are sparse because data are costly and difficult to collect. Various methods of interpolation (e.g., rectilinear gridding) are often needed by the researchers, but the assumptions of these models and their error characteristics must also be made clear. One well-established principle of visualization is the use of plots to detect anomalies and outliers within datasets. Research must be conducted to determine a set of visualization capabilities that are designed specifically to support the visual detection of these errors and anomalous model results in visual presentations.
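
The following sketch illustrates both points with hypothetical sample locations: inverse-distance weighting interpolates sparse samples onto a rectilinear grid, and leave-one-out residuals are retained so anomalies can be inspected rather than hidden by the smoothed surface:

```python
import numpy as np

# Hedged sketch: inverse-distance-weighted (IDW) interpolation of sparse,
# hypothetical samples onto a rectilinear grid, plus leave-one-out residuals
# for visually flagging anomalies and outliers.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(25, 2))                # sparse sample locations
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=25)     # sampled values

def idw(points, values, targets, power=2.0, eps=1e-12):
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                     # nearby samples dominate
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Interpolate onto a 50 x 50 rectilinear grid
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = idw(xy, z, np.column_stack([gx.ravel(), gy.ravel()])).reshape(50, 50)

# Leave-one-out residuals: large values flag outliers or interpolation misfit
resid = np.array([z[i] - idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i+1])[0]
                  for i in range(len(z))])
print(grid.shape, "largest residuals at samples:", np.argsort(-np.abs(resid))[:3])
```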

5.2 EDUCATION

Visualization is a powerful and persuasive tool. Consequently, there is substantial potential for misrepresentation of data, both deliberate and accidental. Researchers interested in the visualization of ecological model results can draw upon basic principles of graphical design to improve communication. Among the most important dimensions of visualization is the matching of specific types of visual symbols to particular representations: displays typically draw upon a set of visual variables (size, color, shape, and so on) to represent data, and the visualization literature can provide guidance in matching phenomena to their symbolization (see the sketch below). Social resistance to these new methods must be countered by broad outreach efforts to educate practitioners. This requires a move toward more rapid presentation of data and results via these methods; we expect the role of the World Wide Web and successor technologies to be extremely important in this respect. Computational ecologists should also pay close attention to emerging applications that allow for network-distributed, run-time-linked, object-oriented programming languages (e.g., Java, ScriptX).
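As a concrete, and entirely hypothetical, instance of matching visual variables to data, the sketch below encodes one quantitative magnitude with symbol size and a second continuous variable with color; the survey variables are invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical survey records: location, abundance, and temperature.
lon = rng.uniform(-120, -115, size=30)
lat = rng.uniform(32, 35, size=30)
abundance = rng.integers(1, 200, size=30)   # a magnitude -> symbol size
temperature = rng.uniform(10, 25, size=30)  # a continuous variable -> color

# Size and color are distinct visual variables, so two data dimensions
# can be read from a single map without interfering with each other.
pts = plt.scatter(lon, lat, s=abundance, c=temperature, cmap='coolwarm')
plt.colorbar(pts, label='Temperature (deg C)')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('Abundance (size) and temperature (color)')
plt.show()
```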

5.3 PRIORITIES

Research priorities in visualization relate to the tools necessary to visually examine the issues of global change, biodiversity, and sustainability. These issues require increasingly effective methods of exhibiting spatial and temporal patterns and trends, including, for example, geospatial mapping, change detection, temporal animation, and methods for the display of N-dimensional data. Perhaps the fastest-growing area of visualization involves the preparation of quantitative maps. The problems of error propagation and display must be addressed and publicized within the ecological community, with recommendations for the appropriate representation of errors.

Simplified tools implementing common and future image file formats should be further developed and disseminated. It would also help considerably to establish representative visual examples or standards for the colormapping of quantitative variables, along with conventions for labeling and for the declaration of intellectual property rights.
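One way such a colormapping convention could be packaged is sketched below with a hypothetical helper; the function name, fixed value range, and units are ours for illustration, not a proposed standard:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

def standard_map(ax, grid, vmin, vmax, units, cmap='viridis'):
    """Render a quantitative grid with a fixed colormap and a fixed
    value range, and label the colorbar with units, so that maps of
    the same variable remain directly comparable across figures."""
    image = ax.imshow(grid, cmap=cmap, norm=Normalize(vmin, vmax),
                      origin='lower')
    ax.figure.colorbar(image, ax=ax, label=units)
    return image

# Usage: two maps of the same variable share one scale by construction.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
standard_map(ax1, np.random.rand(20, 20) * 30, vmin=0, vmax=30,
             units='Biomass (g/m^2)')
standard_map(ax2, np.random.rand(20, 20) * 30, vmin=0, vmax=30,
             units='Biomass (g/m^2)')
plt.show()
```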

These could be disseminated through standard repositories on the World Wide Web that provide the latest versions of the appropriate tools, facilitating collaboration through a common frame of reference and lexicon. Guidelines for the application of these standards should be developed and broadly disseminated. For example, the development of white papers or FAQs (Frequently Asked Questions) housed at well-known sites addressing these issues is a natural design objective.

We believe formal links should be established between the community of computational ecologists and the appropriate SIGs (special interest groups) in the IEEE, ACM, IETF, or other relevant organizations where emerging standards in the areas of visualization are defined and discussed.

One area that may need to be explored is the establishment of a typology of map types and graph types that could serve as a convention for ecologists. We suggest special workshops or symposia focused on contemporary visualization theory and technology and the potential applications to ecology.

CHALLENGES

What are appropriate types of visualization for various classes of analysis?

How can current methods of producing visualizations be further automated?

What classes of errors can and should be addressed prior to visualization?

How should errors be visualized?

How does a researcher determine what types of visualizations are useful for a given study?

EDUCATION

Define appropriate use of symbology.

Establish limitations of interpolation.

Describe common pitfalls in the presentation of data.

PRIORITIES

Standardize methods for geospatial mapping.

Standardize methods for change detection.

Standardize methods for temporal animation.

Develop methods for display of N-dimensional data.

TABLE 3: Summary of Visualization Workgroup Results

PARTICIPANTS AND CONTRIBUTORS

MIKE ALLEN mallen@sciences.sdsu.edu

ALLEN ALLISON allison@uhunix.uhcc.hawaii.edu

MARC ARMSTRONG armstrng@blue.weeg.uiowa.edu

PETER ARZBERGER parzberg@ucsd.edu

GEORGE BALL gball@nexus.srnr.arizona.edu

JAMES BEACH jbeach@nsf.gov

TED CASE case@jeeves.ucsd.edu

WEIXIN CHENG wcheng@maxey.dri.edu

JACK CORLISS Corliss@bio2.com

FRANK DAVIS fd@ventana.GEOG.UCSB.EDU

ED DELAND edeland@anes.ucla.edu

LARRY DEYSHER 71620.2617@compuserve.com

GORDON FOX gfox@ucsd.edu

STUART GAGE 23027shg@msu.edu, sg1410@esalsun10.ent.msu.edu

LOU GROSS gross@math.utk.edu

STEVE HARPER steve_harper@qms1.life.uiuc.edu

JOHN HELLY hellyj@sdsc.edu

DOUG JOHNSTON dmj@ncsa.uiuc.edu

GERRY KEY key@coralsea.cscnet.com

TOM KIRCHNER tkirchner@lternet.edu

PETER KOCHEVAR kochevar@sdsc.edu

SIMON LEVIN simon@eno.princeton.edu

RICHARD MARCIANO marciano@sdsc.edu

WILLIAM MICHENER wmichene@longleaf.jonesctr.org

SCOTT MILLER scottm@bishop.bishop.Hawaii.Org

STEWARD PICKETT STAPickett@aol.com

YE QI yqi@tc.cornell.edu

MARK SCHILDAUER schild@alishaw.UCSB.EDU

PAUL SCHWARZ schwarz@tc.cornell.edu

STEPHANIE SEXTON sexton@vertigo.ucsd.edu

BARBARA TAYLOR taylor@jeeves.ucsd.edu

GREG WITTEMAN gwitteman@ucsd.edu

REFERENCES

[1] J. Armstrong. The funding base for Australia's biological collections. Australian Biologist, 5(1):80-88, 1992.

[2] T. A. Boden, D. P. Kaiser, R. J. Sepanski, and F. W. Stoss (eds.). Trends 93: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, 1994.

[3] P. A. Burrough. Principles of Geographical Information Systems for Land Resources Assessment. Clarendon Press, Oxford, 1986.

[4] National Research Council. Understanding Marine Biodiversity: A Research Agenda for the Nation. National Academy Press, 1995.

[5] F. W. Davis. Information systems for conservation research, policy, and planning. BioScience, Supplement 1995:36-42, 1995.

[6] D. L. DeAngelis and L. J. Gross. Individual-based Models and Approaches in Ecology. Routledge, Chapman and Hall, New York, 1992.

[7] W. D. Duckworth, H. H. Genoways, and C. L. Rose. Preserving Natural Science Collections: Chronicle of Our Environmental Heritage. National Institute for the Conservation of Cultural Property, Washington, DC, iv + 140 pp., 1993.

[8] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, 1990.

[9] National Science Foundation. Grant Proposal Guide, NSF 94-2, p. 21, January 1994.

[10] F. B. Golley. A History of the Ecosystem Concept in Ecology: More than the Sum of Its Parts. Yale University Press, New Haven and London, 1993.

[11] J. R. Gosz. Sustainable biosphere initiative: Data management challenges. In Environmental Information Management and Analysis: Ecosystem to Global Scales. Taylor and Francis, London, England, 1994.

[12] K. Gross, E. Allen, C. Bledsoe, R. Colwell, P. Dayton, M. Dethier, J. Helly, R. Holt, N. Morin, W. Michener, S. T. A. Pickett, and S. Stafford. Report of the Committee on the Future of Long-term Ecological Data (FLED), 1995.

[13] R. W. Hodges. Biosystematic information on terrestrial arthropods program. Pacific Science Association Information Bulletin, 45(3-4):15-16, 1993.

[14] F. Jacob. Evolution and tinkering. Science, 196:1161, 1977.

[15] S. A. Kauffman and S. A. Levin. Towards a general theory of adaptive walks on rugged landscapes. J. Theoret. Biol., 149(467), 1987.

[16] B. and D. L. Hawksworth (eds.). The Biodiversity of Microorganisms and the Role of Microbial Resource Centres. World Federation for Culture Collections, Braunschweig, Germany, 104 pp., 1994.

[17] S. Levin. Mathematics and biology: The interface. Report of Workshop, PUB-701, vi + 96 pp., 1992.

[18] S. Levin. Frontiers in ecosystem science. Frontiers in Mathematical Biology, Lecture Notes in Biomathematics, v. 100:381-389, 1994.

[19] T. M. Lillesand. Remote Sensing and Image Interpretation. John Wiley and Sons, 1989.

[20] E. N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20(130), 1963.

[21] J. Lubchenco et al. The sustainable biosphere initiative: An ecological research agenda. A report from the Ecological Society of America. Ecology, 72:371-412, 1991.

[22] D. Ludwig. Uncertainty and fisheries management. Frontiers in Mathematical Biology (S. A. Levin, ed.), 1995.

[23] W. K. Michener, A. B. Miller, and R. Nottrott. Long-term Ecological Research Core Data Set Catalog. Belle W. Baruch Institute for Marine Biology and Coastal Research, University of South Carolina, Columbia, South Carolina, 1990.

[24] W. K. Michener, E. R. Blood, K. L. Bildstein, M. M. Brinson, and L. R. Gardner. Climate change, hurricanes and tropical storms, and rising sea level in coastal wetlands: Projected ecological impacts and research challenges. Ecological Applications (in press).

[25] W. K. Michener, J. Brunt, J. Helly, T. Kirchner, and S. Stafford. Non-geospatial metadata for ecology. Ecological Applications (submitted), 1996.

[26] Interactions and collaboration in global change across the social and natural sciences. Ambio, 23(1):17-24, 1994.

[27] S. E. Miller. Biological collections databases available on Internet. Pacific Science Association Information Bulletin, 45(3-4):14-15, 1993.

[28] B. Mishkin. Urgently needed: Policies on access to data by erstwhile collaborators. Science, 270:427-435, 10 November 1995.

[29] R. W. Moore. Enabling petabyte computing. In Proceedings of the NII 2000 Committee of the Computer Science and Telecommunications Board of the National Research Council, May 1995.

[30] E. S. Nielsen and J. G. West. Biodiversity research and biological collections: Transfer of information. In P. L. Forey et al. (eds.), Systematics and Conservation Evaluation. Clarendon Press, Oxford, 101-121, 1994.

[31] P. L. Osborne. Biological and cultural diversity in Papua New Guinea: Conservation, conflicts, constraints and compromise. Ambio, 24:231-237, 1995.

[32] E. C. Pielou. Mathematical Ecology. John Wiley and Sons, 1977.

[33] R. Stone. Long-term NSF network urged to broaden scope (National Science Foundation's Long Term Ecological Research network). Science, 262(5132):334-335, October 15, 1993.

[34] N. Tosta. Data sharing. Geo Info Systems, 2(7):32-34, 1992.

[35] E. R. Tufte. Envisioning Information. Graphics Press, Cheshire, Connecticut, 1990.

JOHN HELLY

619 534 5060

619 534 5113 Fax

US Mail:

San Diego Supercomputer Center

P.O. Box 85608, San Diego, CA 92186-5608

Deliveries:

10100 John Jay Hopkins Drive

La Jolla, CA 92093-0505

SAN DIEGO SUPERCOMPUTER CENTER

A National Laboratory for Computational Science & Engineering

P.O. Box 85608, San Diego, CA 92186-5608

619 534 5100  consult@sdsc.edu  http://www.sdsc.edu/
