Facilities, Equipment and Other Resources
The Center for Computational Science (CCS) has offices on all three campuses of the
University of Miami: the Miller School of Medicine, the Rosenstiel School of Marine and
Atmospheric Science, and its main operations on the Coral Gables campus at the
Gables One Tower and the Ungar Building. Each location is equipped with
dual-processor workstations and essential software applications. CCS has three dedicated
conference rooms and communication technology for interacting with advisors (phone, web,
and video conferencing), plus a Visualization Lab with 2D and 3D displays (located in
the Ungar Building).
CCS systems are collocated at the Verizon Terremark NAP of the Americas (NOTA or
NAP). The NAP data center in Miami is a 750,000-square-foot, purpose-built, Tier IV
facility with N+2, 14-megawatt power and cooling infrastructure. The equipment floors
start at 32 feet above sea level, and the roof is sloped to drain floodwater in excess of
100-year storm intensity, aided by 18 rooftop drains. The building is designed to
withstand a Category 5 hurricane, with approximately 19 million pounds of concrete roof
ballast and 7-inch-thick steel-reinforced concrete exterior panels, and it lies outside the
FEMA 500-year designated flood zone. The NAP uses a dry-pipe fire-suppression
system to minimize the risk of damage from leaks.
The NAP has a centrally located Command Center staffed by security personnel 24×7
and monitored by security sensors. To connect the University of Miami with the NOTA
data center, UM has invested in a Dense Wavelength Division Multiplexing (DWDM)
optical ring linking all of its campuses. The CCS Advanced Computing resources occupy
a discrete, secure wavelength on the ring, which provides a distinct 10-gigabit HPC
network to all UM campuses and facilities.
Given the University of Miami's past experience with several hurricanes and other
natural disasters, we anticipate no service interruptions due to facilities issues. The NAP
was designed and constructed for resilient operations; UM has gone through several
hurricanes, power outages, and other severe weather crises without any loss of power or
connectivity to the NAP. The NAP maintains its own generators with a flywheel power
crossover system, which ensures that power is not interrupted when the switch is made
to auxiliary power. The NAP maintains a two-week fuel supply (at 100% utilization) and is
on the primary list for fuel replenishment due to its importance as a data-serving facility.
In addition to hosting the University of Miami's computing infrastructure, the NAP of the
Americas is home to assets of US SOUTHCOM, Amazon, eBay, and several
telecommunications companies. The NAP in Miami carries 97% of the network
traffic between the US and Central/South America. The NAP is also the local access
point for Florida LambdaRail (FLR), which is gated to Internet2 (I2) to provide full
support for the I2 Innovation Platform. The NAP also provides TLD information to the
DNS infrastructure and is the local peering point for all networks in the area.
The University of Miami has made the NAP its primary data center, occupying a
significant footprint on the third floor. All UM-CCS resources, clusters, storage,
and backup systems currently run from this facility, which serves all major UM campuses.
Equipment
Advanced Computing
UM maintains one of the largest centralized academic cyberinfrastructures in the
country, with numerous assets.
The Advanced Computing Team has been in operation since 2007. Over that time, the
core has grown from no advanced computing cyberinfrastructure into a regional
high-performance computing environment that currently supports more than 1,500 users,
220 TFlops of computational power, and more than 3 petabytes of disk storage. The center's
latest system acquisition, an IBM iDataPlex system, was ranked number 389 on the
November 2012 Top 500 Supercomputer Sites list.
At present, CCS maintains several clusters and application servers:
Pegasus – CentOS 6.5-based batch/interactive compute cluster consisting of:
• 10,000 cores (IBM iDataPlex/Blade system)
• Diverse operating environments (Intel Xeon, Intel Phi, AMD processors)
• 19 TB of RAM
• Dedicated graphical nodes (Pegasus-gui)
• Dedicated data transfer nodes with direct connection to I2 (Aspera, GridFTP, SFTP)
• 250+ programs, compilers, and libraries

Jabberwocky – CentOS 6.5-based interactive visualization cluster
• 184 cores
• 1 TB RAM
• Graphical access from all nodes
• Firewalled access to all resources
• 1 PB+ of storage

Elysium – CentOS 6.5-based secure data processing cluster (HIPAA/IRB compliant)
• 32 cores
• 128 GB RAM
• Separate VLAN
• Restricted access (MAC authentication/user ACLs enforced)
• Full auditing and attestation
• 500 TB of storage

DAVID (Distributed Access for Visualization and Interaction with Data) Cloud
• 32 cores
• 128 GB RAM
• CIFS/NFS/FTP/HTTP access
• 500 TB of storage
Data Storage
CCS offers an integrated storage environment for both structured (relational) and
unstructured (flat file) data. These systems are specifically tuned for CCS' data types and
application requirements, whether serial access or highly parallelized. Each
investigator or group has access to its own area and can present its data through
a service-oriented architecture (SOA) model. Researchers can share their data via
access control lists (ACLs), which ensure data integrity and security while allowing
flexibility for collaboration.
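As an illustration, an investigator can grant a single collaborator read access to a
project area without widening group or world permissions. The following minimal sketch
assumes standard POSIX ACL tooling driven from Python; the path and username are
hypothetical placeholders, and the exact CCS sharing workflow may differ.

    # Minimal sketch, assuming POSIX ACL tools (setfacl/getfacl) are available.
    # The project path and username below are hypothetical placeholders.
    import subprocess

    project_dir = "/nethome/projects/example_project"  # hypothetical path
    collaborator = "jdoe"                              # hypothetical username

    # Grant read (and directory-traverse) access to one collaborator,
    # recursively, without changing group or world permissions.
    subprocess.run(
        ["setfacl", "-R", "-m", f"u:{collaborator}:rX", project_dir],
        check=True,
    )

    # Inspect the resulting access control list.
    subprocess.run(["getfacl", project_dir], check=True)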
CCS offers structured data services through the most common relational database
formats, including Oracle, MySQL, and PostgreSQL. Investigators and project teams
can access their space through SOA and utilize their resources with the support of an
integrated backend infrastructure.
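For example, a project team might query its hosted PostgreSQL space directly from an
analysis script. The sketch below uses the psycopg2 driver; the host, database, table,
and credentials are hypothetical placeholders rather than actual CCS endpoints.

    # Minimal sketch of querying a hosted PostgreSQL database with psycopg2.
    # All connection details and the table name are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(host="db.example.edu", dbname="project_db",
                            user="investigator", password="changeme")
    try:
        with conn.cursor() as cur:
            # Parameterized queries keep access to shared services safe.
            cur.execute(
                "SELECT sample_id, value FROM measurements WHERE run = %s",
                ("run_42",),
            )
            for sample_id, value in cur.fetchall():
                print(sample_id, value)
    finally:
        conn.close()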
The CCS flat-file storage environment is built as a multi-tier solution combining
high-speed storage with dense, high-capacity storage in a tiered architecture, all supported by
IBM's GPFS. Our HPC/global tier (700 TB) is available on all compute nodes. This
storage is designed for massively parallel work and has been clocked at 157,000
IOPS and over 20 GB/sec of bandwidth.
Our standard tier of storage (2.8 PB) is designed for general-purpose data storage,
analysis, and presentation of data to collaborators both within and outside the University
of Miami. All tier 2 storage is available from all systems, including our visualization
cluster. Several data management tools are available for tier 2 storage, including public
presentation, long-term archive, deduplication, encryption, and HSM.
Our archival tier of storage (2.5 PB) leverages several platforms for keeping critical data
safe. By using a combination of tape and disk technologies, we are able to reduce
restore times significantly while still ensuring data integrity.
HPC Core Expertise
The HPC team has in-depth experience across various scientific research areas, with
extensive experience parallelizing and distributing codes written in Fortran, C, Java,
Perl, Python, and R. The team actively contributes to open source software efforts
including R, Python, the Linux kernel, Torque, Maui, XFS, and GFS. The team also
specializes in scheduling software (LSF) to optimize the efficiency of the HPC systems
and adapt codes to the CCS environment. The HPC core has expertise in parallelizing
code using both MPI and OpenMP, depending on the programming paradigm, and has
contributed several parallelization efforts back to the community in projects such as R,
WRF, and HYCOM.
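As a minimal illustration of the MPI style of parallelism (not code drawn from those
projects), the sketch below uses the mpi4py bindings to distribute a global sum across
ranks; it would be launched with a command such as: mpirun -np 4 python sum_mpi.py

    # Minimal MPI sketch using mpi4py: each rank sums a strided slice of a
    # global index range, and rank 0 collects the combined result.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank takes every size-th element, starting at its own rank.
    n = 1_000_000
    local = np.arange(rank, n, size, dtype=np.float64)
    partial = local.sum()

    # Reduce the partial sums onto rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum over {size} ranks: {total}")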
The core specializes in implementing and porting open source codes to the CCS
environment and often contributes changes back to the community. CCS currently
supports more than 300 applications and optimized libraries in its computing
environment. The core personnel are experts in designing and implementing solutions
on three different variants of Unix. CCS also maintains industry research partnerships
with IBM, Schrodinger, OpenEye, and DDN.
Software
HPC users have a complete software suite at their fingertips, including standard
scientific libraries and numerous optimized libraries and algorithms tuned for the
computing environment. All programs and algorithms are implemented in 64-bit mode to
address large-memory problems; compatible 32-bit libraries and algorithms are also
offered. In addition, the LSF grid scheduling process maximizes the efficiency of the
computational resources. Increased efficiency translates into faster execution of
programs, which gives researchers faster access to more resources. By utilizing the
full suite of LSF tools, we are able to support both batch and interactive workloads while
retaining workload management features.
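A typical batch interaction with LSF is submitting a job script whose #BSUB directives
request resources. The following sketch pipes such a script to the standard bsub
command from Python; the queue name, core count, and wall-clock limit are hypothetical
placeholders, not CCS defaults.

    # Minimal sketch of an LSF batch submission. The queue name, resource
    # requests, and executable below are hypothetical placeholders.
    import subprocess
    import textwrap

    # bsub reads #BSUB directives from the job script supplied on stdin.
    job_script = textwrap.dedent("""\
        #!/bin/bash
        # Request 16 cores for 2 hours in a (hypothetical) "general" queue;
        # %J in the output file name expands to the LSF job ID.
        #BSUB -J example_job
        #BSUB -q general
        #BSUB -n 16
        #BSUB -W 02:00
        #BSUB -o example.%J.out

        mpirun ./model.exe
        """)

    result = subprocess.run(["bsub"], input=job_script, text=True,
                            capture_output=True, check=True)
    print(result.stdout)  # e.g. "Job <12345> is submitted to queue <general>."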
For more details about our HPC infrastructure, please visit our pages on this website at
http://www.ccs.miami.edu/hpc
Other Resources
Bioinformatics
CCS' Computational Biology and Bioinformatics Program (CBBP) was established to
conduct research and offer services and training in the management and analysis of
biological and medical/health record data. The program's mission is to spearhead
bioinformatics capacity at the University of Miami for all biological and medical
applications, including data management, data mining, and data analysis capacities.
The CBBP aims to achieve this mission through infrastructure, education, and expertise.
In particular, CBBP provides an online portal for bioinformatics databases and web tools
and offers a number of data analysis services. CBBP concurrently leads educational
and training initiatives in bioinformatics analysis and nourishes these activities with
high-impact bioinformatics research.
Bioinformatics Data Analysis
The team provides data analysis training and expertise at three levels: consulting,
preliminary data generation, and full collaboration, based on the time and complexity of
the service requested. The analyses are undertaken by skilled analysts and overseen
by experienced faculty. The group has worked mostly with microarray data and
next-generation sequencing data, and the analytical services include, but are not limited
to, the following (a brief illustrative code sketch follows the list):
• gene expression analysis for transcriptome profiling and/or gene regulatory network
building,
• prognostic and/or diagnostic biomarker discovery,
• microRNA target analysis,
• copy number variant analysis; in this context we are testing the few existing algorithms
and developing new ones for accurate and unambiguous discovery of copy number
variation in the human genome,
• genome or transcriptome assembly from next-generation sequencing data, and its
visualization,
• SNP functionality analysis,
• other projects, including merging or correlating data from various data types for a
holistic view of a particular pathway or disease process.
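To give a flavor of the first item above, the sketch below runs a minimal differential
expression screen on simulated log2 expression values: a per-gene Welch t-test
followed by Benjamini-Hochberg false discovery rate control. It is illustrative only and
does not represent the group's production pipelines.

    # Minimal differential-expression sketch on simulated data (illustrative
    # only; real pipelines involve normalization, QC, and richer models).
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_genes, n_case, n_ctrl = 1000, 10, 10

    # Simulated log2 expression matrices (genes x samples).
    case = rng.normal(0.0, 1.0, size=(n_genes, n_case))
    ctrl = rng.normal(0.0, 1.0, size=(n_genes, n_ctrl))
    case[:50] += 1.5  # spike in 50 truly differential genes

    # Per-gene two-sample Welch t-test.
    t, p = stats.ttest_ind(case, ctrl, axis=1, equal_var=False)

    # Benjamini-Hochberg correction for multiple testing.
    reject, q, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
    print(f"{reject.sum()} genes pass FDR < 0.05")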
Advanced Data Mining
The Center's Data Mining Research Group provides advanced data mining expertise
and capabilities for exploring high-dimensional data. The following are examples of
the expertise areas covered by our faculty; a brief code sketch follows the list.
• Classification, which appears in essentially every subject area that involves collecting
data of different types, such as disease diagnosis based on clinical and laboratory
data. Methods include regression (linear and logistic), artificial neural networks (ANN),
k-nearest neighbors (KNN), support vector machines (SVM), Bayesian networks,
decision trees, and others.
• Clustering, which is used to partition the input data points into mutually similar
groupings, such that data points from different groups are not similar. Methods include
k-means, hierarchical clustering, and self-organizing maps (SOM), and are often
accompanied by space decomposition methods that offer low-dimensional representations
of a high-dimensional data space. Methods of space decomposition include principal
component analysis (PCA), independent component analysis (ICA), multidimensional
scaling (MDS), Isomap, and manifold learning. Advanced topics in clustering include
multifold clustering, graphical models, and semi-supervised clustering.
• Association data mining, which finds frequent combinations of attributes in databases
of categorical attributes. The frequent combinations can then be used to develop
predictions of categorical values.
• Analysis of sequential data, which mostly involves biological sequences and includes
such diverse topics as extraction of common patterns in genomic sequences for motif
discovery, sequence comparison for haplotype analysis, alignment of sequences, and
phylogeny reconstruction.
• Text mining, particularly extracting information from published papers, transforming
documents into vectors of relatively low dimension to enable the use of the data
mining methods mentioned above.
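As noted above, the sketch below illustrates two of these areas with scikit-learn on
synthetic data: k-nearest neighbors classification, and k-means clustering paired with a
PCA space decomposition. It is a minimal illustration, not CCS production code.

    # Minimal scikit-learn sketch: KNN classification, then k-means clustering
    # on a 2-D PCA projection of the same synthetic high-dimensional data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Synthetic stand-in for clinical/laboratory feature data.
    X, y = make_classification(n_samples=500, n_features=50,
                               n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Supervised classification with k-nearest neighbors.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(f"KNN test accuracy: {knn.score(X_test, y_test):.2f}")

    # Unsupervised structure: PCA space decomposition, then k-means.
    X_2d = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)
    print(f"cluster sizes: {np.bincount(labels)}")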
Visualization
The Visualization program conducts both theoretical and applied research in the general
areas of Machine Vision and Learning, and specifically in
(i) computer vision and image processing,
(ii) machine learning,
(iii) biomedical image analysis, and
(iv) computational biology and neuroscience.
The goal is to provide expertise in this area to develop novel, fully automated methods
that offer robustness, accuracy, and computational efficiency. The program works
toward finding better solutions to existing open problems in the above areas, as well as
exploring different scientific fields where our research can provide useful interpretation,
quantification, and modeling.
Cheminformatics
CCS has a sophisticated cheminformatics and compute infrastructure with a significant
level of support from the institution. CCS facilitates scientific interactions and enables
efficient research using informatics and computational approaches. A variety of
departments and centers at the University use high-content and high-throughput
screening approaches, including The Miami Project to Cure Paralysis, the Diabetes
Research Institute, the Cancer Center, Bascom Palmer Eye Institute, the Dept. of
Surgery, and the John P. Hussman Institute for Human Genomics.
Cheminformatics and computational chemistry tools run on the HPC Linux cluster and a
high-performance application server:
• SciTegic Pipeline Pilot—visual workflow-based programming environment (data
pipelining); broad cheminformatics, reporting/visualization, and modeling capabilities;
integration of applications, databases, algorithms, and data.
• Leadscope Enterprise—integrated cheminformatics data mining and visualization
environment; unique chemical perception (~27K custom keys; user extensions); various
algorithms, HTS analysis, SAR/R-group analysis, and data modeling.
• ChemAxon Tools and Applications—cheminformatics software applications and tools;
wide variety of cheminformatics functionality.
• Spotfire—highly interactive visualization and data analysis environment; various
statistical algorithms with chemical structure visualization; HTS and SAR analysis.
• OpenEye ROCS, FRED, OMEGA, EON, etc., implemented on the Linux cluster—suite
of powerful applications and toolkits for high-throughput 3D manipulation of chemical
structures and modeling of shape, electrostatics, protein-ligand interactions, and various
other aspects of structure- and ligand-based design; also includes powerful
cheminformatics 2D structure tools.
• Schrodinger Glide, Prime, Macromodel, and various other tools implemented on the
Linux cluster—powerful state-of-the-art docking, protein modeling, structure prediction,
and visualization tools.
• Desmond, implemented on the Linux cluster—powerful state-of-the-art explicit-solvent
molecular dynamics.
• TIP workgroup—powerful environment for global analysis of protein structures, binding
sites, and binding interactions; implements automated homology modeling, binding site
prediction, and structure and site comparison for amplification of known protein structure
space.