Distributed Computing Environments Team
Marian Bubak
bubak@agh.edu.pl
Department of Computer Science and Cyfronet
AGH University of Science and Technology
Krakow, Poland
dice.cyfronet.pl
DICE Team
• Investigation of methods for building complex scientific collaborative applications
• Elaboration of environments and tools for e-Science
• Integration of large-scale distributed computing infrastructures
• Knowledge-based approach to services, components, and their semantic composition
AGH University of Science and Technology (1919): 16 faculties, 36,000 students, 4,000 employees; http://www.agh.edu.pl/en
• Academic Computer Centre CYFRONET AGH (1973): 120 employees; http://www.cyfronet.pl/en/
• Faculty of Computer Science, Electronics and Telecommunication (2012): 2,000 students, 200 employees; http://www.iet.agh.edu.pl/
  – Department of Computer Science AGH (1980): 800 students, 70 employees; http://www.ki.agh.edu.pl/uk/index.htm
• Other 15 faculties
Distributed Computing Environments (DICE) Team: http://dice.cyfronet.pl
Current research objectives
• Investigating applicability of the cloud computing model for complex scientific applications
• Optimization of resource allocation for applications on clouds
• Resource management for services on heterogeneous resources
• Urgent computing scenarios on distributed infrastructures
• Billing and accounting models
• Procedural and technical aspects of ensuring efficient yet secure data storage, transfer and processing
• Methods for component dependency management, composition and deployment
• Information representation model for a cloud federating platform, its components and operating procedures
Topics for collaboration
• Optimization of service deployment on clouds
  – Constraint satisfaction and optimization of multiple criteria (cost, performance)
  – Static deployment planning and dynamic auto-scaling
• Billing and accounting model
  – Adapted for the federated cloud infrastructure
  – Handling multiple billing models
• Supporting system-level (e)Science
  – Tools for effective scientific research and collaboration
  – Advanced scientific analyses using HPC/HTC resources
• Cloud security
  – Security of data transfer
  – Reliable storage and removal of data
• Cross-cloud service deployment based on the container model
Spatial and temporal dynamics in grids
• Grids increase research capabilities for science
• Large-scale federation of computing and storage resources
  – 300 sites, 60 countries, 200 Virtual Organizations
  – 10^5 CPUs, 20 PB data storage, 10^5 jobs daily
• However, operational and runtime dynamics have a negative impact on reliability and efficiency
[Figure: reliability drops from ~95% for a single job to below 10% for 100 jobs, due to asynchronous and frequent failures and hardware/software upgrades; job waiting times are long and unpredictable, ranging from seconds to 3 hours]
J. T. Moscicki: Understanding and mastering dynamics in Computing Grids, UvA PhD thesis, promoter: M. Bubak, co-promoter: P. Sloot;
12.04.2011
User-level overlay with late binding scheduling
• Improved job execution characteristics
• HTC-HPC interoperability
• Heuristic resource selection
• Application-aware task scheduling (a conceptual sketch of late binding follows)
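The late-binding idea can be illustrated with a small, self-contained sketch (a conceptual illustration only, not the user-level overlay described in the cited work): pilot workers that have already acquired execution slots pull tasks from a shared queue at runtime, so slow or failing resources simply take fewer tasks instead of stalling pre-assigned work.

import queue
import threading
import time

task_queue: "queue.Queue[int]" = queue.Queue()
for task_id in range(100):
    task_queue.put(task_id)

def pilot_worker(worker_id: int) -> None:
    # A pilot job that already holds a slot; it binds tasks "late",
    # taking the next task only when it becomes free.
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01)            # stand-in for real task execution
        print(f"worker {worker_id} finished task {task}")
        task_queue.task_done()

workers = [threading.Thread(target=pilot_worker, args=(i,)) for i in range(8)]
for w in workers:
    w.start()
task_queue.join()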
[Figure: completion time of 1.5 hours with late binding vs. 40 hours with early binding]
J. T. Moscicki, M. Lamanna, M. Bubak, P. M. A.Sloot: Processing moldable tasks on the Grid: late job binding with lightweight user-level
overlay, FGCS 27(6) pp 725-736, 2011
Cloud performance evaluation
• Performance of VM deployment times
• Virtualization overhead
• Evaluation of open source cloud stacks (Eucalyptus, OpenNebula, OpenStack)
• Survey of European public cloud providers
• Performance evaluation of top cloud providers (EC2, RackSpace, SoftLayer)
• A grant from Amazon has been obtained
IaaS provider evaluation. Criterion values are listed in the order: EEA zoning (weight 20), jClouds API support (20), BLOB storage support (10), per-hour instance billing (5), API access (5), published price (5), VM image import/export (3), relational DB support (2); 1 = criterion met, 0 = not met.

Amazon AWS: 1 1 1 1 1 1 0 1 | score 27
Rackspace: 1 1 1 1 1 1 0 1 | score 27
SoftLayer: 1 1 1 1 1 1 0 0 | score 25
CloudSigma: 1 1 0 1 1 1 1 0 | score 18
ElasticHosts: 1 1 0 1 1 1 1 0 | score 18
Serverlove: 1 1 0 1 1 1 1 0 | score 18
GoGrid: 1 1 0 1 1 1 0 0 | score 15
Terremark ecloud: 1 1 0 1 1 0 1 0 | score 13
RimuHosting: 1 1 0 0 1 1 0 1 | score 12
Stratogen: 1 1 0 0 1 0 1 0 | score 8
Bluelock: 1 1 0 0 1 0 0 0 | score 5
Fujitsu GCP: 1 1 0 0 1 0 0 0 | score 5
BitRefinery: 0 0 0 0 0 1 0 1 | score 0
BrightBox: 1 0 0 1 1 1 1 0 | score 0
BT Global Services: 1 0 0 0 1 0 1 0 | score 0
Carpathia Hosting: 1 0 0 0 0 0 1 0 | score 0
City Cloud: 1 0 0 1 1 1 0 0 | score 0
Claris Networks: 0 0 0 1 0 0 0 0 | score 0
Codero: 0 0 0 1 1 1 0 0 | score 0
CSC: 1 0 0 0 0 0 1 0 | score 0
Datapipe: 1 0 0 1 1 0 0 0 | score 0
e24cloud: 1 0 0 1 0 1 0 0 | score 0
eApps: 0 0 0 0 0 1 0 0 | score 0
FlexiScale: 1 0 0 1 1 1 1 0 | score 0
Google GCE: 1 0 1 1 1 1 0 1 | score 0
Green House Data: 0 0 0 0 1 0 1 0 | score 0
Hosting.com: 0 0 0 0 0 1 1 1 | score 0
HP Cloud: 0 1 1 1 1 1 1 1 | score 0
IBM SmartCloud: 0 0 1 1 1 1 0 1 | score 0
IIJ GIO: 0 0 0 0 0 0 0 0 | score 0
iland cloud: 1 0 0 1 0 1 1 0 | score 0
Internap: 0 0 1 1 1 1 0 0 | score 0
Joyent: 0 0 0 1 1 1 0 0 | score 0
LunaCloud: 1 0 1 1 1 1 0 0 | score 0
Oktawave: 1 0 1 1 1 1 0 1 | score 0
Openhosting.co.uk: 1 0 0 0 0 1 0 0 | score 0
Openhosting.com: 0 1 0 1 1 1 1 0 | score 0
OpSource: 1 0 1 1 1 1 1 0 | score 0
ProfitBricks: 1 0 0 1 1 1 0 0 | score 0
Qube: 1 0 0 0 0 1 0 0 | score 0
ReliaCloud: 0 0 0 0 0 0 0 0 | score 0
SaavisDirect: 0 0 1 1 0 1 0 0 | score 0
SkaliCloud: 0 1 0 1 1 1 1 0 | score 0
Teklinks: 0 0 0 0 0 0 0 0 | score 0
Terremark vcloud: 0 1 0 1 1 1 1 0 | score 0
Tier 3: 0 0 0 0 1 0 0 0 | score 0
Umbee: 1 0 0 1 1 1 1 0 | score 0
VPS.net: 1 0 0 0 1 1 0 0 | score 0
Windows Azure: 1 0 1 1 1 1 0 1 | score 0
M. Bubak, M. Kasztelnik, M. Malawski, J. Meizner, P. Nowakowski and S. Varma: Evaluation of Cloud Providers for VPH Applications, poster
at CCGrid2013 - 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, Delft, the Netherlands, May 13-16, 2013
Cost optimization of applications on clouds
• Infrastructure model
  – Multiple compute and storage clouds
  – Heterogeneous instance types
• Application model
  – Bag of tasks
  – Multi-level workflows
• Modeling with AMPL (A Modeling Language for Mathematical Programming) and CMPL
• Cost optimization under deadline constraints
• Mixed integer programming
• Bonmin, Cplex solvers (a minimal solver sketch follows)
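A minimal mixed integer programming sketch of the same idea, written with Python and PuLP rather than the AMPL/CMPL models and Bonmin/Cplex solvers used here; the instance types, prices, speeds and limits below are illustrative assumptions, not data from the cited papers.

import math
import pulp

tasks = 20000          # number of tasks in the bag
t_unit = 0.1           # hours per task on a 1 ccu machine
deadline = 40.0        # hours

# instance type -> (price per hour in $, relative speed in ccu); assumed values
instance_types = {
    "private":  (0.00, 1.0),
    "m1_small": (0.06, 1.0),
    "m1_large": (0.24, 4.0),
    "rs_1gb":   (0.06, 1.1),
}
max_instances = {"private": 8, "m1_small": 100, "m1_large": 100, "rs_1gb": 100}

prob = pulp.LpProblem("bag_of_tasks_cost", pulp.LpMinimize)

# decision variables: how many instances to rent and how many tasks go to each type
n = {i: pulp.LpVariable(f"n_{i}", 0, max_instances[i], cat="Integer")
     for i in instance_types}
x = {i: pulp.LpVariable(f"x_{i}", 0, tasks, cat="Integer")
     for i in instance_types}

# objective: instances are billed per started hour for the whole deadline
hours_billed = math.ceil(deadline)
prob += pulp.lpSum(n[i] * hours_billed * instance_types[i][0] for i in instance_types)

# every task must be executed somewhere
prob += pulp.lpSum(x[i] for i in instance_types) == tasks

# tasks assigned to a type must fit into the deadline on the rented instances
for i, (_, speed) in instance_types.items():
    prob += x[i] * (t_unit / speed) <= n[i] * deadline

prob.solve()
for i in instance_types:
    print(i, int(n[i].value()), "instances,", int(x[i].value()), "tasks")
print("total cost: $", pulp.value(prob.objective))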
[Figure: an example application with input and output data, structured into layers 1-5 with tasks A-F (execution times from 0.3 h to 6 h), mapped onto compute and storage resources of a private cloud, Amazon (t1.micro, m1.small, m1.large, m2.xlarge) and Rackspace (rs.1gb, rs.2gb, rs.4gb, rs.16gb)]
Example experiment: 20000 tasks, 512 MiB input and 512 MiB output per task, task execution time 0.1 h on a 1 ccu machine.
[Plot: cost ($) versus time limit (hours, 0-100); series include Amazon S3, Rackspace Cloud Files, Optimal, Multiple providers, Amazon's and private instances, Rackspace and private instances, and Rackspace instances]
M. Malawski, K. Figiela, J. Nabrzyski: Cost minimization for computational applications on hybrid cloud infrastructures, Future Generation Computer
Systems, Volume 29, Issue 7, September 2013, Pages 1786-1794, ISSN 0167-739X, http://dx.doi.org/10.1016/j.future.2013.01.004
M. Malawski, K. Figiela, M. Bubak, E. Deelman, J. Nabrzyski: Cost Optimization of Execution of Multi-level Deadline-Constrained Scientific Workflows on
Clouds. PPAM (1) 2013: 251-260 http://dx.doi.org/10.1007/978-3-642-55224-3_24
Resource allocation management
The Atmosphere Cloud Platform is a one-stop management service for hybrid cloud resources, serving developers, administrators and scientists and ensuring optimal deployment of application services on the underlying hardware.
[Architecture: the VPH-Share Core Services Host exposes the Cloud Facade (a secure RESTful API) used by the VPH-Share Master Interface, workflow management, the Generic Invoker and external applications through a Cloud Facade client; behind it, the Cloud Manager comprises the Atmosphere Management Service (AMS), cloud stack plugins (Fog), the Atmosphere Internal Registry (AIR) and a Development Mode; managed resources include an OpenStack/Nova computational cloud site (head node, worker nodes, Glance image store), Amazon EC2 worker nodes and other cloud sites]
Customized applications may directly interface Atmosphere via its RESTful API, called the Cloud Facade (a hypothetical call sketch follows).
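A hypothetical sketch of how an external application could call such a RESTful facade; the endpoint path, payload fields and authentication scheme below are illustrative assumptions, not the documented Cloud Facade API.

import requests

FACADE_URL = "https://cloud-facade.example.org/api"   # placeholder URL
TOKEN = "user-security-token"                          # placeholder credential

def start_appliance(appliance_type_id: int) -> dict:
    """Ask Atmosphere (via the Cloud Facade) to instantiate an appliance."""
    resp = requests.post(
        f"{FACADE_URL}/appliances",                    # assumed endpoint
        json={"appliance_type_id": appliance_type_id},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()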
P. Nowakowski, T. Bartynski, T. Gubala, D. Harezlak, M. Kasztelnik, M. Malawski, J. Meizner, M. Bubak: Cloud Platform for Medical
Applications, eScience 2012 (2012)
Data reliability and integrity
DRI is a tool which keeps track of binary data stored in a cloud infrastructure, monitors data availability and facilitates optimal deployment of application services in a hybrid cloud (bringing computations to data or the other way around).
DRI is a standalone application service, capable of autonomous operation. It periodically verifies access to any datasets submitted for validation and is capable of issuing alerts to dataset owners and system administrators in case of irregularities (see the validation sketch below).
[Architecture: a configurable, registry-driven validation runtime in the DRI Service works on top of LOBCDER and its metadata extensions for DRI (binary data registry, validation policy); it offers end-user features (browsing, querying, direct access to data, checksumming) and operations such as registering files, getting metadata, migrating LOBs and getting usage statistics; an extensible resource client layer connects to distributed cloud storage (Amazon S3, OpenStack Swift, Cumulus), while a data management portlet with DRI management extensions in the VPH Master Interface stores and marshals data]
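A minimal sketch of the periodic, registry-driven validation idea: checksumming registered datasets and alerting on irregularities. The registry layout, paths and alert hook are assumptions for illustration; this is not the DRI implementation.

import hashlib
import time

# hypothetical registry: dataset id -> (storage location, expected SHA-256)
REGISTRY = {
    "dataset-001": ("/mnt/lobcder/dataset-001.bin", "<expected sha256 hex digest>"),
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def alert(dataset_id: str, reason: str) -> None:
    # placeholder: a real service would notify dataset owners and administrators
    print(f"ALERT for {dataset_id}: {reason}")

def validation_pass() -> None:
    for dataset_id, (path, expected) in REGISTRY.items():
        try:
            if sha256_of(path) != expected:
                alert(dataset_id, "checksum mismatch")
        except OSError as err:
            alert(dataset_id, f"dataset not accessible: {err}")

if __name__ == "__main__":
    while True:                  # periodic validation loop
        validation_pass()
        time.sleep(3600)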
Data security in clouds
• To ensure security of data in transit:
• Modern applications use secure transport protocols (e.g. TLS)
• For legacy unencrypted protocols, if absolutely needed, or as an additional security measure:
  – Site-to-site VPN, e.g. between cloud sites (handled outside of the instance)
  – Remote access, for individual users accessing e.g. from their laptops
• Data should be securely stored and reliably deleted when no longer needed
• Clouds are not secure enough; data optimisations prevent ensuring that data were deleted
• A solution (sketched below):
  – End-to-end encryption (the decryption key stays in a protected/private zone)
  – Data dispersal (portions of data dispersed between nodes so that it is non-trivial/impossible to recover the whole message)
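A minimal sketch of the proposed combination, assuming AES-GCM from the Python 'cryptography' package and a naive striping of the ciphertext across storage nodes as a stand-in for a real dispersal scheme; it is not the implementation from the cited paper.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_disperse(plaintext: bytes, n_nodes: int = 3):
    key = AESGCM.generate_key(bit_length=256)     # key stays in the private zone
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # split the ciphertext into stripes; no single node holds the whole message
    stripes = [ciphertext[i::n_nodes] for i in range(n_nodes)]
    return key, nonce, stripes

def gather_and_decrypt(key: bytes, nonce: bytes, stripes) -> bytes:
    n = len(stripes)
    total = sum(len(s) for s in stripes)
    ciphertext = bytearray(total)
    for i, stripe in enumerate(stripes):
        ciphertext[i::n] = stripe                 # reassemble the stripes
    return AESGCM(key).decrypt(nonce, bytes(ciphertext), None)

if __name__ == "__main__":
    key, nonce, stripes = encrypt_and_disperse(b"confidential measurement data")
    print(gather_and_decrypt(key, nonce, stripes))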
Jan Meizner, Marian Bubak, Maciej Malawski, and Piotr Nowakowski: Secure storage and processing of confidential data on public clouds.
In: Proceedings of the International Conference On Parallel Processing and Applied Mathematics (PPAM) 2013
Semantic workflow composition
• GWorkflowDL language (with A. Hoheisel)
• Dynamic, ad-hoc refinement of workflows based on semantic descriptions in ontologies
• Novelty
  – Abstract, functional blocks translated automatically into computation unit candidates (services)
  – Expansion of a single block into a subworkflow with proper concurrency and parallelism constructs (based on Petri nets)
  – Runtime refinement: unknown or failed branches are re-constructed with different computation unit candidates
T. Gubala, D. Harezlak, M. Bubak, M. Malawski: Semantic Composition of Scientific Workflows Based on the Petri Nets Formalism. In: "The
2nd IEEE International Conference on e-Science and Grid Computing", IEEE Computer Society Press,
http://doi.ieeecomputersociety.org/10.1109/E-SCIENCE.2006.127, 2006
Cooperative virtual laboratory for e-Science
• Design of a laboratory for virologists, epidemiologists and clinicians investigating the HIV virus and the possibilities of treating HIV-positive patients
• Based on the notion of in-silico experiments built and refined by cooperating teams of programmers, scientists and clinicians
• Novelty
  – Employed the full concept-prototype-refinement-production cycle for virology tools
  – A set of dedicated yet interoperable tools binds together programmers and scientists for a single task
  – Support for system-level science with the concept of result reuse between different experiments
T. Gubala, M. Bubak, P. M. A. Sloot: Semantic Integration of Collaborative Research Environments, chapter XXVI in “Handbook of Research
on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare”, Information Science Reference IGI Global 2009, ISBN:
978-1-60566-374-6, pages 514-530
Semantic integration for science domains
• Concept of describing scientific domains for in-silico experimentation and collaboration within laboratories
• Based on separation of the domain model, containing concepts of the subject of experimentation, from the integration model, regarding the method of (virtual) experimentation (tools, processes, computations)
• Facets defined in the integration model are automatically mixed in with concepts from the domain model: any piece of data may show any desired behavior
• Proposed, designed and deployed the method for several domains of science:
  – Computational chemistry inside the InSilicoLab chemistry portal
  – Sensor processing for early warning and crisis simulation in the UrbanFlood EWS
  – Processing of results of massive bioinformatic computations for protein folding method comparison
  – Composition and execution of multiscale simulations
  – Setup and management of VPH applications
T. Gubala, K. Prymula, P. Nowakowski, M. Bubak: Semantic Integration for Model-based Life Science Applications. In: SIMULTECH 2013
Proceedings of the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications, Reykjavik, Iceland
29 - 31 July, 2013, pp. 74-81
GridSpace - platform for e-Science applications
• Experiment: an e-science application composed of code fragments (snippets), expressed in either general-purpose scripting languages, domain-specific languages or purpose-specific notations. Each snippet is evaluated by a corresponding interpreter.
• GridSpace2 Experiment Workbench: a web application serving as the entry point to GridSpace2. It facilitates exploratory development, execution and management of e-science experiments.
• Embedded Experiment: a published experiment embedded in a web site.
• GridSpace2 Core: a Java library providing an API for development, storage, management and execution of experiments. It records all available interpreters and their installations on the underlying computational resources.
• Computational Resources: servers, clusters, grids, clouds and e-infrastructures where the experiments are computed.
E. Ciepiela, D. Harezlak, J. Kocot, T. Bartynski, M. Kasztelnik, P. Nowakowski, T. Gubała, M. Malawski, M. Bubak: Exploratory Programming
in the Virtual Laboratory. In: Proceedings of the International Multiconference on Computer Science and Information Technology, pp. 621-628,
October 2010, the best paper award.
Collage - executable e-Science publications
Goal: Extending the traditional scientific publishing model with computational access and interactivity mechanisms, enabling readers (including reviewers) to replicate and verify experimentation results and browse large-scale result spaces.
Challenges:
Scientific: A common description schema for primary data (experimental data, algorithms, software, workflows, scripts) as part of publications; deployment mechanisms for on-demand reenactment of experiments in e-Science.
Technological: An integrated architecture for storing, annotating, publishing, referencing and reusing primary data sources.
Organizational: Provisioning of executable paper services to a large community of users representing various branches of computational science; fostering further uptake through involvement of major players in the field of scientific publishing.
P. Nowakowski, E. Ciepiela, D. Harężlak, J. Kocot, M. Kasztelnik, T. Bartyński, J. Meizner, G. Dyk, M. Malawski: The Collage Authoring Environment. In: Proceedings of the International Conference on Computational Science, ICCS 2011 (2011), winner of the Elsevier/ICCS Executable Paper Grand Challenge
E. Ciepiela, D. Harężlak, M. Kasztelnik, J. Meizner, G. Dyk, P. Nowakowski, M. Bubak: The Collage Authoring Environment: From Proof-of-Concept Prototype to Pilot Service. In: Procedia Computer Science, vol. 18, 2013
GridSpace2 / Collage - Executable e-Science Publications
• Goal: Extend the traditional way of authoring and publishing scientific methods with computational access and interactivity mechanisms, thus bringing reproducibility to scientific computational workflows and publications
• Scientific challenge: Conceive a model and methodology to embrace reproducibility in scientific workflows and publications
• Technological challenge: Support these by modern Internet technologies and available computing infrastructures
• Solution proposed:
  – GridSpace2: a web-oriented distributed computing platform
  – Collage: an authoring environment for executable publications
[Timeline: Jun 2011, Dec 2011, Jun 2012]
GridSpace2 / Collage - Executable e-Science Publications
Results:
• GridSpace2/Collage won the Executable Paper Grand Challenge in 2011
• Collage was integrated with the Elsevier ScienceDirect portal so papers can be linked and presented with corresponding computational experiments
• A special issue of the Computers & Graphics journal featuring Collage-based executable papers was released in May 2013
• GridSpace2/Collage has been applied to multiple computational workflows in the scope of the PL-Grid, PL-Grid Plus and MAPPER projects
E. Ciepiela, D. Harężlak, M. Kasztelnik, J. Meizner, G. Dyk, P.
Nowakowski, M. Bubak: The Collage Authoring Environment:
From Proof-of-Concept Prototype to Pilot Service. In: Procedia
Computer Science, vol. 18, 2013
E. Ciepiela, P. Nowakowski, J. Kocot, D. Harężlak, T. Gubała, J. Meizner, M. Kasztelnik, T. Bartyński, M. Malawski, M. Bubak: Managing
entire lifecycles of e-science applications in the GridSpace2 virtual laboratory–from motivation through idea to operable web-accessible
environment built on top of PL-grid e-infrastructure. In: Building a National Distributed e-Infrastructure–PL-Grid, 2012
P. Nowakowski, E. Ciepiela, D. Harężlak, J. Kocot, M. Kasztelnik, T. Bartyński, J. Meizner, G. Dyk, M. Malawski: The Collage Authoring
Environment. In: Procedia Computer Science, vol. 4, 2011
Common Information Space (CIS)
• Facilitates creation, deployment and robust operation of Early Warning Systems in a virtualized cloud environment
• Early Warning System (EWS): any system working according to four steps: monitoring, analysis, judgment, action (e.g. environmental monitoring; a schematic sketch of these steps follows below)
• Common Information Space:
  – connects distributed components into an EWS and deploys it on the cloud
  – optimizes resource usage, taking into account the EWS importance level
  – provides EWS and self-monitoring
  – is equipped with auto-healing
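A schematic sketch of the four EWS steps named above (monitoring, analysis, judgment, action); the sensor source, analysis step and threshold are illustrative placeholders, not part of the UrbanFlood CIS.

import random
import time

def monitor() -> float:
    return random.gauss(1.0, 0.3)       # stand-in for a sensor reading

def analyse(reading: float) -> float:
    return reading                       # e.g. filtering or model evaluation

def judge(value: float, threshold: float = 1.5) -> bool:
    return value > threshold             # decision whether to raise an alert

def act(value: float) -> None:
    print(f"ALERT: value {value:.2f} exceeded threshold")

for _ in range(10):                      # a few monitoring cycles
    value = analyse(monitor())
    if judge(value):
        act(value)
    time.sleep(1)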
B. Balis, M. Kasztelnik, M. Bubak, T. Bartynski, T. Gubala, P. Nowakowski, J. Broekhuijsen: The UrbanFlood Common Information Space for
Early Warning Systems. In: Elsevier Procedia Computer Science, vol 4, pp 96-105, ICCS 2011.
HyperFlow: model & execution engine
• Simple yet expressive model for complex scientific apps
• App = a set of processes performing well-defined functions and exchanging signals
• Supports a rich set of workflow patterns
• Suitable for various application classes
• Abstracts from other distributed app aspects (service model, data exchange model, communication protocols, etc.)
HyperFlow model JSON serialization:
{
  "name":      "...",     ← name of the app
  "processes": [ ... ],   ← processes of the app
  "functions": [ ... ],   ← functions used by processes
  "signals":   [ ... ],   ← exchanged signals info
  "ins":       [ ... ],   ← inputs of the app
  "outs":      [ ... ]    ← outputs of the app
}
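A minimal sketch that loads such a description and checks the six top-level fields shown above; it is not HyperFlow's own loader, and the file name is hypothetical.

import json

REQUIRED_FIELDS = ("name", "processes", "functions", "signals", "ins", "outs")

def load_app_description(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        app = json.load(f)
    missing = [field for field in REQUIRED_FIELDS if field not in app]
    if missing:
        raise ValueError(f"app description missing fields: {missing}")
    return app

if __name__ == "__main__":
    app = load_app_description("app.json")   # hypothetical file name
    print(app["name"], "-", len(app["processes"]), "processes")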
Platform for distributed applications
• HyperFlow model & engine for distributed apps
• App optimization & scheduling
• Autoscaling and dynamic app reconfiguration
• Multi-cloud resource provisioning
[Architecture: in the Execution Platform, the HyperFlow Enactment Engine enacts a Composite App described by an app model and scaling rules; an Optimizer & Scheduler and an Autoscaler reconfigure the app based on its state and monitoring measurements; a Provisioner in the Provisioning Platform performs the initial deployment and starts/stops/reconfigures VMs in the cloud, where Executors on VMs execute tasks on input data and trigger app execution]
Collaborative metadata management
Objectives
• Provide means for ad-hoc metadata model creation and deployment of corresponding storage facilities
• Create a research space for metadata model exchange and discovery, with associated data repositories and access restrictions in place
• Support different types of storage sites and data transfer protocols
• Support the exploratory paradigm by making the models evolve together with data
Architecture
• A web interface is used by users to create, extend and discover metadata models
• Model repositories are deployed in the PaaS cloud layer for scalable and reliable access from computing nodes through REST interfaces
• Data items from storage sites are linked from the model repositories
Multiscale programming and execution tools
• MAPPER Memory (MaMe): a semantics-aware persistence store to record metadata about models and scales
• Multiscale Application Designer (MAD): a visual composition tool transforming high-level descriptions into executable experiments
• GridSpace Experiment Workbench (GridSpace): execution and result management of experiments
• A method and an environment for composing multiscale applications from single-scale models
• Validation of the method against real applications structured using the tools
• Extension of application composition techniques to multiscale simulations
• Support for multisite execution of multiscale simulations
• Proof-of-concept transformation of high-level formal descriptions into actual execution using e-infrastructures
[Diagram: submodules (e.g. Mapper A / Submodule A, Mapper B / Submodule B) are chosen, added or deleted in MaMe, composed in MAD and executed in GridSpace]
K. Rycerz, E. Ciepiela, G. Dyk, D. Groen, T. Gubala, D. Harezlak, M. Pawlik, J. Suter, S. Zasada, P. Coveney, M. Bubak: Support for Multiscale
Simulations with Molecular Dynamics, Procedia Computer Science, Volume 18, 2013, pp. 1116-1125, ISSN 1877-0509
K. Rycerz, M. Bubak, E. Ciepiela, D. Harezlak, T. Gubala, J. Meizner, M. Pawlik, B.Wilk: Composing, Execution and Sharing of Multiscale
Applications, submitted to Future Generation Computer Systems, after 1st review (2013)
K. Rycerz, M. Bubak, E. Ciepiela, M. Pawlik, O. Hoenen, D. Harezlak, B. Wilk, T. Gubala, J. Meizner, and D. Coster: Enabling Multiscale Fusion
Simulations on Distributed Computing Resources, submitted to PLGrid PLUS book 2014
Effective management of multiscale computations
• Support for typical interactions in multiscale applications:
  – A macro module triggers a micro module and wastes resources while waiting for its output
  – A macro module needs to trigger a dynamic number of micro modules
• Research towards:
  – Usage of Akka actors and Spray toolkit features for effective management (among others, support for dynamic creation of new modules)
  – Grouping similarly demanding (but not necessarily connected) modules on the same resources to avoid wasting resources
• Legacy application issues
[Figures: Example 1: concurrent execution of macro and micro modules in a loop; Example 2: a macro module triggers a dynamic number of micro modules; a proposal of the architecture of the management system]
Quantum Computer Simulator
• Building and testing quantum circuits and algorithms (a generic state-vector illustration follows below)
• Learning and understanding quantum computation
• Flexible source code editing and easy graphical building of circuit diagrams
• Implementation of existing algorithms: quantum search (Grover), quantum factorization (Shor), quantum teleportation
• Comparison of Shor's algorithm optimizations
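A generic illustration of simulating a small quantum circuit with state vectors and gate matrices in NumPy (a Hadamard followed by CNOT preparing a Bell state); this is teaching code, not the simulator described above.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                     # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                     # start in |00>
state = np.kron(H, I) @ state                      # H on qubit 0
state = CNOT @ state                               # CNOT(0 -> 1)

print("amplitudes:", state)                        # (|00> + |11>)/sqrt(2)
print("measurement probabilities:", np.abs(state) ** 2)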
J. Patrzyk, B. Patrzyk, K. Rycerz, M. Bubak: A Novel Environment for Simulation of Quantum Computing, submitted to CGW 2014
Building scientific software based on Feature Model
Research on Feature Modeling:
• modelling the component hierarchy of an eScience application family
• modelling requirements
• methods of mapping Feature Models to Software Product Line architectures
Research on adapting Software Product Line principles in scientific software projects:
• automatic composition of distributed eScience applications based on Feature Model configuration
• architectural design of a Software Product Line engine framework
B. Wilk, M. Bubak, M. Kasztelnik: Software for eScience: from feature modeling to automatic setup of environments. In: Advances in Software Development, Scientific Papers of the Polish Information Processing Society Scientific Council, 2013, pp. 83-96
DICE team in EU projects
CrossGrid (2002-2005): Interactive compute- and data-intensive applications
K-Wf Grid (2004-2007): Knowledge-based composition of grid workflow applications
CoreGRID (2004-2008): Problem solving environments, programming models for grid applications
GREDIA (2006-2009): Grid platform for media and banking applications
ViroLab (2006-2009): Script-based composition of applications, GridSpace virtual laboratory
PL-Grid, PL-Grid Plus (2009-2015): Advanced virtual laboratory, DataNet metadata models (2 large Polish projects)
gSLM (2009-2012): Service level management for grids and clouds
UrbanFlood (2009-2012): Common Information Space for Early Warning Systems
MAPPER (2010-2013): Computational strategies, software and services for distributed multiscale simulations
VPH-Share (2011-2015): Federating cloud resources for VPH compute- and data-intensive applications
Collage (2011-2013): Executable Papers; 1st award of the Elsevier competition at ICCS 2011 (Elsevier project)
ISMOP (2013-2016): Management of cloud resources, workflows, big data storage and access, analysis tools (NCBiR)
PaaSage (2013-2016): Optimization of workflow applications on cloud resources
dice.cyfronet.pl