The monthly newsletter from the National e-Science Centre
NeSC News
Issue 55 November 2007 www.nesc.ac.uk
Chris Date: capturing knowledge
By Iain Coleman
Photograph by Peter Tuffy, University of Edinburgh
You can have a perfectly logical system, but you can never
have perfect knowledge of the world. Our information is a
fallible interpretation of inexact data, and can never be wholly
freed from the problems of incompleteness and subjective
interpretation. Philosophers and writers have long understood
this, from Plato to St. Paul to Philip K. Dick. But scientists can’t
afford to wring their hands over this problem: they have to find
ways of dealing with it.
This was the main topic of the Chris Date Seminar: Data and
Knowledge Modelling for the Geosciences, held at the
e-Science Institute on 1-2 November. Part of the eSI research
theme on Spatial Semantics for Automating Geographic
Information Processes, the seminar featured database guru
Chris Date and a selection of other international speakers
discussing how new approaches to understanding and
recording knowledge can be brought to bear on scientific
processes in general, and the geospatial sciences in particular.
Cont. on P2
Summer and Winter Schools open for applications
APPLICATIONS are invited for both
the fifth annual International Summer
School, to be held in Hungary on July
6-18, 2008, and the Winter School,
which will run online from February 6
to March 5, 2008.
The Summer School will provide
an in-depth introduction to Grid
technologies, presenting a
conceptual framework to enhance
each student’s ability to work in this
rapidly advancing field. Reports
from world leaders in deploying and
exploiting Grids will complement
lectures from research leaders
shaping future e-Infrastructure.
Hands-on laboratory exercises
will give participants experience
with widely used Grid middleware.
Malcolm Atkinson, Programme
Chair of the School, said: “The
Summer School is always a ferment
of enthusiasm. Its goal is to build
a lasting network of well-informed
graduates who will go on to great
achievements.”
Applications will be available from
February on: http://www.iceage-eu.org/issgc08
The Winter School will be
delivered online and will examine
the conceptual and practical
underpinnings of today’s grids.
Experts will provide exciting
practical exercises, discuss the
challenges of building and sustaining
e-Infrastructure, and report on its rapid
influence on the way we research,
design and make decisions. To support the hands-on laboratory
sessions, the GILDA infrastructure
will host widely used middleware.
The testbed will be connected to
major international Grids and allow
learning and experimentation.
The target audience will include
enthusiastic and ambitious young
researchers who expect to use or
develop grids in their research.
Applications are invited from
researchers who have recently
started (or are about to start) working
on Grid projects. Selection for both
schools is competitive, based on the
information supplied on the application
form and by a referee.
We are looking for students with
commitment and enthusiasm for Grid
research and development. We will
expect competence and experience
in some aspects of software
development, distributed systems,
computational systems, data systems
and Grid applications. Most students
will establish credentials from
academic qualifications, but some
will base this on experience. We also
welcome educators who are planning
to teach Grid computing. Applications
close on December 12. See
http://www.iceage-eu.org/iwsgc08/
Both Schools will be conducted in
English, so participants must be
comfortable using spoken and written
English.
Chris Date kicked off the seminar
with a concise introduction to formal
logic. This is the foundation of data
modelling and database design,
and a little goes a long way: data
modellers don’t need to be expert
logicians, but a good grasp of
the basics can prevent countless
mistakes. One such mistake, Date
argued, is the use of three-valued
logic to handle missing data.
Allowing statements to have one of
three truth-values – “true”, “false”
or “unknown” – seems an obvious
way to deal with the fact that we
don’t always have all the data we
want, but it has a few problems. Like
the fact that it makes databases
give you the wrong answers to your
queries. The logical processes of
both human users of databases and
optimisers in the database software
rely on manipulating tautologies
and contradictions in ways that can
be shown to break down when you
move from two-valued to three-valued logic. Date’s recommendation
– don’t touch three-valued logic with
a ten-foot pole – was nothing if not
unambiguous.
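Date’s argument can be made concrete with a small sketch (in Python, purely for illustration, not any database’s actual code): under three-valued logic, the tautology “p OR NOT p” no longer evaluates to true when p is unknown, which is exactly the situation a SQL NULL creates.

```python
# Three-valued logic: truth values are True, False, or None ("unknown"),
# mirroring how SQL treats comparisons involving NULL.

def tv_not(p):
    return None if p is None else (not p)

def tv_or(p, q):
    if p is True or q is True:
        return True          # true OR anything is true
    if p is None or q is None:
        return None          # otherwise an unknown taints the result
    return False

def tv_eq(x, y):
    if x is None or y is None:
        return None          # any comparison with NULL is unknown
    return x == y

# In two-valued logic, "x = 5 OR x <> 5" is a tautology: always true.
# With a NULL in play it evaluates to unknown, and a WHERE clause keeps
# only rows whose condition is *true* -- so the row silently vanishes.
x = None  # a SQL NULL
cond = tv_or(tv_eq(x, 5), tv_not(tv_eq(x, 5)))
print(cond)  # None: neither true nor false
```

A query optimiser that simplifies “x = 5 OR x <> 5” to “true” before running the query will therefore return a different answer than the unsimplified query, which is the kind of breakdown Date warned about.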
Another clash between the logic
of databases and the real world
comes from the fact that much
scientific data, most especially in the
geospatial sciences, has an intrinsic
extent in time and/or space, while a
relational database works with point-like data. Fortunately, this problem,
unlike that of missing data, can be
solved in a rigorous and satisfactory
fashion. Chris Date showed, in the
second of his two presentations, how
to generalise the concepts in the
relational database model so that
it can handle temporal data. This
involves introducing a new data type,
the interval, and new operators to
manipulate it. The existing relational
database operators then turn out
to be special cases of these more
general concepts. No commercial
system, according to Date, does
this properly, but in principle there is
no reason why the relational model
cannot be extended in this way
to deal with time intervals – and,
by similar arguments, to deal with
spatially extended data as well.
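A flavour of the interval idea, sketched in Python (a hypothetical illustration, not Date’s formal model or any vendor’s implementation): intervals become first-class values with their own operators, and “coalescing” overlapping or abutting intervals is the temporal analogue of removing duplicate rows.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval of days [start, stop]."""
    start: int
    stop: int

    def overlaps(self, other):
        return self.start <= other.stop and other.start <= self.stop

    def meets(self, other):
        # Abut with no gap: [1,3] meets [4,6]
        return self.stop + 1 == other.start or other.stop + 1 == self.start

    def merge(self, other):
        # Union of two overlapping or abutting intervals
        return Interval(min(self.start, other.start), max(self.stop, other.stop))

def coalesce(intervals):
    """Collapse a set of intervals into the fewest equivalent ones."""
    out = []
    for iv in sorted(intervals, key=lambda i: i.start):
        if out and (out[-1].overlaps(iv) or out[-1].meets(iv)):
            out[-1] = out[-1].merge(iv)
        else:
            out.append(iv)
    return out

# [1,3] and [4,6] abut, so they coalesce; [10,12] stands alone.
print(coalesce([Interval(1, 3), Interval(4, 6), Interval(10, 12)]))
```

Point data is then just the degenerate interval whose start equals its stop, which is one way of seeing why the ordinary relational operators fall out as special cases.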
Chris Date, cont.
Chris Date
So let’s say all of our data is present,
with no missing values, and our
software tools are able to process
it. This doesn’t get past the really
fundamental problem, which is that
our data is only ever an imperfect
reflection of reality. Andrew Frank
(TU Vienna) argued that the only use
of information is in decision-making,
and we must therefore concentrate
on minimising the influence of data
imperfections on our decisions. In his
view, the usual idea of “data quality”
is unhelpful, because it supposes
that quality is a linear quantity, with
more detail meaning higher quality.
But the most detailed information
is not always the most useful:
if you want to catch a train, the
timetable and platform number are
all you need, while the details of the
locomotive engine are of interest only
to a minority of enthusiasts, and even
they would balk at a thermodynamic
analysis of the drive system. This is
just one example of how we routinely
simplify our understanding of the
world by classifying our observations
into objects and abstractions.
Understanding the imperfections
in our data means understanding
the processes of observation,
classification and construction.
Random observational errors are
well-understood, and there are
well-established mathematical methods
for dealing with them. In higher-level
classification, we disregard
things that are too distant, too big
or small, and so on: we reduce
the imperfections by restricting the
level of relevant detail. There is no
mathematical method for handling
some kinds of imperfect knowledge,
such as whether a bank note in
your wallet is real or fake, but there
are socially-constructed methods
for having the risk absorbed by an
entity less sensitive to a single error,
whether by taking out insurance so
that a large company takes on the
risk, or by having the risk absorbed
by the state. The Swiss land registry
is an example of the latter approach:
the law says that the registry is
accurate by definition, and so any
clash between it and other sources
of information is up to the state to
resolve. The fundamental point is
that understanding the processes by
which information is constructed is
the key to understanding the errors in
that information.
This was taken up by Bill Pike
(Pacific Northwest National
Laboratory), in his presentation on
integrating knowledge models into
the scientific analysis process. He
described the challenge of trying to
capture scientific knowledge as it is
created, with workflow models that
describe the process of discovery. In
this way, the knowledge of what was
discovered can be connected with
the knowledge of how the discovery
was made. Knowledge is created
through interaction: among people,
through dialogue, between people
and representations, between people
and tools, and even among tools.
Evaluating and reusing knowledge
requires that these interactions
be recorded. When it comes to
geoscientific data, the more content
is stored, the greater the need
to manage the perspectives that
are necessary if it is to be used
appropriately.
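One way to picture Pike’s point (a hypothetical sketch, not PNNL’s actual system; all names here are invented for illustration): record each analysis step as a small provenance entry linking inputs, tool and output, so that the “how” of a discovery travels with the “what”.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One recorded interaction: who used which tool on which
    inputs to produce which output."""
    actor: str
    tool: str
    inputs: list
    output: str

def lineage(steps, artefact):
    """Trace how an artefact was made, back through recorded steps."""
    for step in steps:
        if step.output == artefact:
            history = [step]
            for inp in step.inputs:
                history = lineage(steps, inp) + history
            return history
    return []  # a raw observation: no recorded derivation

steps = [
    Step("geologist", "field survey", [], "outcrop observations"),
    Step("analyst", "gridding tool", ["outcrop observations"], "strata model"),
    Step("analyst", "visualiser", ["strata model"], "geological map"),
]

for s in lineage(steps, "geological map"):
    print(s.actor, "->", s.tool, "->", s.output)
```

Even this toy example shows the scaling problem the speakers raised: every artefact drags its whole derivation history behind it, so the record grows much faster than the data itself.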
This means finding ways to
record the thought processes
that go into turning raw data into
usable information. A great deal
of geoscience is about narrative
explanations of observations,
such as accounts of the geological
processes that have operated over
time to create a particular landscape.
Typically, there is no unique
interpretation, and geological maps in
particular are subject to the vagaries
of individual perspectives. This can
be seen most clearly where a map
crosses a state boundary, and the
fact that each state has had its own
geologist interpreting the data leads
to startling discontinuities at the state
line. Mark Gahegan (Penn State)
used this example to illustrate the
need to record scientific processes
and communicate them to others,
capturing snapshots in the evolution
of our knowledge. As A.N. Whitehead
said, “Knowledge keeps no better
than fish”. If future generations of
scientists are to understand the work
of the present, we have to make sure
they have access to the processes
by which our knowledge is being
formed. The big problem is that, if
you include all the information about
all the people, organisations, tools,
resources and situations that feed
into a particular piece of knowledge,
the sheer quantity of data will rapidly
become overwhelming. We need to
find ways to filter this knowledge to
create sensible structures, much as
knowledge is filtered to create a train
timetable.
So how do we capture the most
meaningful information, how do we
ensure that this process is trusted,
and who gets to make the rules that
govern this work?
One method for explicitly
representing knowledge was
presented by Alberto Canas (Institute
for Human and Machine Cognition).
The concept maps that he discussed
are less ambiguous than natural
language, but not as formal as
symbolic logic. Designed to be read
by humans, not machines, they have
proved useful for finding holes and
misconceptions in knowledge, and
for understanding how an expert
thinks. These maps are composed
of concepts joined up by linking
phrases to form propositions: the
logical structure expressed in these
linking phrases is what distinguishes
concept maps from similar-looking,
but less structured descriptions
such as “mind maps”. Working with
a cardiologist on a concept map for
diagnosing heart disease, Canas
was able to tease out information
that had not been published in any
papers or books, and managed to
develop a system that diagnosed
heart problems from radiological
images with around an 80% success
rate: lower than the expert’s 90%
rate, but considerably better than
the average practitioner, who only
manages a correct diagnosis
around half the time. As well as
automating expertise, concept maps
can also be applied to capturing
knowledge so that it isn’t lost when
the experts leave, retire or die. This
is very useful in industries such as
nuclear engineering or spaceflight
– regardless of hardware, NASA
can no longer launch a flight to the
Moon as so much knowledge has
been lost since the Sixties – but it
has also found a place in preserving
aspects of traditional Thai culture
that are threatened with obliteration
by the relentless advance of global
capitalism.
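A concept map is essentially a set of concept–linking-phrase–concept triples. A minimal sketch (hypothetical Python, with toy data loosely inspired by the cardiology example, not Canas’s actual system) shows how such propositions can be stored and probed for the “holes” he described:

```python
# A concept map as proposition triples: (concept, linking phrase, concept).
propositions = {
    ("heart disease", "is indicated by", "abnormal wall motion"),
    ("abnormal wall motion", "is visible in", "radiological image"),
    ("radiological image", "is acquired by", "gamma camera"),
}

def outgoing(concept):
    """All propositions that start from a given concept."""
    return [(link, obj) for (subj, link, obj) in propositions if subj == concept]

def leaf_concepts():
    """Concepts that appear only as objects: candidate holes where
    the expert's knowledge has not been elaborated further."""
    subjects = {s for (s, _, _) in propositions}
    objects = {o for (_, _, o) in propositions}
    return objects - subjects

print(outgoing("heart disease"))
print(leaf_concepts())
```

The linking phrases are what carry the logical structure: drop them and the same triples degenerate into the looser “mind map” the article contrasts them with.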
None of these systems will allow
us to transcend the fundamental
limitations of human knowledge.
They may, however, enable today’s
scientists and their successors to
work with those limitations better
than ever before, and to ensure
that our knowledge of the world,
situated in time and space, is as
fully understood as this imperfect
Universe allows.
Geospatial
Knowledge
Infrastructures
Workshop
In Association with eSI Thematic
Programme: Spatial Semantics for
Automating Geographic Information
Processes
26 - 27 November 2007
Welsh e-Science Centre (WeSC)
Geospatial knowledge infrastructures
are computational frameworks for
expressing, managing, discovering,
annotating and utilising geospatial
knowledge.
The vision is that of a knowledge
Web that supports the automatic
exploitation of geospatial information
and scientific knowledge. Such
knowledge is expressed in many
different forms and stored in different
formats. Due to the large social,
spatial, and temporal distances
between these elements, their
discovery, association and use
require much effort, which hinders
scientific progress.
How can we express the semantics
of these varying forms of scientific
information and the meaningful
relationships among science
elements? How can we use semantic
technologies to automate the
discovery and use of this information
across a network of related
information objects? What roles can
the current and emerging semantic
and social web technologies
play in the development of this
infrastructure?
Topics covered will include knowledge
management and reasoning on the
Semantic Web and Web 2.0, and their
application to the geospatial context.
This workshop is intended for
geoscientists and researchers
in geospatial semantics and
geoinformatics.
For registration and more details
see http://www.nesc.ac.uk/esi/events/832/.
Building a better fly brain
e-Science
Institute
by Iain Coleman
One in three of us will experience
mental illness at some time in our
lives, and mental health care is
the biggest single cost to the NHS.
Treatment is often very difficult.
Drugs take time and money to
develop, with typically 13 years and a
billion pounds between the idea and
the marketplace, and even then the
drug failure rate is all too high.
Building a predictive model of the
brain – indeed, of the entire central
nervous system – could be of great
benefit to medicine, speeding up
the scientific process and making
better treatments available faster
and cheaper. The problem is that
the brain is so complex: the most
complex structure in the known
Universe. There has been a lot of
work done on linking behaviour to
the lowest-level molecular structures,
such as the genetic influence on
schizophrenia, but there is still little
understanding of all the levels in
between. Existing simulations tend to
be good at dealing with networks of
neurons and synapses, the circuitry
of the brain, but struggle to move up
to the higher levels of behaviour or
down to the lower levels of genetics.
The aim of the e-Science
Institute’s latest research theme,
Neuroinformatics and Grid
Techniques to Build a Virtual Fly
Brain, is to close the loop between
biology labs and simulations. As
the human brain is still forbiddingly
complex, the theme aims to begin by
looking at a simpler case: modelling
the brain of a fly. In his eSI public
lecture, “Building a better (fly)
brain...”, Theme Leader Douglas
Armstrong explained how the work
in this theme over the next year
will lay the foundations for a better
understanding of ourselves.
The fruit fly Drosophila has a
brain with around 50,000 neurons,
compared to the hundred billion or so in the human
brain. So it is simple enough to
present a more tractable problem,
but still complex enough to give
insights that can apply to the human
case. Unlike simpler creatures such
as worms, Drosophila’s brain is not
completely hardwired, and differs
from one individual to another.
Fly brain slice
Indeed, Drosophila has even been
shown to be able to learn. If a group
of flies in a metallic container is
exposed to two odours, one after
another, and one of the odours is
associated with an electric shock
applied to the container, about 90%
of the flies learn to subsequently
avoid the odour. The theme will
look at fly behaviour both in such
artificial situations and in more
natural circumstances, such as the
mating ritual, and try to make the
connections between observed
behaviour and the structure of the
brain.
Drosophila is particularly well suited
to this study because there is already
a huge amount of data on it, and
it is a particularly cheap animal to
work with. It has been researched in
laboratories for over a century, and
of the 11,000 genes in its genome,
around 2,000 are similar to genes
linked to human disease. Crucially
for this theme, there are around
20,000 different genetic mutants of
Drosophila readily available. Looking
at the development of these mutants
can tell researchers a lot about how
the normal structure is formed.
Even a fly brain is only comparatively
simple. Each of its 50,000 neurons
has about 500 synapses, and each
of these is a processing unit in itself,
not just a simple building block. With
2,000 proteins per synapse, the
fly brain has about 50 billion basic
units. By comparison, the Edinburgh
BlueGene computer has 2048 CPUs,
each with 241 million transistors,
making about 490 billion basic units.
Armstrong likened this fly brain study
to trying to understand a system as
large and complex as BlueGene by
hacking bits off of it and seeing what
happens.
So it’s a formidable challenge, but
one which is now achievable thanks
to the marriage of a new generation
of computing with extensive
biological data, from gene sequences
to behaviour analysis. The idea is
gaining momentum: following this
theme, there will be a full conference
on the topic in Washington in
2009. Between now and then, the
series of lectures and workshops
in this theme should go a long way
towards unravelling how the complex
structures of the brain affect how we
think, feel, and understand.
e-Science
Institute
Cardiff supplies UK National Grid Service
with first Windows Condor Pool
The UK National Grid Service (NGS) is pleased to announce the first
Windows Condor Pool to be added to the service. Cardiff University’s
Information Services Directorate has been running a 1000-processor Condor
Pool for over three years, and use by the University’s researchers has grown
considerably, saving local researchers years of time in processing their
results. This resource is now freely available to all NGS users.
Jonathan Giddy, Grid Technologies Co-ordinator for the Welsh e-Science
Centre, said “The Windows Condor Pool can be used to perform a range
of computations, from determining the structure of proteins to calculating
radiotherapy dosages. By contributing these resources to the National Grid
Service we are enabling researchers nationwide to run a greater number of
Windows-based programs, thereby continuing to open up the NGS to new
types of user.”
Cardiff University’s new Advanced Research Computing Division, led
by Professor Martyn Guest, will now run the Condor Pool in addition to
purchasing and managing a large tightly coupled cluster for the benefit of
local researchers.
Dr James Osborne, Condor Project Manager and Application Support
Engineer for the Advanced Research Computing division, said “The Windows
Condor Pool is the most widely used computing resource on campus and
has delivered over 2 million CPU hours since early 2006. The largest users
of Condor are based in the Department of Epidemiology, Statistics and
Public Health and are using Condor to help them analyse their data using
combinatorial methods.”
End of the First
Phase of the UK
National Grid
Service
The UK National Grid Service
upgraded the machines at the four
core sites of Manchester, Leeds,
RAL and Oxford over the summer.
There are some important changes
regarding user access to the old
hardware (referred to as NGS1). All users of the UK National
Grid Service are urged to check
the front page of the NGS website
(http://www.ngs.ac.uk) as soon as
possible for further information. If
you have any questions regarding
this announcement, please contact
the NGS Support Centre at
support@grid-support.ac.uk or 01235 446822.
UK e-Science Showcase at
SuperComputing 07
This November the UK’s e-science
programme will be showcased
through a selection of presentations
and demonstrations from a wide
range of application areas at SC07
(http://sc07.supercomputing.org) in
Reno, Nevada.
Funded by the STFC and EPSRC
and hosted by the UK National
Grid Service, the United Kingdom
pavilion will feature continuous
demonstrations of cutting edge
applications of HPC and Grid
Computing. The auditorium in the
centre of the pavilion will highlight
particular projects from the UK
e-science programme and allow for
in-depth audience questions and
discussion.
Projects to be demonstrated at the
pavilion include:
AstroGrid (http://www.astrogrid.org/)
CampusGrid (http://www.omii.ac.uk/solutions/campusgrid.jsp)
GridPP (http://www.gridpp.ac.uk/)
National Centre for Text Mining
(NaCTeM) (http://www.nactem.ac.uk/)
GridSAM (http://www.lesc.imperial.ac.uk/gridsam/)
National Grid Service JSDL
Application Repository (https://portal.ngs.ac.uk)
The UK e-Science Pavilion at SC06
in Florida, US.
Photo by Pete Oliver.
e-Science
Institute
DCC Conference: “Curating our Digital Scientific Heritage: a
Global Collaborative Challenge”
The 3rd International Digital Curation
Conference “Curating our Digital
Scientific Heritage: a Global
Collaborative Challenge,” will be
held on 11-13 December 2007, in
Washington DC, USA, in partnership
with the US National Science
Foundation (http://www.nsf.gov/)
and the Coalition for Networked
Information (http://www.cni.org/).
A drinks reception will be held at the
National Museum of the American
Indian (http://www.nmai.si.edu/) on
the evening of 11 December. The
conference starts on 12 December
with a keynote address from
Professor John Wood, formerly
the Chief Executive of the Council
for the Central Laboratory of the
Research Councils, and currently
Principal of Imperial College
London’s Faculty of Engineering.

e-SI Public Lecture

The e-Science Institute will host a
public lecture by Dr Werner Kuhn,
Professor of Geoinformatics at the
University of Münster, on the subject
of “Dynamizing Spatial Semantics”.
The public lecture is open to all
interested parties in academia and
industry. There is no need to register
for this event and the lecture will also
be available via web cast.
A common thread through the 2007
e-Science Theme “Spatial Semantics
for Automating Geographic
Information Processes” has been
the question of how processes affect
the meaning of spatial information.
For example, we have studied how
the function of objects (like buildings
or vehicles) determines their
categorization and how change of
meaning over time can be modelled.
In this final theme presentation,
Professor Kuhn will draw some
conclusions on the growing role of
dynamic theories and models of
spatial semantics.
16:00, 8 February 2008
Newhaven Lecture Theatre, e-Science
Institute, 15 South College Street,
Edinburgh. http://www.nesc.ac.uk/esi/events/833/
Following the keynote will be a
session featuring four national
perspectives on curation: the UK
perspective presented by Dr Astrid
Wissenburg, Economic and Social
Research Council (ESRC); a US
perspective presented by Chris Greer,
National Science Foundation (NSF);
a European perspective presented
by Mario Campolargo, from the
European Commission; and an
Australian view from Dr Rhys Francis,
Executive Director, Australian
eResearch Infrastructure Council.
The afternoon will address the
issue of “Sustainable Access to the
Records of Science” looking at case
studies from a range of different
perspectives with the aim of seeding
discussion and moving towards the
drafting of a “Curation Manifesto” as
an output from the Conference. The
day will finish with a keynote from
Rick Luce, Vice Provost and Director
of Libraries at Robert Woodruff
Library, Emory University.
The second day will be dedicated
to peer-reviewed research papers
and will open with a keynote from
Professor Carole Goble, School of
Computer Science, University of
Manchester, UK. The conference will
close with a summing up keynote
from Clifford Lynch, Director of the
Coalition for Networked Information
(CNI).
The full programme and registration
are available at: http://www.dcc.ac.uk/events/dcc-2007/
GCN! Webinar - The Business Case and
Methods for the Green Data Centre
The latest Grid Computing Now! Webinar, putting forward the business case
for ‘greening’ the data centre, was a great success, with 77 participants online.
Zahl Limbuwala, chair of the BCS Data Centre Specialist Group, presented
some of the shifts in behaviour that lie ahead for data centre owners and
operators.
Kate Craig-Wood, Managing Director of Memset, is already running carbon-neutral
data centres. She described what Memset have done to achieve this
status, and explained the business benefits of adopting a “green” approach.
The Webinar, plus question and answer session, can be viewed here:
http://mediazone.brighttalk.com/event/gridcomputingnow/6e0721b2c6-783-intro.
Boost to ICEAGE library
The ICEAGE Library has been growing and currently contains a total of
1406 entries, all of which are designed for Grid education. Content is
available in a huge range of formats, including a large number of articles,
audio tracks, presentations, and tutorials.
Following the success of the flagship International Summer School for Grid
Education 2007 (ISSGC’07) we are in the process of adding content to the
ICEAGE library. At the ISSGC’07 students were treated to presentations
given by leading Grid experts; the best presentations are being made
available in a variety of formats online.
To find out more and access ISSGC’07 content click here:
http://library.iceage-eu.org/resolve/resolver.jsp?rfr_id=info:sid/nesc.ac.uk:library&rft_dat=lib:7977&svc_dat=details
Currently there are 20 ISSGC’07 presentations available, and this will rise to 30 in the next few weeks.
High-Throughput
Computing Week
This High-Throughput Computing
(HTC) event will run from 27-30
November and is intended to interest
those who may benefit from HTC
in their research or businesses and
those who provide HTC for their
users. It covers everything from how
to transform a task so that it can
benefit from HTC, through choosing
technologies that deliver HTC, to
providing cost effective services that
are convenient to use.
Speakers include Miron Livny
of Condor and John Powers of
Digipede, as well as Jason Stowe of
Cycle Computing and Akash Chopra
of Barrie & Hibbert, representing
commercial users.
Each day will focus on a different
aspect of HTC. Delegates may
register for individual days. There will
be a workshop dinner on the evening
of Day 3.
Day 1: Example Solutions:
presentations from users in academia
and enterprise show how HTC has
transformed their work.
Day 2: Technology comparison and
training: two technology providers
demonstrate how their systems
would tackle the same problem.
Day 3: Requirements gathering:
researchers, applications developers
and providers discuss their HTC
requirements, including security,
usability, energy efficiency and
reliability.
Day 4: The Future of HTC: users,
service providers and technology
providers discuss long term
roadmaps.
This meeting is intended for a mix of
users of HTC as a service, service
providers and application developers
who build on top of HTC systems.
Participants should be from the
commercial as well as academic
communities.
For registration and more details
see http://www.nesc.ac.uk/esi/events/831/.
OMII-UK helps e-infrastructure broaden its
reach
Choreography, archaeology and medieval warfare are not typically
associated with the use of e-infrastructure. However, pioneering researchers
within these fields have shown that cutting-edge computer technology is not
just for computer scientists. OMII-UK aims to promote this trend by engaging
with research groups from all disciplines to find out what they need to get
started with e-infrastructure. To do this, OMII-UK needs to hear from YOU.
Founded by EPSRC to provide UK researchers with a sustainable source
of free software, support and expertise, OMII-UK is investigating the
e-infrastructure needs of the UK research community (in conjunction with
the e-IUS and e-Uptake projects). This project, called ENGAGE, will be
jointly undertaken with the National Grid Service, one of the UK’s largest
e-infrastructure providers. The ENGAGE project will help OMII-UK to identify
the common problems that prevent researchers from making use of
e-infrastructure, disseminate best practice, and develop and deploy suites of
software to meet the needs of researchers.
OMII-UK is an open-source software organisation based at the Universities
of Southampton, Manchester and Edinburgh. If your group is interested in
taking part, and potentially benefiting from software to support the use of
computing infrastructure in your research, please contact OMII-UK
(info@omii.ac.uk).
Call for Affiliated Workshops EuroSys 2008
1st - 4th April, 2008
Researchers and practitioners are invited to submit proposals for workshops
on topics related to design, implementation, evaluation and deployment of
computer systems.
The purpose of the workshops is to provide participants with a forum for
presenting novel ideas and discussing them in a small and interactive atmosphere.
Proposals should include:
The name and the preferred date of the proposed workshop.
A brief CV of the organisers.
A short summary of the topic, its scope and significance, including a
discussion of its relation to the EuroSys topics.
A description of past versions of the workshop, including dates, organisers,
submission and acceptance counts, and attendance (or an indication that the
workshop is new).
Procedures for selecting papers, plans for dissemination (ranging from, for
example, special issues of journals, to just position statements distributed to
the participants), and the expected number of participants.
Affiliated Workshop Date: April 1st, 2008
Workshop proposals due November 23, 2007
Notification of acceptance due December 8, 2007
For further information, please visit the Eurosys 2008 web site (http://www.
eurosys.org/2008), or contact the workshop co-chairs at eurosys2008_
workshops@eurosys.org.
Forthcoming Events Timetable

November
10-16: SC07, Reno-Sparks Convention Center, Nevada. http://sc07.supercomputing.org/
13-14: SRM2.2 Deployment Workshop, National e-Science Centre. http://www.nesc.ac.uk/esi/events/827/
14: Symposium on “The Dalmarnock Fire Tests: Experiments & Modelling”, Royal Museum, Edinburgh. http://www.see.ed.ac.uk/FIRESEAT/
14-16: 10th IEEE High Assurance Systems Engineering Symposium, Dallas, Texas. http://hase07.utdallas.edu/
26-27: Geospatial Knowledge Infrastructures Workshop, Welsh e-Science Centre. http://www.nesc.ac.uk/esi/events/832/
26-27: Grids and other eInfrastructures for Education, National e-Science Centre. http://www.nesc.ac.uk/esi/events/830/
26-30: MGC 2007, Newport Beach, Orange County. http://mgc2007.lncc.br/
27-30: High Throughput Computing Week, National e-Science Centre. http://www.nesc.ac.uk/esi/events/831/

December
5: e-Infrastructures in the Arts and Humanities and Social Sciences: Grid-enabling Data Sets, e-Science Institute. http://www.nesc.ac.uk/esi/events/835/
10: Workshop for UK e-Science educators, TOE. http://www.nesc.ac.uk/esi/events/834/
10-12: International Grid Interoperability and Interoperation Workshop 2007, Bangalore, India. http://omii-europe.org/OMII-Europe/igiiw2007.html
10-13: 2nd International Workshop on Scientific Workflows and Business Workflow Standards in e-Science, Bangalore, India. http://staff.science.uva.nl/~adam/workshops/e-science2007/cfp-swbes2007.htm
10-13: 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India. http://www.escience2007.org/index.asp
11-13: 3rd International Digital Curation Conference “Curating our Digital Scientific Heritage: a Global Collaborative Challenge”, Washington DC. http://www.dcc.ac.uk/events/dcc-2007/
13-15: ICMLA’07, Cincinnati, Ohio. http://www.icmla-conference.org/icmla07/

February
8: e-Science Institute Public Lecture, “Dynamizing Spatial Semantics”, e-Science Institute. http://www.nesc.ac.uk/esi/events/833/
This is only a selection of events that are happening in the next few months. For the full listing go to the following
websites:
Events at the e-Science Institute: http://www.nesc.ac.uk/esi/esi.html
External events: http://www.nesc.ac.uk/events/ww_events.html
If you would like to hold an e-Science event at the e-Science Institute, please contact:
Conference Administrator,
National e-Science Centre, 15 South College Street, Edinburgh, EH8 9AA
Tel: 0131 650 9833 Fax: 0131 650 9819
Email: events@nesc.ac.uk
The NeSC Newsletter is produced by Gillian Law,
email glaw@nesc.ac.uk
The deadline for the December Newsletter is: 27th November 2007