Work Package Data Acquisition
Job Description 1 – Operation Automation
Project Title
Automation of the operation of a large physics experiment
Institute - Group:
CERN-PH/AID
Supervisor:
Franco Carena
Qualifications/
experience required:
University Degree or equivalent diploma in computing or in physics
with a specialization in computing.
Training value to
applicant:
The fellow will be trained in online software and control automation
Impact
Industrial involvement
Starting date
Project Introduction and background
ALICE (A Large Ion Collider Experiment) is the heavy-ion experiment at the Large Hadron
Collider (LHC). The ALICE Collaboration has built a dedicated heavy-ion detector to exploit the
unique physics potential of nucleus-nucleus interactions at LHC energies, with the aim of studying
the physics of strongly interacting matter at extreme energy densities, where the formation of a new
phase of matter, the quark-gluon plasma, is expected. Complex trigger and data acquisition
systems have been installed to select the most interesting events online and write them to
persistent storage. ALICE has now entered its operation phase and is observing collisions of
particle beams that generate large amounts of data. The stable and reliable operation of the control
system of such a complex apparatus is critical for the success of the experiment. The shift load
placed on the collaboration personnel by the operation of the experiment is heavy, because the
planned operation periods, set by the LHC schedule, are very long.
The fellow will be a member of the team in charge of the data-acquisition system of ALICE.
The project will focus on the study of methods for automating the work of the operators in the
ALICE Control Room, with the purpose of reducing their number and the required period of
attendance. The fellow will thus acquire in-depth knowledge of the management, installation and
deployment of forefront data-acquisition technologies. The fellow will have to develop a
multi-layered understanding of both the hardware and the software of all the ALICE online systems (the
Detector Control System, the Trigger, the Data Acquisition and the High-Level Trigger) and a
profound understanding of the Experiment Control System software, which is responsible for the
synchronization of the online systems. In addition, the fellow will learn all the tools available
to the operator to monitor the experiment and the online systems: the InfoLogger, which reports
information and error messages; the electronic LogBook; and the Lemon system, which
monitors the hardware of the DAQ computing system.
The broad goal of this control automation system is to combine the information provided by the
different systems and to propose solutions on how to bring the whole experiment to a state
compatible with physics data taking.
In order to ease the operation of the experiment as much as possible by automating some of its
tasks, the first part of the project will focus on gathering experience with typical
situations that can be fixed with well-established recipes. The second part of the project
consists of the design, development and deployment of software, possibly based on expert
system technology, for the reliable automation of the control system.
This offers an ideal educational environment with cutting-edge technology in which to exercise
software development skills under realistic conditions and to gradually gain experience in all
aspects of a high-energy physics experiment.
Work package Research and Work Plan
During the first part of the project the fellow will acquire in-depth knowledge of data acquisition
technologies and improve skills in programming, system design, graphical user interfaces, Finite
State Machines, databases, data monitoring methods, optimization techniques and general engineering.
Working in an international and stimulating environment such as CERN will develop social and
language skills and familiarity with research activities.
The second part of the project will consist of the following phases:
- Study how to describe the various sub-systems as decentralized entities, reacting in real-time to changes in the system;
- Provide interfaces to all the online systems to get information about their behavior;
- Identify the method to be used to emulate the expertise of the experts who would otherwise need to be consulted;
- Select the rule-based programming language and method for the implementation;
- Choose the formalism for the creation of the knowledge-base database;
- Implement the knowledge base as a database of typical patterns reflecting situations that lead to degraded functioning of the experiment;
- Study the problem-solving models;
- Make a list of recipes on how to handle those situations;
- Study the rules of inference;
- Develop an interactive program proposing solutions to the operator (a minimal sketch of this idea follows the list).
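To make the expert-system idea more concrete, the following minimal C++ sketch (C/C++ being the language family the DAQ software is written in) shows one possible shape for such a program: a knowledge base of condition/recipe pairs is scanned against a snapshot of the online systems, and the matching recipes are proposed to the operator. All identifiers (SystemState, Rule, proposeRecipes) and both example rules are hypothetical and do not correspond to any existing ALICE software.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical snapshot of the online systems, keyed by status name.
    using SystemState = std::map<std::string, std::string>;

    // A rule pairs a condition on the state with a recipe for the operator.
    struct Rule {
        std::string name;
        std::function<bool(const SystemState&)> condition;
        std::string recipe;
    };

    // Scan the knowledge base and collect the recipes whose condition matches.
    std::vector<std::string> proposeRecipes(const std::vector<Rule>& knowledgeBase,
                                            const SystemState& state) {
        std::vector<std::string> proposals;
        for (const auto& rule : knowledgeBase)
            if (rule.condition(state))
                proposals.push_back(rule.name + ": " + rule.recipe);
        return proposals;
    }

    int main() {
        // Two invented example rules; a real knowledge base would be far richer.
        std::vector<Rule> knowledgeBase = {
            {"DAQ stuck",
             [](const SystemState& s) { return s.at("daq") == "PAUSED"; },
             "Resume the run from the ECS interface."},
            {"Trigger busy",
             [](const SystemState& s) { return s.at("trigger") == "BUSY"; },
             "Check the busy detector and restart its readout."}};

        SystemState state = {{"daq", "PAUSED"}, {"trigger", "OK"}};
        for (const auto& p : proposeRecipes(knowledgeBase, state))
            std::cout << p << "\n";
        return 0;
    }

In a full system the conditions would be fed by the InfoLogger, LogBook and Lemon data mentioned above rather than by a hand-built state map.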
The research part of this job will consist of the following areas:
- Establish the scope of the automation.
- Identify and evaluate the techniques that could be used to develop a control automation system.
- Develop new concepts leading to a unified control of the experiment and to its automation.
The goal of the work plan is first to develop a prototype that can be tested in production in the
experiment, evaluating its effectiveness when used by the ALICE ECS-DAQ shifters, and then to
implement a production version of the system.
Training plan
Year 1
On the job training: General overview of the ALICE experiment Trigger, DAQ and ECS.
Formal training:
- Programming courses: CERN School of Computing
- Technical software training: Unix Programming/Distributed applications development held at CERN
- Language courses: French/English courses held at CERN
Year 2
On the job training: Expert systems technologies.
Formal training: Academic Training courses at CERN on Artificial Intelligence.
Year 3
Formal training: Communication courses (Communicating effectively; How to make presentations).
Work Package Data Acquisition
Job Description 2 – Software Refactoring
Project Title
Refactoring of a large online software package
Institute - Group:
CERN-PH/AID
Supervisor:
Sylvain Chapeland
Qualifications/
experience required:
University Degree or equivalent diploma in computing. Good
knowledge of the C/C++ languages is necessary, as well as willingness
to learn others if needed (Tcl, bash). Communication abilities will also be an
important aspect of the work, in order to interact with the software
experts of the multicultural DAQ team.
Training value to
applicant:
The fellow will get training in the area of large data-acquisition systems,
and will exercise his or her skills in software development and quality
assurance techniques.
Impact
The Early Stage Researcher will become an expert in software design
processes, code refactoring, and quality assurance techniques.
Industrial involvement
Starting date
Project Introduction and background
ALICE is an experiment observing and analyzing the particle collisions produced in the CERN
Large Hadron Collider (LHC). The detector outputs physics data at a sustained rate of up to 2
gigabytes per second. The DATE software (ALICE Data Acquisition and Test Environment)
implements the distributed processes and control mechanisms necessary for high-performance
and reliable data taking, and runs on several hundred computers. It has grown from the first test
setups in the late 1990s to maturity in production for the LHC. It consists of 300,000 lines of
source code (mainly C and Tcl/Tk) spread over 30 packages.
After 15 years of continuous development driven by evolving requirements, it is now time for a
refactoring of this software package. Refactoring is the process of applying behavior-preserving
transformations to a program with the objective of improving the program's design. It includes an
in-depth code review in order to identify potentially outdated or unused features.
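To illustrate what a behavior-preserving transformation looks like in practice, here is a deliberately tiny, hypothetical C++ example: two routines that duplicated the same error-reporting logic are refactored so that the logic lives in one shared helper. The observable behavior is unchanged; only the design improves. The function names are invented and are not part of DATE.

    #include <cstdio>

    // Before the refactoring, two readout routines duplicated the same
    // error-reporting logic (illustrative only; not DATE code):
    //
    //   int readTpc() { std::fprintf(stderr, "[tpc] read failed\n"); return -1; }
    //   int readTrd() { std::fprintf(stderr, "[trd] read failed\n"); return -1; }
    //
    // After: the duplicated logic is extracted into one helper, so the
    // program produces exactly the same output with a better design.

    static void reportReadError(const char* source) {
        std::fprintf(stderr, "[%s] read failed\n", source);
    }

    int readTpc() { reportReadError("tpc"); return -1; }
    int readTrd() { reportReadError("trd"); return -1; }

    int main() {
        readTpc();
        readTrd();
        return 0;
    }

Across 300,000 lines and 30 packages, the value of such transformations comes from applying many of them systematically, each one verified by tests.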
Work package Research and Work Plan
The goal of this position is to:
- first acquire knowledge about the DATE system and its many underlying control and data flows. This step involves understanding the context in which the tool is used in the production environment, inferring the associated requirements in terms of features and flexibility, and inspecting the source code at a low level to grasp the design and implementation choices. It is a unique opportunity to learn about complex distributed systems.
- then analyze and review the architecture and implementation, and prepare a detailed report of the outcome. This includes discussions with the developers of the various packages (hence practicing communication skills) and the application of the software quality procedures learned in parallel with the technical training to identify potential issues and sort them by level of importance.
- finally propose and implement possible changes, for example ways to homogenize inter-process communication, or to group in a common library similar features used by different components (a sketch of this idea follows the list). Large autonomy is granted in the design choices to be made, such as the use of new software methods that were not available at the time some processes were initially developed. The major challenge of this task is to combine potentially original software solutions with the learned testing procedures necessary for a smooth evolution in the production environment.
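As a purely hypothetical sketch of the "common library" idea from the last point, the C++ fragment below hides two different transport mechanisms behind a single message-passing interface, so that components written against the interface no longer depend on which mechanism carries their data. None of these class names exist in DATE.

    #include <iostream>
    #include <string>

    // A single interface that all components would program against.
    class MessageChannel {
    public:
        virtual ~MessageChannel() = default;
        virtual void send(const std::string& msg) = 0;
    };

    // Two concrete transports; in reality these might wrap sockets or FIFOs.
    class SocketChannel : public MessageChannel {
    public:
        void send(const std::string& msg) override {
            std::cout << "socket: " << msg << "\n";  // placeholder for a socket write
        }
    };

    class FifoChannel : public MessageChannel {
    public:
        void send(const std::string& msg) override {
            std::cout << "fifo: " << msg << "\n";    // placeholder for a FIFO write
        }
    };

    // A component only sees the common interface, not the transport.
    void notifyRunStart(MessageChannel& channel) { channel.send("START_OF_RUN"); }

    int main() {
        SocketChannel socketChannel;
        FifoChannel fifoChannel;
        notifyRunStart(socketChannel);
        notifyRunStart(fifoChannel);
        return 0;
    }

The design choice is the classic one: homogenizing behind an interface lets each component evolve or be replaced without touching its callers.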
Training plan
Year 1
On the job training: Exercise and operate the DAQ software in the production environment for a good understanding of its components.
Formal training (CERN Technical Training):
- Secure coding in C/C++
- Introduction to Databases and Database Design
Year 2
On the job training: Code analysis and writing a detailed report about possible improvements.
Formal training (CERN Technical Training):
- Communicating effectively
- Quality management
Year 3
On the job training: Implement ad-hoc changes in the DAQ software.
Formal training (CERN Technical Training):
- ISTQB (International Software Testing Qualifications Board)
Work Package Data Acquisition
Job Description 3 – Detector Readout
Project Title
Development of high-speed detector readout
Institute - Group:
CERN-PH/AID
Supervisor:
Filippo Costa
Qualifications/
experience required:
University Degree or equivalent diploma in computing or in physics
with a specialization in computing
Training value to
applicant:
The fellow will be trained in online software and in hardware/software
integration.
Impact
The fellow will be exposed to several key computing and networking
technologies and will increase his or her knowledge of these technologies
and their interaction. He or she will have the opportunity to contribute to
a large project through practical developments. This will remarkably enhance
the fellow's potential, whether he or she pursues a career in the public or
the private sector.
Industrial involvement
Starting date
September 2011
Project Introduction and background
The ALICE DAQ project is based on high-speed computing and networking systems, it is fully
functional and it exhibited outstanding performance during the 2009. The data acquisition
software developed by the ALICE DAQ group consists of several software packages. The readout
software is a specific one that reads the information coming from the front end electronics of the
detectors. It interfaces itself with the readout hardware using a software module called equipment;
there are different equipments, to read data from several sources, giving to the data acquisition
software an extreme flexibility to the whole system.
As part of the ALICE upgrade, faster data links and new protocols will be used to deliver data
from the detector electronics to the DAQ system in a high-radiation environment and the readout
program will need new equipments to read the data from boards using those new links and high
throughput data transfer protocol.
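A minimal sketch of the equipment concept, rendered in plain C++ (the actual DATE interface differs and is written mainly in C): each data source is wrapped in an object with a common read method, so the readout loop can drive heterogeneous equipments uniformly. All class names here are invented for illustration.

    #include <cstdint>
    #include <iostream>
    #include <memory>
    #include <vector>

    // Common interface that every equipment implements (hypothetical).
    class Equipment {
    public:
        virtual ~Equipment() = default;
        // Returns the payload of the next event fragment from this source.
        virtual std::vector<uint8_t> readFragment() = 0;
    };

    // One equipment per kind of data source; here two toy examples.
    class DdlEquipment : public Equipment {
    public:
        std::vector<uint8_t> readFragment() override {
            return {0x01, 0x02};  // placeholder for a read over an optical link
        }
    };

    class UdpEquipment : public Equipment {
    public:
        std::vector<uint8_t> readFragment() override {
            return {0x03};        // placeholder for a read from a network socket
        }
    };

    int main() {
        // The readout loop does not care which concrete equipment it drives.
        std::vector<std::unique_ptr<Equipment>> equipments;
        equipments.push_back(std::make_unique<DdlEquipment>());
        equipments.push_back(std::make_unique<UdpEquipment>());

        for (auto& eq : equipments)
            std::cout << "fragment of " << eq->readFragment().size() << " bytes\n";
        return 0;
    }

This mirrors the flexibility described above: supporting a new link type means adding one new equipment class, leaving the readout loop untouched.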
Work package Research and Work Plan
Working in the ALICE DAQ group offers an ideal educational environment for young developers,
who get to work with cutting-edge technology. Moreover, the DAQ prototypes are used
during several phases of the detector development, including tests in the laboratory, test-beam runs,
and detector commissioning.
This gives the fellow the opportunity to participate in all aspects of a high-energy physics
experiment and to exercise his/her software development skills in a realistic environment. The
research part of this EST Fellowship will consist of the following phases:
- Understand the connection between the different packages of DATE.
- Identify the methods to read out the data provided by the detector readout boards.
- Design and implement the algorithm to handle the data acquisition and the flow control.
- Provide debugging tools to be used during the acquisition to verify that the system is working properly (a sketch of such a tool follows below).
- Provide a common software for the slow control that uses the same syntax but is able to interface with the different equipments.
At the end of each phase, the software developed by the fellow will be released and documented.
It will then be made available to the whole ALICE collaboration, as part of the DAQ software
framework.
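As one hedged illustration of the debugging-tools phase mentioned above, the sketch below checks a stream of event fragments for a well-formed header and a consistent length field, the kind of invariant such a tool might verify during acquisition. The fragment layout (a magic byte followed by a payload size) is invented for the example and does not correspond to the actual ALICE data format.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Invented fragment layout: [magic (1 byte)] [payload size (1 byte)] [payload].
    constexpr uint8_t kMagic = 0xDA;

    // Returns true if the buffer parses as a sequence of well-formed fragments.
    bool validateStream(const std::vector<uint8_t>& buffer) {
        size_t pos = 0;
        while (pos < buffer.size()) {
            if (buffer[pos] != kMagic) {
                std::cerr << "bad magic at offset " << pos << "\n";
                return false;
            }
            if (pos + 2 > buffer.size()) {
                std::cerr << "truncated header at offset " << pos << "\n";
                return false;
            }
            size_t payload = buffer[pos + 1];
            if (pos + 2 + payload > buffer.size()) {
                std::cerr << "truncated payload at offset " << pos << "\n";
                return false;
            }
            pos += 2 + payload;
        }
        return true;
    }

    int main() {
        std::vector<uint8_t> good = {kMagic, 2, 0x11, 0x22, kMagic, 0};
        std::vector<uint8_t> bad  = {kMagic, 5, 0x11};  // claims more bytes than present
        std::cout << "good stream valid: " << validateStream(good) << "\n";
        std::cout << "bad stream valid: "  << validateStream(bad)  << "\n";
        return 0;
    }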
Training plan
On the job training
Year 1
- Build a DAQ test set-up, gaining knowledge of all the connections between the DATE packages.
- Identify the methods to read out the data provided by the front-end electronics of the detector.
Year 2
- Design and implement a new algorithm to handle the data taking using the new data transmission protocols.
- Build debugging tools to test the software.
Year 3
- Development of a slow control software using the DATE framework.
Formal training
Work Package Data Acquisition
Job Description 4 – Online software dynamic monitoring
Project Title
Online software dynamic monitoring
Institute - Group:
CERN-PH/AID
Supervisor:
R. Divia
Qualifications/
experience required:
University Degree or equivalent diploma in computing.
Training value to
applicant:
The fellow will be trained on the control and monitoring of online
distributed software and its interface with human operators and field
experts.
Impact
Industrial involvement ETM
Starting date
Project Introduction and background
The ALICE experiment at CERN acquires data via a data-driven architecture that includes
thousands of components: online computers (collectors, concentrators, servers, workers), point-to-point
data links, networking elements, data storage elements, data quality checkpoints and control
stations. One person, the DAQ (Data AcQuisition) and ECS (Experiment Control System)
operator, controls these components via dedicated user interfaces. This Work Package will provide
a framework capable of giving adequate aid to the DAQ/ECS operator, of assisting the daily operation
procedures, and of guaranteeing solid and reliable support for effective fault-finding and error-recovery
tools. The project covers multidisciplinary domains such as gathering information from
multiple, widely distributed sources, merging this information, taking unassisted decisions based on
heuristic logic, selecting, if possible, candidate remedies, and presenting the results to the
DAQ/ECS operator. Dedicated facilities must also be provided to the DAQ/ECS experts in order
to guarantee the necessary support for day-to-day operations, to enhance the functionality of the
framework, and to adapt the package to future upgrades of the ALICE DAQ/ECS systems.
Work package Research and Work Plan
This project will follow the following work plan:
- comparative analysis of equivalent systems used within LEP and LHC and of equivalent industrial systems;
- detailed analysis of the current DAQ/ECS system while in operation: how it works, how the operator interacts with it, how problems are detected, analyzed and solved by the operator either in full autonomy or with the assistance of on-call experts;
- definition of the tools to be used for metrics and alarms: sensors, transport services, data collection and monitoring;
- definition of the framework used by the monitoring system logic engine(s): how metrics are analyzed, how alarms are triggered by the sampled data using a flexible and adaptable heuristic engine, and how conclusions and recovery procedures are selected (a minimal sketch of this chain follows the list);
- interfacing with the operator: how to assist a non-expert in reacting to abnormal operating scenarios quickly and efficiently;
- definition of an initial set of metrics, alarms, rules and remedies based on input provided by the DAQ/ECS developers;
- support for the ECS/DAQ experts: how rules and remedies can be added, validated and modified in accordance with the natural evolution of the system being monitored, and how the framework can be maintained and migrated to newer architectures.
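Purely as a sketch of the metric/alarm/remedy chain referred to in the work plan, and under the simplifying assumption of threshold-based heuristics (a real engine would be far more elaborate), the following C++ fragment evaluates sampled metrics, raises an alarm when a limit is crossed, and looks up a candidate remedy to present to the operator. All identifiers and values are hypothetical.

    #include <iostream>
    #include <map>
    #include <string>

    // A sampled metric with the threshold above which an alarm is raised.
    struct Metric {
        std::string name;
        double value;
        double alarmThreshold;
    };

    // Map from alarm name to a candidate remedy shown to the operator.
    const std::map<std::string, std::string> kRemedies = {
        {"event_backlog", "Check the event builder; restart it if unresponsive."},
        {"disk_usage",    "Trigger migration of completed runs to mass storage."}};

    // Evaluate one metric: raise an alarm and propose a remedy if it fires.
    void evaluate(const Metric& m) {
        if (m.value <= m.alarmThreshold) return;  // nothing to report
        std::cout << "ALARM " << m.name << " = " << m.value << "\n";
        auto it = kRemedies.find(m.name);
        if (it != kRemedies.end())
            std::cout << "  suggested remedy: " << it->second << "\n";
    }

    int main() {
        evaluate({"event_backlog", 12000, 10000});  // fires, remedy proposed
        evaluate({"disk_usage", 0.55, 0.90});       // silent
        return 0;
    }

The expert-support point above corresponds to making kRemedies and the thresholds editable and validatable data rather than code.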
Dedicated training will be provided to properly identify the requirements, to integrate with the
existing architecture, and to define a correct interface with the human operator. The Work
Package will require special skills and tools in order to build an efficient framework, capable of
evolving together with the ECS and DAQ systems. Easy prototyping, testing and validation of rules
and features must be provided. Adequate support must be guaranteed for the framework and for
the set of rules used during the decision-taking process.
The duration of the Work Package allows a complete work cycle, from the definition of the base
requirements up to the consolidation procedures, going through the design, development and
validation of tools and libraries. All the working aspects of such a project will have to be covered.
The fellow will be given the opportunity to follow the project in all its phases and to
witness its deployment for operation in a real-life environment, all under the direct assistance of
skilled DAQ and ECS experts.
Training plan
Year 1
On the job training: Analysis of existing DAQ and ECS systems, of software packages currently in use by HEP experiments and by equivalent industry setups, and of their operating procedures. Comparative field survey of existing tools and frameworks.
Formal training: SCADA systems and associated fault detection and recovery frameworks. ALICE DAQ and ECS operation, maintenance, and development procedures.
Year 2
On the job training: Implementation of the support tools required by the framework: communication libraries, debugging tools, support databases, editors, Human Interfaces (for developers and operators).
Formal training: As required by the chosen tools and technologies. Software deployment procedures.
Year 3
On the job training: Consolidation, field validation while in use by the DAQ/ECS operator, integration within the existing operating procedures, editing of operating guides, establishment of maintenance and configuration procedures.
Formal training: Documentation tools. Software review and consolidation procedures.
Work Package Data Acquisition
Job Description 5 – Business Intelligence applied to Online software
Project Title
The design, development and deployment of a Business Intelligence (BI)
platform as part of the ALICE Data-Acquisition (DAQ) project
Institute - Group:
CERN-PH/AID
Supervisor:
Vasco Chibante
Qualifications/
experience required:
University Degree or equivalent diploma in computing. Excellent
knowledge of software development is required. Experience and
knowledge of database technologies and some familiarity with distributed
computing systems are needed. Experience with BI tools and/or Data
Warehousing is a plus. Good communication and presentation skills.
Training value to
applicant:
The training part of this fellowship will cover high-performance database
technologies, Data Warehouse concepts and systems, and Business
Intelligence concepts and systems.
Impact
Based on recent years' trends, the deployment of open-source Business
Intelligence will continue to grow, with experts forecasting a yearly
increase of 100%.
This Fellowship, by providing both theoretical and practical knowledge
of open-source Business Intelligence platforms and Data Warehousing,
will prepare the Fellow to apply for jobs in industry, where the demand for
such skills will remain high.
Additionally, it will provide the Fellow with experience of working in large and
multidisciplinary scientific collaborations, which will also be a plus for
his or her professional career, whether in academia or in industry.
Industrial involvement Several open-source BI tools will be evaluated. Given the complexity of
these tools, the involvement of leading open-source BI providers such as
Pentaho or JasperSoft would allow the Fellow to access valuable
experience and technical expertise. Moreover, it would create an
important case study for current and future large scientific collaborations.
Starting date
Project Introduction and background
ALICE (A Large Ion Collider Experiment) is one of the Large Hadron Collider (LHC) experiments,
designed to study the Physics of strongly interacting matter at extreme energy densities, where the
formation of a new phase of matter, the Quark Gluon Plasma, is expected. After 15 years of
design and installation, and following a series of dedicated sessions in 2008 and 2009 to commission
the different sub-detectors, the online systems and the online-offline interfaces, the ALICE
experiment started in March 2010 to detect and record the first collisions produced by the LHC,
and has been collecting millions of events ever since.
Given the complexity and scale of ALICE, the daily experimental operations involve a large number
of people performing heterogeneous tasks: from data-taking operations in the control room to expert
on-site and remote interventions, from software releases to hardware upgrades. A good
Information System is therefore essential to support the decision-makers and the operational
coordinators, not only in fulfilling the Physics objectives but also in doing so in an efficient way,
thus reducing costs and optimizing resources.
The design, development and deployment of a Business Intelligence (BI) platform as part of the
ALICE Data-Acquisition (DAQ) project constitute the subject of this Marie Curie Fellowship.
Work package Research and Work Plan
The research part of this fellowship will consist of the following phases:
- Identify the needs of the different decision-makers in terms of Information and Knowledge.
- Identify the available sources of information already existing in ALICE that could be relevant to the BI platform.
- Design, implement and deploy an efficient data warehouse repository (a minimal sketch follows below).
- Evaluate, select, deploy and configure an open-source BI suite.
- Following initial usage, validate the system and perform any necessary adjustments.
- Evaluate the effectiveness of using BI platforms in large scientific collaborations.
At the end of each phase, document the performed research accordingly.
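To make the data-warehouse phase concrete, here is a speculative C++ rendering of a tiny star schema for run statistics: one fact record per run (events and data volume) pointing at two dimension records (detector configuration and LHC fill). In a real deployment these would be database tables populated by an extract-transform-load process and queried by the BI suite; every name and number here is invented.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Dimension: which detectors participated in the run (hypothetical).
    struct ConfigDim { int id; std::string detectors; };

    // Dimension: LHC fill context (hypothetical).
    struct FillDim { int id; std::string beamType; };

    // Fact: one row per run, with foreign keys into the dimensions.
    struct RunFact { int runNumber; int configId; int fillId;
                     long events; double gigabytes; };

    int main() {
        std::map<int, ConfigDim> configs = {{1, {1, "TPC+TRD"}}, {2, {2, "TPC"}}};
        std::map<int, FillDim> fills = {{10, {10, "Pb-Pb"}}, {11, {11, "p-p"}}};
        std::vector<RunFact> facts = {{70001, 1, 10, 1200000, 840.0},
                                      {70002, 2, 11, 500000, 95.5}};

        // Two typical BI aggregations over the fact table.
        std::map<std::string, double> volumePerBeam;
        std::map<std::string, long> eventsPerConfig;
        for (const auto& f : facts) {
            volumePerBeam[fills.at(f.fillId).beamType] += f.gigabytes;
            eventsPerConfig[configs.at(f.configId).detectors] += f.events;
        }
        for (const auto& [beam, gb] : volumePerBeam)
            std::cout << beam << ": " << gb << " GB\n";
        for (const auto& [cfg, ev] : eventsPerConfig)
            std::cout << cfg << ": " << ev << " events\n";
        return 0;
    }

In the warehouse itself, each of these aggregations would be a simple GROUP BY over the fact table joined to one dimension.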
Training plan
Year 1
On the job training: Training in the hardware and software environment used by the ALICE DAQ system.
Formal training:
- Training in high-performance database technologies.
- Training in Data Warehouse concepts and practices.
- Training in Business Intelligence concepts and practices.
Year 2
On the job training: Training in communication skills by presenting the project at international conferences.
Formal training: Training in open-source Business Intelligence suite(s).
Year 3
On the job training: Training in communication skills by presenting the project at international conferences.
Formal training: Training in Management and Communication skills.