

[Fwd: 3rd LCB Workshop : Call for Contributions] (Entire LHCb collaboration)

From: John.Harvey@cern.ch
Date: 6/11/99
Time: 10:29:31 AM
Remote Name: 137.138.115.187
Remote User:

Comments

Date: Thu, 10 Jun 1999 17:07:57 +0200
From: Francois Touchard <touchard@in2p3.fr>
Organization: CPPM/IN2P3
To: David Jacobs <David.Jacobs@cern.ch>,
        Fabrizio Gagliardi <Fabrizio.Gagliardi@cern.ch>,
        Francois Etienne <etienne@cppm.in2p3.fr>,
        John Harvey <John.Harvey@cern.ch>,
        Juergen Knobloch <Juergen.Knobloch@cern.ch>,
        Les Robertson <Les.Robertson@cern.ch>,
        Martti Pimia <Martti.Pimia@cern.ch>,
        Mirco Mazzucato <Mirco.Mazzucato@pd.infn.it>,
        Pierre Vande Vyvre <Pierre.Vande.Vyvre@cern.ch>
Subject: 3rd LCB Workshop : Call for Contributions
=====================================================
 3rd LHC Computing Workshop : Call for Contributions
=====================================================
The 3rd LHC Computing Workshop will take place in Marseille (France)
from September 28th to October 1st, 1999. The purpose of this
workshop is to foster discussion between the LHC collaborations and
the IT division on the general strategy to adopt for building and
using the LHC software.
Each session will be organised around an introductory talk given by a
Rapporteur, and a large fraction of each session will be devoted to
discussion. To help the Rapporteur identify and present the main
issues, contributions to the organisation of each session are invited
from the whole community. The Organising Committee has provided some
guidelines:
Event Filter Farms
------------------
Event Filter Farms have not been the subject of a previous workshop.
The community should therefore feel free to suggest any topics to be
addressed. Discussion could focus on the following themes:
- State of the art
- Existing experience
- Ongoing work near LHC
Architecture
------------
1. Progress since Barcelona on terminology, standard references
2. Look at what architectures exist in available systems ( GEANT4, ROOT,
   NOVA, GAUDI, WIRED, BETA, JAS, ...)
 - what architectural styles are evident (client-server, data-centric, ...)
 - what are the critical design criteria (transient/persistent separation,
   use of placeholders for user code, ...)
 - what components have come out of these architectural designs,
   e.g. event model, detector descriptions, ...
 - what types of interface exist (interface standards, e.g. IDL;
   what foundation libraries; etc.)
 - what are the issues / problems
3. Need an architectural model illustrating data types at different
levels of abstraction:
      o primitives (int, float)
      o foundation class libraries (basic domain datatypes)
      o framework class libraries (plotter, algorithm)
 - the goal is to define the interfaces
 - need agreement on what data types, exception handling, etc. one
   is allowed to use in these interfaces
 - this implies agreement (i.e. organisation and management) on what
   foundation class libraries we use in HEP (e.g. CLHEP, STL, math).
   Can we get global agreement on the standard? What mechanism do we
   need for managing its evolution?
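As a minimal illustration of these levels of abstraction, the following sketch shows a primitive, a foundation datatype, and a framework-level algorithm interface (all class and method names here are hypothetical, invented for illustration, not taken from GAUDI, ROOT or any of the systems listed above):

```python
from dataclasses import dataclass
from typing import Protocol

# Level 1: primitives (int, float) -- built into the language.

# Level 2: foundation class library -- a basic domain datatype.
@dataclass
class FourVector:
    px: float
    py: float
    pz: float
    e: float

    def mass2(self) -> float:
        # Invariant mass squared, E^2 - |p|^2.
        return self.e**2 - (self.px**2 + self.py**2 + self.pz**2)

# Level 3: framework class library -- an algorithm interface.
# Agreeing on which types (FourVector, exceptions, ...) may appear
# in interfaces like this one is exactly the point at issue above.
class Algorithm(Protocol):
    def execute(self, event: list) -> None: ...
```

The framework layer only ever names types from the foundation layer, which is why community-wide agreement on the foundation libraries becomes a prerequisite for shared interfaces.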
4. Technology
 - has there been any success in using standard integration
   technologies, e.g. CORBA, DCOM?
 - what problems are there in mixing languages?
Persistency
-----------
 - Non-Technical Risks
        o Commercial vs. GPL (GNU General Public License) vs. home-grown
          software: benefits and risks
 - Is there a place for a DBMS in HEP?
        o meta-data
        o raw data
        o reconstructed data
        o user data
 - Persistent objects vs. object serialisation vs. traditional I/O
        o What kind of I/O best suits HEP applications?
        o If the meta-data reside in an ODBMS, why not store the data
          there as well?
        o What does a user really want in memory: the original object
          or a copy?
        o Are HEP object models really so simple that navigation among
          persistent objects is not an issue?
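To make the distinction concrete, here is a small sketch (purely illustrative, in Python) contrasting object serialisation with traditional record-based I/O; a true persistent-object system would instead hand back a managed object that is faulted in from the store, which is where the original-vs-copy question arises:

```python
import pickle
import struct

event = {"run": 42, "evt": 7, "energy": 91.2}

# Object serialisation: the whole object graph is written out and a
# *copy* is reconstructed on read -- never the original object.
blob = pickle.dumps(event)
copy = pickle.loads(blob)
assert copy == event and copy is not event

# Traditional I/O: fixed-layout records; no object identity at all,
# and the schema lives implicitly in the format string.
record = struct.pack("<iif", event["run"], event["evt"], event["energy"])
run, evt, energy = struct.unpack("<iif", record)
```

Note that the record-based read returns only values, so any navigation between related objects has to be reimplemented by hand on top of it.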
 - Experience with the present products: ODBMS (Objectivity/DB)
        o Performance: local, remote
        o Schema evolution
        o Transient memory copies of disk-resident persistent objects
        o Row-wise vs. column-wise storage of object collections
        o Reliability: on the LAN; on the WAN
        o Maintainability and usability: will Objy survive (from a
          technical point of view) long enough?
        o Production service: experience of the CERN Objy production
          service
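The row-wise versus column-wise question can be illustrated with a toy object collection (illustrative only; clustering in a real ODBMS involves far more than this):

```python
# Toy collection of hit objects, each with (channel, charge) fields.
hits = [(1, 0.5), (2, 1.5), (3, 2.5)]

# Row-wise: each object's fields are stored together -- cheap to
# read one whole object, wasteful when scanning a single attribute.
row_wise = [list(h) for h in hits]

# Column-wise: each attribute is stored contiguously -- an analysis
# that only needs 'charge' touches one array instead of every field
# of every object.
column_wise = {
    "channel": [h[0] for h in hits],
    "charge": [h[1] for h in hits],
}
```

Which layout wins depends on the access pattern: full-event reconstruction favours row-wise, attribute-scanning analysis favours column-wise.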
 - Experience with the present products (others)
        o ROOT I/O
        o other traditional file-based I/O systems
        o wrapped RDBMS
        o what else?
 - One size fits all?
        o What should a light/flexible POM (POM == Persistent Object
          Manager) provide, and what should it NOT provide?
        o Can we find/produce a flexible POM able to manage at the
          same time the data of a large experiment and the histograms
          of a theoretician?
        o Would a light POM not reintroduce access techniques for user
          data different from those for experiment data, as in the
          past (ntuples vs. DSTs)?
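A "light" POM of the kind asked about might expose no more than an interface like the following (hypothetical names throughout; this is a sketch of the scope question, not a proposed design):

```python
from abc import ABC, abstractmethod

class PersistentObjectManager(ABC):
    """Hypothetical minimal POM interface: the debate above is which
    methods such a manager should (and should NOT) provide."""

    @abstractmethod
    def store(self, key: str, obj: object) -> None: ...

    @abstractmethod
    def load(self, key: str) -> object: ...

    # Deliberately absent: navigation between persistent objects,
    # schema evolution, distributed locking -- the "heavy" features
    # whose inclusion or exclusion defines light vs. full ODBMS.

class DictPOM(PersistentObjectManager):
    # In-memory stand-in: enough for a theoretician's histograms,
    # clearly not for the raw data of a large experiment.
    def __init__(self):
        self._store = {}
    def store(self, key, obj):
        self._store[key] = obj
    def load(self, key):
        return self._store[key]
```

The same two-method interface could in principle be backed by an ODBMS for experiment data, which is one way to avoid the old ntuple-vs-DST split in access techniques.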
 - Towards the 2001 milestone: "final choice of the ODBMS"
        o Do we understand all the major technical and non-technical
          aspects of such a choice?
        o Do we need further R&D to assess risks not yet fully
          evaluated?
References:
     CHEP'98 panel on databases:
     http://www.hep.net/chep98/paper98/Plenary/Database_Panel/
     RD45 documents and white papers:
     http://wwwinfo.cern.ch/asd/rd45/reports.htm
     http://wwwinfo.cern.ch/asd/rd45/recommendations.htm
Simulation
----------
 - Managing the transition to GEANT4
        o Use of GEANT3 geometry
        o Training
 - Geometry
        o first experience with G4 geometry
        o can the complexity of LHC detectors be handled?
        o what is the use of starting from CAD?
        o common geometry database for simulation, reconstruction, 
          graphics, ...?
        o what can be done in common between experiments?
 - Readout
        o what can be done in common?
        o pile-up
 - Generators
        o Current activities
        o Interface definition
 - Physics
        o Electromagnetic physics
        o Hadronic interactions: comparison with other packages and
          with test beams
        o Radiation background
        o Are the requirements of the experiments met?
 - Fast simulation
        o Various levels of fast simulation
        o Requirements
        o First experience
 - Performance
        o CPU and memory requirements
        o Reliability, bugs, reaction to bug reports
 - Utilities
        o Graphics
        o Geometry input
        o Persistency
Data Analysis
-------------
 - Data quality control
        o Debugging and up-to-date online access to DAQ problems (data
          recording problems, data integrity, empty events, aborted
          runs, ...)
        o Debugging of detector failures by cross-checking the data
          quality across all possible detectors
        o Physics analysis on suitable channels to check the physics
          quality of the events (Z0, J/Psi, ...)
        o Hotline analysis (can also be used to verify the detector
          data quality)
 - Data Access and Analysis
        o Strategy on resources shared between central sites
          (experimental site, computer centres) and local sites
        o Certification and up-to-date mechanisms for the analysis
          code and data across the collaboration's analysis sites
        o Network QoS and availability (in particular during
          good-luminosity periods)
        o Data model for analysis / data format
        o Data classification tools to extract sub-samples and physics
          quantities
        o Recording of data and access methods that avoid the entropy
          of conditional code
 - Data Analysis components
        o Visualisation toolkit (graphics & histogramming)
        o Maths library 
        o Data analysis language and "macros" 
        o Data storage and access
 - Support to users and maintenance
Technology Tracking
-------------------
 - Networking: 
	o local area networking 
	o wide area networking 
 - Processors, memories and computer systems 
 - Secondary (random access) storage
	o magnetic disks
	o optical disks
	o holographic storage
 - Tertiary (mass, sequential access) storage
	o tapes
	o robotics
 - Storage management systems
	o (distributed) file systems
	o network storage
	o mass storage
 - Computer architectures, systems interconnects, scalable clusters 

Distributed Computing and Regional Centres
------------------------------------------
 - Analysis and data models
        o The situation and the specific requirements of the different
          experiments
        o How distribution affects the analysis model
        o Priority issues and large-scale behaviour of the system
 - Database system performance and perspective
        o Federated databases and Regional Centres
        o Independence from the features of a specific DB product
        o Clustering, tape access and other issues affecting efficient
          use of the analysis resources
 - Infrastructures for Regional Centres
        o The network connectivity
        o The requirements of the experiments
        o The possible sites and their planning
 - Ideas for continuing common work between the LHC experiments

We would like everybody to feel free to make any suggestion concerning
the organisation of the sessions and the particular topics to be
addressed within the proposed framework. Please use our web site at
http://marcpl2.in2p3.fr/LCB/ to send us your comments, no later than
July 1st, 1999. In order to reach the largest possible community,
please forward this message to anybody who could be interested.