
Chapter 18
Accessing SICB facilities
18.1 Overview

To ease the transition towards C++ based code, we have implemented a number of features in the Gaudi framework whose purpose is to allow access to facilities that exist within SICB but not yet within the framework itself. Gaudi can also read data produced with SICBMC or SICBDST.

In this chapter we cover: staging data from DSTs, converting SICB data for use by Gaudi algorithms, accessing the magnetic field map, accessing the SICB geometry description, the use of the SUINIT, SUANAL and SULAST routines from within the Gaudi framework and handling event pileup and spillover in Gaudi. We also explain how to modify the size of the ZEBRA COMMON block.

When using the geometry and magnetic field descriptions described here, you should not forget that they are a temporary solution and that they will disappear at some point in the future. Gaudi already includes the machinery to provide any algorithm with the detector data stored in XML format. Investing some time now to describe your detector in XML may, in many cases, be more convenient than using the old SICB routines to access detector description data. If, for any reason, you have to use the old tools, use them only to populate some pre-defined class which can then be used in your algorithms. In this way, when you decide to move to the new detector description tools in Gaudi, the required changes in your code will be confined to the parts which access the geometry.

18.2 Reading tapes

There are three ways to specify a SICB input data file in Gaudi:

1. Read one or more files on disk by setting the appropriate option of the Event Selector in the job options file:
EventSelector.Input = {"FILE = '$SITEROOT/lhcb/data/mc/sicb_mbias_v233_10ev.dst2,  $SITEROOT/lhcb/data/mc/sicb_bpipi_v233_100ev.dst1'"};

2. Specify a data set by giving the job identification numbers in the book-keeping data base, as was done in SICB. The framework will query the ORACLE database in order to find the corresponding tape and then stage it in. The access to the database is through the oraweb01 web server at CERN, as for the mcstagein script.

The job IDs are specified in the job options file as follows:
EventSelector.Input = { "JOBID='16434, 16435'" };

3. Read one or more tape files, even if they are not in the LHCb production book-keeping database, for example test beam data. The user should tell the Event Selector which tape(s) and file(s) to read in the following way:
EventSelector.Input = {"TAPE='Y21221-7, Y21223-24'";

The format is <Volume serial number of the tape>-<File sequence number>.

When a Gaudi job requires more than one tape to be staged, the program waits only for the first tape. The remaining tapes are staged while the first file is being processed. Once the program has finished reading the first file, it checks whether the next file is ready before continuing; if the tape is not yet staged, the program prints a message and waits until the file is available in the staging disk pools.

Skipping events

When reading SICB data, one may wish to skip some events. This can be done by setting the property EventSelector.FirstEvent. For example, to start processing at event 109, add the following job option:
EventSelector.FirstEvent = 109;

The algorithms will only be run from that event onwards. Note that Gaudi will take a few seconds to reach the requested event, as it has to read all the records in the Zebra file preceding the one requested.

18.3 Populating the GAUDI transient data store: SICB Converters
18.3.1 General considerations

Access to the SICB data sets is basically via wrappers to the FORTRAN code. A complete event is read into the Zebra common block, and then the conversion to transient objects is done on request by specialised converters.

As mentioned in Chapter 14, a converter must implement the IConverter interface, by deriving from a specific base class. In this way any actions which are common to all converters of a specific technology may be implemented in a single place.

In the following section we give detailed instructions on how to implement converters within the SicbCnv package. These are intended primarily for Gaudi developers themselves.

18.3.2 Implementing converters in the SicbCnv package

SICB converters are available for reading most SICB banks; the full list of converted banks is available at http://cern.ch/lhcb-comp/Support/html/ConvertedSICBBanks.htm. Writing back into persistent storage (ZEBRA files) is possible for a few banks.

GAUDI DataObjects are typically of two types:

· Simple classes, which contain the data of a single SICB bank. These classes are of type DataObject. An example is the Event class containing the data of the PASS bank.
· Container classes, which contain data from multiple SICB banks. An example is the ObjectVector<MCParticle>, which contains Monte-Carlo particles with data from the ATMC bank.
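Both types are retrieved from the transient event store in the same way. The fragment below is a minimal sketch of such a retrieval from within an algorithm's execute() method; the store path "/Event/MC/MCParticles" is assumed here purely for illustration:

// Requires GaudiKernel/SmartDataPtr.h and the MCParticle header.
SmartDataPtr<ObjectVector<MCParticle> > particles( eventSvc(), "/Event/MC/MCParticles" );
if( particles ) {
  log << MSG::INFO << "Read " << particles->size() << " MCParticles" << endreq;
}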

Template files exist in the directory $LHCBSOFT/SicbCnv/<version>/doc for both types of converters, to ease the creation of user converters:

· SicbCnv.Class.Template.cpp and SicbCnv.Class.Template.h to be used when writing a converter for a single class.
· SicbCnv.Container.Template.cpp and SicbCnv.Container.Template.h to be used when writing a converter for an object container.

If you intend to write your own SICB converter, follow the instructions below:

· Copy SicbCnv.xxxx.Template.h to Sicb<your-class>Cnv.h, where <your-class> is the name of your persistent class.
· Copy SicbCnv.xxxx.Template.cpp to Sicb<your-class>Cnv.cpp
· Now customize the header and the implementation file
· Follow TODO instructions in Sicb<your-class>Cnv.h
· Follow TODO instructions in Sicb<your-class>Cnv.cpp
· The converter factory must be made known to the system. This depends on the linking mechanism: if the converter is linked into the executable as an object file, no action is necessary. Usually, however, the converter code resides in a shared or archive library. In this case the library must have an initialisation routine which creates an artificial reference to the converter factory, forcing the linker to include the code in the executable (a sketch is shown after this list). An example of creating such a reference can be found in the file $LHCBSOFT/SicbCnv/<version>/SicbCnv/SicbCnvDll/SicbCnv_load.cpp. The convention for these initialisation files is the same in other packages: simply replace the string "SicbCnv" with the name of your package.
· Compile, link, debug.
· Once the converter works, remove unnecessary TODO comments.
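The fragment below sketches the artificial-reference idiom; the converter name SicbMyClassCnv and its factory are hypothetical, and the authoritative convention is the one used in SicbCnv_load.cpp itself:

// The factory object is created in Sicb<your-class>Cnv.cpp:
extern const ICnvFactory& SicbMyClassCnvFactory;

void SicbCnv_loadMyClass() {
  // Touching the factory creates a reference the linker cannot strip out:
  SicbMyClassCnvFactory.addRef();
}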
18.3.3 Back Converters

In some cases it may be necessary to convert event data from the Gaudi transient event data store back to SICB banks in the ZEBRA COMMON block, to make these data available to FORTRAN algorithms. This "back" conversion is done by dedicated converter algorithms. An example is the CaloDigitToSicbConverter algorithm in the SICB/CaloSicbCnv package.
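The overall shape of such an algorithm is sketched below; the class name, the store path and the bank-filling call are all hypothetical, and the real example to study is CaloDigitToSicbConverter:

StatusCode MyBackCnvAlg::execute() {
  // Retrieve the transient objects to be converted back (path assumed):
  SmartDataPtr<ObjectVector<CaloDigit> > digits( eventSvc(), "/Event/Raw/Ecal/Digits" );
  if( 0 == digits ) return StatusCode::FAILURE;
  // Fill the corresponding SICB bank in the ZEBRA COMMON block, e.g. via a
  // FORTRAN wrapper (purely illustrative):
  // SicbFortran::fillCaloBank( *digits );
  return StatusCode::SUCCESS;
}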

18.4 Access to the Magnetic Field

The magnetic field map will be accessible in the future via the transient detector store. For the time being, as this is not implemented and as access to the magnetic field has been requested, we have provided a magnetic field service. Again this is effectively just a wrapper which uses SICB routines to read the information from a .cdf file.

The location of the field.cdf file is given by the standard.stream file, which is read in by a SICB routine called from Gaudi. This file is in the data base area, $LHCBDBASE/standard.stream in AFS; for each version, the file used in the production is read in. The location of the standard.stream file is taken from an environment variable, as in normal SICB operation.

To use the Magnetic field service one should modify the jobOptions.txt file to include the following:
ApplicationMgr.ExtSvc = { "MagneticFieldSvc"};

Any algorithm which requires the use of the service makes a request via the service() method of the Algorithm base class:
IMagneticFieldSvc* pIMF= 0;
StatusCode sc = service("MagneticFieldSvc", pIMF );

The service provides a method:
StatusCode fieldVector(HepPoint3D& Pos, HepVector3D& field)

which gives a magnetic field vector at a given point in space, for example:
HepPoint3D P(10.*cm, 10.*cm, 120.*cm);
HepVector3D B;
pIMF->fieldVector( P, B );

The magnetic field service uses a new version of the SICB routine GUFLD. In this new version the best available description of the magnet geometry and field is assumed, in order to eliminate dependencies on other parts of SICB. Technically, in SICB this corresponds to:
IMAGLEV = IUVERS('GEOM','MAGN') = 4
IFLDLEV = IUVERS('GEOM','MFLD') = 4

These two parameters have been fixed to 4 in the production since version 111 of SICB, a few months before the Technical Proposal. This therefore seems a reasonable restriction until the field map is provided in the detector description database.

For an example of the use of this service see the sub-algorithm readMagField in the example FieldGeom distributed with the release.

18.5 Accessing the SICB detector geometry from Gaudi

As discussed previously, the detector geometry will be included along with the field map in the XML detector description database. Currently only a part of the LHCb detector is in the new database. However, the detector geometry used in the production of the data can be accessed by calling a function in the SicbFortran namespace (this function is just a wrapper for the FORTRAN function of similar name):
void SicbFortran::utdget(const std::string& a1, const std::string& a2,
                         int& nParam, int* data);

nParam should be set to the number of data words required; on return from the function it will contain the number of data words actually copied into the array data. The first string contains the name of the sub-detector whose geometry is being requested, and the second string is a list of options:

'V' - version;
'G' - geometry description parameters (default);
'C' - calculated geometry (not in *.cdf);
'H' - hit description parameters;
'D' - digitization parameters;
'R' - reconstruction parameters;
'F' - floating point parameters (default);
'I' - integer parameters;
'N' - take parameters from the *.cdf file;
'L' - ZEBRA pointer to the beginning of the parameter storage is returned in IARRAY(1).

An algorithm requiring this access should include the header file:
#include "SicbCnv/TopLevel/SicbFortran.h"

and call it as follows:
int mpar = 300;     // number of data words requested
float rpar[300];

SicbFortran::utdget("WDRF", "D", mpar, (int *) rpar);
for( int j = 0; j < mpar; j++ ) {
  log << MSG::INFO << " wdpar(" << j+1 << ") = " << rpar[j] << endreq;
}

Note that the data returned by the function are written into an integer array. However, as in the code fragment above, floating point numbers can also be read, by casting a float array to int*.
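Since the second argument is a list of options, options can be combined. A hypothetical fragment reading the digitization parameters of the same bank as integers ('D' together with 'I') might look like this:

int ipar[300];
int npar = 300;
SicbFortran::utdget("WDRF", "DI", npar, ipar);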

Note that the geometry returned by this function is the one which was used in the production of the data, and not the one in the current version of the .cdf files. Only if the option 'N' is specified are the .cdf files read in from the standard location. To be able to use the array of parameters returned, one has to know in advance the organization of these data in the cdf file, since the data are stored in a plain array and not in a common block with named variables.

The sub-algorithm readTrackerGeom in the example FieldGeom extracts and writes out some digitization and geometry parameters of the outer tracker.

18.6 Using FORTRAN code in Gaudi

Existing FORTRAN code can be used within a Gaudi application by using the FortranAlgorithm class. This is a standard Gaudi algorithm which calls the FORTRAN routines SUINIT in the initialize() method, SUANAL in the execute() method for each event, and SULAST in finalize(). Implementing these three routines allows you to write code in FORTRAN and have it called from within Gaudi, and in particular to import routines already written for SICB.
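To schedule it, add the algorithm to the list of top algorithms in the job options file, for example (assuming the default instance name):
ApplicationMgr.TopAlg = { "FortranAlgorithm" };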

Note, however, that there are some points that should be kept in mind when importing SICB code into Gaudi. The following list is far from complete, but we hope that it will help anyone using the FortranAlgorithm. It may be updated in the future with users' experiences and our own findings.

· Gaudi is linked with only two sub-packages of SICB: Finclude and Futio. This means that taking some code from SICB and running it successfully in Gaudi will not always be straightforward. Every case will have to be studied separately. In some cases you may find that the effort required to make the code compatible with Gaudi is comparable to writing it directly in C++; for other pieces of code the integration into Gaudi may be very simple. The difficulties will come mainly from dependencies of the code you want to include on other parts of SICB. For instance, there may be common blocks which your code needs but which are never initialized.
· As most of SICB is not executed, you will have either to include and execute the required initialization or to try to eliminate those dependencies.
· Gaudi uses SICB to access the data in the DSTs, to extract the information from the banks and to convert the data into the appropriate classes. This needs some initialization, but not every SICB initialization step is followed. The standard.stream file and setup.cdf are read in from the standard locations, so the program will have access to the data in the cdf files which were used to produce the DST.
· Sicb.dat is not read in. If you need to change the running conditions of your program, do so using the jobOptions.txt file, or write your own cards file and read it in SUINIT.
· In Finclude you will find all the NAME_BANK.INC and NAME_FUNC.INC include files. This means that you have access to the bank data with the usual utilities provided by SICB. For example, to access the momentum of the reconstructed track one can use (as in SICB):
P = AXTK$P(NOBJ)

· Futio includes most of the UT* and UB routines which can therefore be used by FORTRAN code within Gaudi.
· Initialize the histogram files yourself in SUINIT. Gaudi initializes the histogram service but this can be accessed only from C++ code.
18.7 Handling pile-up in Gaudi

In this section we explain the pile-up structure implemented in the Gaudi framework (see Figure 18.1).

Pile-up in Gaudi is performed by a pile-up algorithm; an example (PileUpAlg) can be found in the package SicbCnv. The pile-up algorithm creates a second instance of the event selector, which has to be configured in the job options file. The leading event is read in via the EventSelector created by the ApplicationMgr; the piled-up events are read in via the second instance, created by the pile-up algorithm. There are two different iterators, one for each EventSelector instance, which loop over these two data streams.

When reading the ZEBRA files produced by SICB the events are merged at the level of the ZEBRA common blocks by a call to the Futio routine REMERGE. Every C++ algorithm requesting data from one of the converted banks will get the merged data. Every FORTRAN algorithm using the SICB banks will also read the merged data as SICBDST does.

PileUpAlg must be the first algorithm called (unless spillover is also enabled, in which case it should be the second algorithm, after the spillover algorithm); other algorithms will then access the merged events when they retrieve data from the store. The pile-up algorithm controls the number of events to be piled up onto each leading event. PileUpAlg uses a C++ version of the SICB routine RELUMI to obtain the number of pile-up events as a function of the luminosity and some other beam parameters; these parameters are currently read in from the beam.cdf file. The C++ version of RELUMI uses the random number service in Gaudi. Other implementations of the algorithm, for instance one returning a fixed number of pile-up events every time, may be provided if they are needed.
Figure 18.1 Pile-up in Gaudi

In the future, the two event selectors will use two different data services to access different data stores. There could then be several different pile-up algorithms using a pile-up service provided in the framework, which would include all the common functionality needed by the different pile-up algorithms. The current implementation is very dependent on SICB and does not use any pile-up service.

The following job options are necessary to instantiate the current implementation of the pile-up structure. First, tell the ApplicationMgr to create a second SicbEventSelector, called PileUpSelector:
ApplicationMgr.ExtSvc += { "SicbEventSelector/PileUpSelector" };

The application manager must also be told to run the pile-up algorithm, and the user has to configure how this algorithm works. That configuration depends on the concrete implementation of the algorithm. In the current PileUpAlg the user can select between two different implementations of obtaining the number of pile-up events, for signal and for minimum bias leading events. To select pile-up for signal or minimum bias, use the pile-up algorithm property "PileUpMode", which can be set to LUMISIGNAL or LUMIMINBIAS. In both cases the number of pile-up events depends on the luminosity:
ApplicationMgr.TopAlg = { "PileUpAlg","Alg1",Alg2,...};
PileUpAlg.PileUpMode = "LUMISIGNAL";

Finally, the second event selector should be configured: the input data source for the pile-up events has to be defined with the PileUpSelector.JobInput property. This property accepts JOBIDs, FILE names or TAPEs (note that the format of this option differs from the format explained in Section 18.2 for the property of the "main" event selector):
PileUpSelector.JobInput = "JOBID 12933";

18.8 Handling SpillOver in Gaudi

It is also possible to read in additional (spillover) events. In order to switch on spillover you should include in your job options the lines shown in Listing 4.
Listing 4 Job options for spillover
1: //##############################################################
2: // Options file for adding spillover to Brunel
3: //==============================================================
4: ApplicationMgr.TopAlg = { "SpillOverAlg" };
5: SicbEventCnvSvc.enableSpillover = true;
6: SpillOverAlg.SpillOverMode = "LUMI";
7: // Enable next two lines to modify number of spillover events.
8: // Default is Prev = 1, Next = 0; Maximum is Prev=2, Next=2
9: // SpillOverAlg.SpillOverPrev = 2;
10: // SpillOverAlg.SpillOverNext = 1;
Note that SpillOverAlg must be executed ahead of any other algorithm (including ahead of pile-up), so Line 4 must occur in the job options file ahead of any other TopAlg declaration.

In the example shown, spillover is included and enabled, and only one additional event is read in by default, corresponding to the previous beam crossing. Up to a total of 4 spillover events can be read in by changing the values of the SpillOverAlg.SpillOverPrev and SpillOverAlg.SpillOverNext job options. The spillover events are read into additional branches of the LHCb data model, as was shown in Table 8.2. Note that the events are merely made available in the transient event data store of Gaudi. None of the event information is modified; in particular, the time of flight information of the hits is not modified. The labels Prev, Next etc. are for convenience only; it is up to the algorithms using this information to add appropriate timing offsets when required.
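For example, an algorithm can pick up the Monte Carlo particles of the previous crossing from the corresponding branch. This is a sketch only; the exact store path below is assumed for illustration, following the branch naming of Table 8.2:

SmartDataPtr<ObjectVector<MCParticle> > prev( eventSvc(), "/Event/Prev/MC/MCParticles" );
if( prev ) {
  // Remember: hit times in this branch carry no offset for the Prev crossing.
}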

18.8.1 Limitations

The current implementation of spillover has the following limitations:

· It is not possible to add pile-up to the spillover events within Brunel. If required, the events read into the spillover input stream should come from a file of previously piled-up events.
· The probability of having a pile-up event in a given beam crossing depends on the instantaneous luminosity for that event, which is taken at random from the luminosity distribution of the fill. Currently, independent values of the instantaneous luminosity are taken for each of the pile-up events and for the spillover. In a future version the same instantaneous luminosity will be used for all sub-events of a given main event.
· Since spillover events will be combined by digitisers into a single Raw event, only the /MC part of the event is provided. Furthermore, only those parts of the /MC subevent which have a converter in the SicbCnv package are available.
18.9 Increasing the size of the ZEBRA COMMON block

The default Gaudi implementation initializes the ZEBRA COMMON block to a rather small size (3M words), in order to avoid a large memory overhead for applications that need to access SICB facilities but do not read large events. This size is, however, insufficient to read all but the simplest of events, so data processing applications need to override it. One way to do this is to provide a private version of the GETZEBRASIZE routine, to be linked ahead of the one in the SicbCnv library. An example is shown in Listing 18.1, where the ZEBRA COMMON size is set to 12M words.
Listing 18.1 Code to redefine the size of the ZEBRA COMMON block
SUBROUTINE GETZEBRASIZE(size)

IMPLICIT NONE
#define NWPAW 1000000
#define NWGEAN 12000000
REAL GSTORE
COMMON / GCBANK / GSTORE( NWGEAN )
REAL HPAW
COMMON / PAWC / HPAW(NWPAW)

INTEGER size

size = NWGEAN

RETURN
END


