

Draft answer to the LHCC referees (Trigger/DAQ/Computing)

From:
Date: 5/1/98
Time: 10:10:12 PM
Remote Name: 137.138.245.142

Comments

Dear LHCb colleagues,

Here is the outline of the answers to be presented to the LHCC referees at the meeting next Wednesday. Please e-mail any comments to Ueli Straumann and myself.

Yours, Tatsuya
---------------------
===========================================================================

> Trigger/DAQ:
> from Brian Foster
> 1) More information on Timing and Fast Control requirements

The TTC system has always been identified as a clear candidate for common development between the LHC experiments. When the design began the LHCb proposal did not exist and hence the development was driven by the requirements of Atlas and CMS. However in the past year LHCb has discussed with the RD12 TTC designers (B Taylor et al) the special requirements of LHCb. These discussions are continuing in the spirit that the 'standard' TTC system must fulfill the requirements of all the LHC experiments (as requested by the LHCC).

The requirements for all the experiments are similar in terms of clock frequency, jitter on the clock etc.

We have identified the two main requirements on a TTC system that differ for LHCb compared to Atlas or CMS:

a) A Level-0 accept rate of 1 MHz (as compared to <100 kHz for Atlas and CMS)

b) The necessity of transmitting a further level of trigger decisions (Level-1) to the detector electronics at a rate of 1 MHz.

Point a) should not be a problem for the TTC system, since for all experiments it transmits the Level-0 decision at 40 MHz. The high Level-0 trigger rate makes the event-counter features of the TTC receiver chip unusable for us, since reading the counter out would mean the loss of 1 or 2 clock cycles. However, the event counter can be implemented externally to the TTC receiver chip.

Point b) needed more discussion, since it was not foreseen in the design and implementation of the RD-12 system. We have studied the problems and are confident that we can transmit the Level-1 decisions at an average frequency of 1 MHz using the broadcast feature of Channel B of the RD-12 system. This channel has a maximum broadcast transmission frequency of 1.25 MHz. Clearly this imposes a stringent limitation on the average Level-0 trigger rate. However, trigger rates significantly higher than 1 MHz would immediately also cause problems elsewhere, e.g. in the readout of the Level-0 de-randomizer buffers. (Note: Instantaneous Level-0 trigger rates above 1.25 MHz will lead to a higher latency of the Level-1 trigger, which has to be absorbed in the Level-1 buffers.)
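The interplay between a 1 MHz average Level-0 rate and the 1.25 MHz maximum broadcast rate can be illustrated with a toy discrete-time queue simulation (a sketch for illustration only; the function and its parameters are ours and are not part of the RD-12 system):

```python
import random

def simulate_broadcast_queue(n_crossings, p_trigger, service_ticks, seed=1):
    """Toy model of the Channel-B broadcast queue.

    Each tick is one 25 ns bunch crossing.  A Level-1 decision is queued
    with probability p_trigger per crossing; the channel sends one
    broadcast every `service_ticks` crossings (32 ticks = 0.8 us, i.e.
    the 1.25 MHz maximum broadcast rate).  Returns the maximum queue
    depth seen, a proxy for the extra Level-1 latency to be absorbed
    in the Level-1 buffers."""
    random.seed(seed)
    queue = 0
    max_depth = 0
    for tick in range(n_crossings):
        if random.random() < p_trigger:
            queue += 1                     # a Level-1 decision to transmit
        if tick % service_ticks == 0 and queue > 0:
            queue -= 1                     # one broadcast sent
        max_depth = max(max_depth, queue)
    return max_depth
```

With p_trigger = 0.025 (a 1 MHz average on 40 MHz crossings) the utilisation is 0.8 and the backlog stays shallow; pushing the average rate to 1.4 MHz drives the utilisation above unity and the backlog, i.e. the extra latency, grows without bound.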

Having studied the RD-12 implementation of the TTC system, we have concluded that the system is usable for LHCb. Our comments on the possibility of configuring the chip without the TTC system were taken into account when the TTC receiver chip was recently re-implemented, and it thus serves the LHCb needs well.

One concern is however not yet resolved, namely the question of transmission errors within the RD-12 system. This is a common problem for all LHC experiments and is being studied in that spirit.

Reference:
----------
LHCb 98-031, DAQ, "Timing and trigger distribution in LHCb", 9.2.1998

---------------------------------------------------------------------------

> 2) Luminosity measurement, how it is done, whether this requires any
> particular special DAQ capabilities, such as rate? Or time-stamping? If
> there will be some sort of forward-backward Roman pots for elastic
> scattering, what are the rates? What is the accuracy that can be
> obtained? How does it compare with whatever the machine will be using
> for lumi optimisation?

The luminosity is measured by counting the number of interactions per bunch crossing seen in the vertex detector. To be independent of trigger efficiencies, it is planned to use the pile-up system to count the number of vertices in each bunch crossing. A study shows that this allows an accuracy of a few per mille to be reached on relative luminosity changes. This option does not require any significant hardware addition to the system presented in the Technical Proposal and will be permanently active to monitor the relative luminosity accurately.

By combining data from the pile-up system and the full vertex detector, we estimate that the absolute total cross section can be determined to an accuracy of better than 3%, by analysing the Poissonian distribution of the number of interactions observed per bunch crossing. The uncertainty comes from the extrapolation to the optical point and from effects of diffractive events. The CDF (1994) analysis reached an accuracy of 1.5%, and since our pseudorapidity acceptance covers a region even closer to the beam, we believe our estimate is conservative.
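The simplest instance of this Poisson analysis is zero-counting: the fraction of empty bunch crossings fixes the mean number of interactions, and hence the relative luminosity. A minimal sketch (illustrative only; the actual analysis fits the full distribution):

```python
import math

def mu_from_empty_fraction(n_empty, n_crossings):
    """Mean number of interactions per bunch crossing, estimated from
    the fraction of crossings with zero observed interactions.  Assumes
    the count is Poisson distributed: P(0) = exp(-mu) => mu = -ln(P(0)).
    Since the luminosity is proportional to mu, the ratio of two such
    estimates tracks relative luminosity changes."""
    return -math.log(n_empty / n_crossings)
```

For example, if 6065 out of 10000 crossings show no interaction, P(0) ≈ 0.6065 and the estimated mean is mu ≈ 0.5 interactions per crossing.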

-----------------------------------------------------------------------

> 3) Related to the above, how to estimate dead-times as a function of
> particular triggers? Or will there simply be one overall dead-time
> number? Will it be averaged, or related to actual currents in particular
> bunches? Will latencies/rates for each trigger be a) available b)
> stored?

In general the system will be run such that the deadtime is very small (below 1%), since in this new generation of experiments it is no longer necessary to stop the pipelines for readout of event data. Triggering of events can thus continue even during the readout of earlier events. HERA-B is the first experiment to run such a system, and experience gained there has influenced, and will certainly continue to influence, many details of the LHCb trigger and data acquisition.

However, due to dataflow limitations (CPU power and buffer size) and due to the finite size of the derandomizer buffers (see proposal page 34), deadtime can occur if any of the buffers in the system become full. In this case triggering of the entire experiment will be blocked by the readout supervisor until enough buffer space becomes available again or an operator intervenes (see also the answer to question B.8 and LHCb 98-029, "DAQ Implementation Studies").

Such occurrences, the reasons for them, and the total deadtime they cause will be recorded in detail. However, since this depends on the history, the last event triggered before the deadtime occurs is usually not significant in any respect, and for the same reason correlations with particular bunch crossings are unlikely.

The latency of a given event, more precisely the CPU time needed in trigger levels L1 and higher for each event, is however very helpful for understanding the system behaviour and will be recorded carefully.

> What is the strategy envisaged for changing trigger conditions
> during a fill, or over longer periods?

In general trigger conditions should not be changed during a fill. Defining trigger conditions over longer periods will be done in close collaboration between the physics coordinator and the trigger coordinator. All changes to the systems will be automatically monitored.

> What is the strategy for ensuring
> that all triggers have enough redundancy that other independent triggers
> are available to measure their efficiency?

Level-0 triggers are redundant to a certain extent, since all the channels are also triggered partly by the decay products of the other b quark. This allows cross-checks of efficiencies. In Level 1 the tracking and vertex triggers can be run in parallel, allowing efficiencies to be determined from the data, since these two triggers work on rather independent quantities.

To monitor the inefficiency of the pile-up veto system, a small fraction of events will be triggered without the veto condition, allowing the system to be checked off-line.

Random triggers will be taken at a low rate to monitor off-line the detectors and the overall trigger system performance and stability.

----------------------------------------------------------------------

> 4) Related to the above, how to deal with "satellite" bunches in the
> machine if they exist? This is a significant problem at HERA, where the
> proton beam can have as much as 10% of its intensity in the next rf
> bucket. In principle, the luminosity measurement/experimental trigger
> will react very differently to events from these satellites.

Due to the 2.5 ns RF structure of the LHC, satellite bunches will occur at a distance of 75 cm from the main bunch. If such a satellite interacts with a main bunch of the opposite beam, the nearest satellite interaction point will be at 75/2 = 37.5 cm from the nominal interaction point. At HERA the proton satellite intensities are below 1% in normal operation; however, in cases where the filling timing adjustments are not optimal, values in excess of 10% have been observed. The LHC experts predict [ref] satellite capture rates normally of the order of a few per mille, the luminosity of the satellite/main interactions being suppressed by better than 10^-4.
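The distances quoted above follow directly from the RF bucket spacing; a quick numerical check (illustrative only, constants are standard values):

```python
# Satellite-bunch geometry from the 2.5 ns LHC RF bucket spacing.
C = 299_792_458.0      # speed of light, m/s
RF_BUCKET = 2.5e-9     # RF bucket spacing, s

# A satellite sits one RF bucket behind the main bunch:
bunch_spacing = C * RF_BUCKET            # ~0.75 m behind the main bunch

# Both beams travel toward each other, so a satellite of one beam meets
# the main bunch of the other beam halfway between the two spacings:
satellite_ip_offset = bunch_spacing / 2  # ~0.375 m from the nominal IP
```

This reproduces the 75 cm bunch spacing and the ±37.5 cm satellite interaction points used in the answer below.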

These satellite interactions will have negligible acceptance in the vertex trigger due to their offset in z. The satellite interactions at z = +/- 37.5 cm can in principle add a small rate to the online luminosity measurement (see answer to question A2) through ghost coincidences of the two silicon planes involved. However, this can be corrected for using the full silicon vertex detector, as mentioned above in the answer to question A2.

[ref] Sylvain Weisz, LEMIC meeting, 31.7.97

------------------------------------------------------------------------

> 5) How sensitive are the various parts of the trigger to movements
> of the beam? There is a discussion for the vertex trigger, but
> how about the tracking trigger? For the vertex trigger, the
> statement is that the beam needs to be stable to 200 microns ...
> although this will be true in the short term, over periods of weeks or
> months experience at HERA e.g. shows that the beam can drift by
> significantly more than this? What is your strategy to cope with this?

Beam movements at the interaction point of LHCb will lead to a partial separation of the two beams at the IP and therefore to a reduction of the luminosity. Assuming the relative luminosity can be measured and corrected to <10% (confirmed by LHC), we can expect that relative movements of the beams at our IP can be controlled to better than 100 microns.

The beam offset affects the trigger only through the second-order effect of the r-phi geometry not being exact at large offsets. The effect seen in the current version of the algorithm is larger than this. We are therefore confident that we can decrease the dependence of the trigger efficiency on the vertex position by improving the trigger algorithm, and we are working towards an improved version.

We envisage mounting the whole vertex assembly on verniers, so that we can alter the position and tilt of the vertex detector as a unit according to beam conditions.

The position resolution used by the tracking trigger is (in the inner part) at most 1 mm, beam movements thus do not affect its performance.

-----------------------------------------------------------------------

> 6) Clearly small changes in the characteristics of the non-b signal
> can have major effects on trigger rates and efficiencies. What
> variations are caused by using the extreme limits of the currently
> determined proton structure functions? In particular, how do such
> extremes affect the E_t and p_t distributions? Will the relevant x
> regions for LHC have been measured by HERA, and with what precision?

A study of the p_t distributions of hadrons accepted by the LHCb spectrometer for different sets of structure functions has been initiated. Results will be communicated as soon as they are available.

---------------------------------------------------------------------

> 7) How to cope with variations in the machine backgrounds by
> factors of 2 - 4? For ATLAS/CMS, it is always said that the
> real interaction rate far exceeds any machine backgrounds... is
> this also true for LHCb?

As known from previous studies made, e.g., for IP1, the characteristics of the machine-generated background depend strongly on the accelerator layout and the optics close to the IP, changing with them by factors or even orders of magnitude.

A detailed study of this kind of background therefore requires a frozen layout and optics, which are not yet fixed for the LHCb IP8. Preliminary considerations based on previous experience were formulated in LHCb note 97-013, which stressed the necessity of an expanded study; this is now under consideration.

A non-negligible effect is expected from beam-halo muons, which have some probability of firing the L0 muon trigger. Quantitative studies of this effect are under way.

-------------------------------------------------------------------

> 8) Figure 6.3 on page 34 of the TP worries me. If the L0 trigger
> rate increases by 25% from the design (is 1 MHz design, or maximum?)
> then you start to run into trouble very quickly... shouldn't you,
> to be safe, really reduce the readout time to options A or B, i.e.
> 500 nsecs?

The experiment is specified for a maximum L0 trigger rate of 1 MHz. However, in all the various aspects of this limitation some safety is built in (see also the answer on the TTC system). A 10% safety margin in the behaviour of the derandomizer buffers is considered enough as a technical contingency; therefore version D in table 6.1 was chosen as the baseline option. Choosing a readout speed of 500 ns would have significant cost-relevant impacts on the way the experiment is read out at L0 time.

In practice the L0 trigger rate will need to be adjusted such that there is some safety margin to cope with varying beam quality and luminosity during a fill. An automatic tool to scale the trigger conditions is foreseen to make optimal use of the total bandwidth available.

----------------------------------------------------------------------

> [B] More specific points
> 1) 3D flow in general
> -What is the status of the 3D flow chip?
> -Have prototypes been made?
> -If so, what was the performance?

Status of the 3D-Flow:

A complete netlist exists for a four-processor 3D-Flow ASIC, implemented in 0.35 micron CBA technology at 3.3 V. The simulations show a dissipated power of 884 mW at 60 MHz and a die size of 63.75 mm^2. The VHDL code for the chip is written in "Generic HDL" using a "style" targeted to an ASIC. On the basis of this netlist, a prototype production run of 3D-Flow chips could be initiated whenever it is deemed necessary.

On the other hand, in view of the continuing rapid advances in ASIC technology, and in order not to incur unnecessarily the Non-Recurring Engineering (NRE) costs associated with a prototype run, it appears more prudent to delay the production of prototypes to a date closer to the required time of utilization. One can in fact expect that coming years will see the routine use of 0.25 or 0.18 micron technologies, which will in turn bring, at reduced cost, smaller dies, better yields, lower power consumption and higher speed.

Even though there are very strong reasons to delay production, there are equally strong reasons to produce a prototype sample of 3D-Flow chips that would allow its functionality to be tested, especially in the context of an assembly consisting of a large number of chips. In order to satisfy this requirement, while still avoiding unnecessary early NRE expenditure, we have given serious consideration to an FPGA version of the 3D-Flow chip. FPGAs have now reached the size of 250,000 gates/chip in 0.25 micron technology and are moving towards 0.18 micron. Manufacturers of large FPGAs such as Altera, ORCA-Lucent and Xilinx will start delivering the new families of devices this year (in some cases at lower cost than before), with smaller dies, more gates, lower power consumption, etc.

In this light, it has become feasible to fit the entire 3D-Flow processor into a single new-generation FPGA chip, and we have taken steps to take advantage of this development. After acquiring the necessary tools, we have ported the 3D-Flow HDL code to the FPGA environment and designed an Altera FPGA version of the 3D-Flow processor. By this summer it will be equally possible to design a single-chip FPGA for the other two vendors, Lucent and Xilinx, and we plan to follow that route too. At that point, an FPGA 3D-Flow production run will become possible at much reduced NRE cost.

> -In the 3D flow implementations, what happens when individual processors > malfunction?

Handling of processor malfunctions:

An essential part of the 3D-Flow design is that every single processor is individually accessible by a supervising host via an RS-232 line. This feature provides the capability of periodically testing the processors by downloading test patterns and/or test programs. In the case of a suspected or detected malfunction, the processor could be tested remotely and its performance diagnosed.

In the event of a catastrophic malfunction (e.g. a given processor failing completely to respond), normal operation, to the exclusion of the sick processor, could still be maintained by downloading into all the neighbours a modified version of the standard algorithm, instructing them to ignore the offending processor. Obviously physics considerations would dictate whether such a temporary fix is acceptable, but the system does contain the intrinsic capability of fault recovery via purely remote intervention.

----------------------------------------------------------------------

> 2) Muon trigger:
> -What efficiencies are assumed per chamber and station?
> -What happens if sectors malfunction/need to be switched off?
> -For the 3D flow implementation - 45K separate adjustable delays seems
> very undesirable? Even if this is possible, are the delays stable at the
> 3 - 5 nsec level presumably required to remain in synch at the 3D flow
> chip?
> -Note 97-024 implies a solution with the processors in front of the
> shielding wall. Surely the radiation levels here will be too high??
> -Why are the results for the muon trigger shown in 12.23 of the TP so
> different from those in note 98-021? Given that the improvement is
> shallow as a function of cut-off, particularly for the pi and mu cases,
> is this a useful trigger, particularly since in principle one ought to
> compare with making the same harder pt cut-off at L0?

(For the answer to the last point see question B6.)

Question: What efficiencies are assumed per chamber and station?

Answer: Based on prototype studies of the CSCs and MRPCs, we expect individual planes of both types of detector to have efficiencies >99% at the rates at which they will be asked to operate. Therefore, we expect the combinations of 3-out-of-4 CPC planes and the OR of two MRPC planes to have close to 100% efficiency. We have used 100% efficiencies in calculating the L0 trigger efficiencies.

Question: What happens if sectors malfunction/need to be switched off?

Answer: Given the redundancy of three out of four independent planes in the CPC regions and the OR of two in the MRPC region, we can continue operation (albeit with lower efficiency in the problem regions). In the CPC region, if we go to 2-out-of-3 when a plane malfunctions, there will be some degradation of the timing, since the four planes are required not only for good efficiency but also for timing. Moreover, if the malfunction occurs in M3, M4 or M5, we can define the starter for the muon algorithm as a double rather than a triple coincidence. In any case, it is possible to continue operations with good efficiency.

Question: For the 3D-Flow implementation - 45 K separate adjustable delays seems very undesirable? Even if this is possible, are the delays stable at the 3-5 ns level presumably needed to remain in synch at the 3D-Flow chip?

Answer: The delays were introduced to compensate for the different z positions of the planes, different lead lengths from the pads to the periphery of the chambers, and different lead lengths from the preamp/discriminators to the 3D-Flow and the data pipeline.

We expect that a 64 ns total delay range with a 4-bit programmable setting (resolution of 4 ns) will be adequate to handle the variation we need to accommodate. Such delays are commercially available and have part-to-part variation of less than the 4 ns resolution.
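The stated parameters imply 16 programmable steps of 4 ns each. A small helper (purely illustrative; the function is ours, and the assumption that codes 0-15 map to 0-60 ns rather than 4-64 ns is ours as well) shows how a required delay maps to the nearest setting:

```python
def delay_setting(required_ns, resolution_ns=4.0, bits=4):
    """Nearest programmable code for a 4-bit, 4 ns resolution delay line.

    Assumed mapping (illustrative): code 0..15 gives 0..60 ns.  Returns
    (code, actual_delay_ns); delays outside the range are clamped."""
    code = round(required_ns / resolution_ns)
    max_code = (1 << bits) - 1          # 15 for a 4-bit setting
    code = max(0, min(code, max_code))  # clamp to the programmable range
    return code, code * resolution_ns
```

For example, a required 17 ns delay would be programmed as code 4 (16 ns actual), a residual error well below the 4 ns resolution.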

Question: Note 97-024 implies a solution with the processors in front of the shielding wall. Surely the radiation level will be too high??

Answer: The position of the processor (for either the Marseilles or the 3D-Flow option) is, indeed, in front of the shielding wall at one side of the detector, close to the M5 muon plane. At that position, based on MARS calculations, we expect a radiation level of less than a few rads per year, which is quite manageable. We would not expect to have to use radiation-tolerant electronics at such low doses.

For the alternative L0 muon trigger the questions can be answered as follows:

* What efficiencies are assumed per chamber and station?

In working out the alternative solution (note 97-024), we assumed a chamber/station efficiency of 100% in the current simulation.

* What happens if sectors malfunction/need to be switched off?

If some sectors are malfunctioning, a priori this could affect the "efficiency" of the fast identification of a muon track by more than 11%. A majority logic using 3 out of 4 of MU2 to MU5 has not yet been tried. However, if important parts of the FE electronics which construct the sectors are dead (e.g. a group of many neighbouring sectors, or all of them), then the Level-0 muon trigger cannot work. Such a malfunction has to be repaired.

* Note 97-024 implies a solution with the processors in front of the shielding wall. Surely the radiation levels here will be too high?

The location of the electronics for the muon trigger is not decided yet. It can be located a) close to the muon chamber FE electronics, b) in the racks dedicated to the fast electronics, close to the zone wall between the muon chambers and the shielding wall, or c) in the electronics barracks behind the shielding wall. In most cases the radiation dose is low:

o In case a) the dose differs between the muon chamber locations. For MU1 it is expected to be between 10 and 30 krad/y, requiring radiation-tolerant electronics. For the other muon chambers the dose is below 1 krad/y, allowing the use of standard electronics.

o In case b) the dose is below 1 krad/y (this point has to be checked with H.J. Hilke). Thus standard electronics can be used.

o Behind the shielding wall the dose will be negligible.

* What do you perceive as the critical items in the progress of the system you have proposed, and what do you think would be a reasonable schedule for addressing such items ? what program of studies/tests/simulations, if any , do you plan to follow in the period, let's say, between now and the end of '99 ? (S. Connetti)

The identification of the critical items of our system is under way, as is the definition of the method to address them. We plan to use Verilog-based simulation to study the behaviour of this system in depth. We also have in mind the construction of dedicated hardware for the parts which cannot be well described by a simulation. In the coming weeks we will be in a position to determine a rough schedule. In parallel, we have to improve our simulation by taking into account the more severe physics backgrounds producing extra hits in the chambers, which could deteriorate the performance of the fast muon identification algorithm.

-------------------------------------------------------------------------

> 3) Pile-up veto. > -How long does it take to get all the data in for an average event?

The system runs at the LHC frequency: the data are read in, in parallel, within 25 ns.

> -What is the estimated latency?

Latency is 0.625 microsec

> -What happens if coherent noise causes all strips to fire?

We get an overflow in the histograms; this should be detected in the peak finders, and no vertex should be produced. All the error and test conditions still have to be specified in detail.
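The intended behaviour can be sketched as an overflow guard in a toy peak finder (purely illustrative; the function, thresholds and interfaces are invented here and do not describe the actual pile-up hardware):

```python
def find_vertex(histogram, threshold, max_total):
    """Toy sketch of the pile-up vertex search.

    Returns the bin index of the highest peak above `threshold`, or None
    when the histogram overflows (total entries above `max_total`, e.g.
    coherent noise firing all strips), so that no fake vertex is produced."""
    if sum(histogram) > max_total:
        return None                      # overflow detected: flag, no vertex
    peak = max(range(len(histogram)), key=histogram.__getitem__)
    return peak if histogram[peak] >= threshold else None
```

A clean single-vertex histogram returns its peak bin; a histogram saturated by coherent noise trips the overflow check first and yields no vertex.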

> -Where is this processor physically? >

Outside the detector, in the pit at a distance of several meters

Most of this and further information can be found in LHCb note 97-016.

-----------------------------------------------------------------

> 4) L0 decision unit
> -Why does it use the gamma coordinates at the preshower?
> -Does the L1 trigger have access to any tracking detector info. for
> gamma triggers? The TP implies not - but then can L1 improve on the
> gamma trigger?

Gamma positions from the preshower are sent to the L0 decision unit to allow a more complex decision, for instance calculating a quality factor for the event based on the position and p_t of the L0 candidates found.

To improve the selectivity, it is being discussed to use the L1 tracking trigger to establish an isolation criterion for the photon. The implementation of this would, however, require some additional hardware.

More detail can be found in the new "trigger corrected" version of our note in cernsp ~kostina/public/note97-015.2.ps

-----------------------------------------------------------------------

> 5) L1 vertex trigger
> -What happens if some sections of first 3 stations are dead?

The algorithm does not rely on specific triplets of stations for track definition. A track will be found if three successive hits are seen anywhere in the detector. Evidently, dead sections will have an impact on trigger efficiency through deterioration of hit efficiency.

> -Doesn't the multiplication of probabilities to give a total event
> probability produce a multiplicity dependent bias?

The total event probability is not taken as the multiplication of individual track probabilities. Therefore, although there is a small dependence on event multiplicity, the effect is small.

> -How does the tail of the latency vary with increasing noise in the
> detectors?

We do not expect levels of noise corresponding to more than a few percent of the number of real hits. We would raise the detector thresholds accordingly if needed.

No systematic study of latency versus noise has been performed yet. We are planning to introduce a common framework for timing and performance studies, which was not available for the TP, where speed and performance were assessed separately.

However, the only part of the algorithm affected by increased noise will be the track finder, where the time taken increases as the square of the number of hits. Very roughly, a 20% increase in the number of hits would result in an overall latency about 20% higher. The track finder has not been fully optimised yet and is currently responsible for half the total latency; we envisage bringing this down to a third.
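The quoted 20% can be checked directly from the stated assumptions (the track finder is half the total latency and scales quadratically with the hit count, the rest is unaffected; the function name is ours):

```python
def latency_scale(hit_increase, finder_fraction=0.5):
    """Overall latency scale factor when the track finder, currently
    `finder_fraction` of the total latency, grows quadratically with
    the number of hits while the rest of the algorithm is unaffected."""
    return (1 - finder_fraction) + finder_fraction * (1 + hit_increase) ** 2
```

With the finder at half the latency, a 20% hit increase gives 0.5 + 0.5 * 1.2^2 = 1.22, i.e. about 20% higher overall latency; if the finder's share is reduced to a third as envisaged, the same hit increase costs only about 15%.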

> -With increasing beam background?

Beam background is not expected to be in any way dominant at the LHC.

> -What is the L1 VTX rejection for events already passing L0? Note 98-006
> implies the numbers are for all events, not L0 passthroughs.

All the numbers given in the TP are for events that have already passed the L0. Note 98-006 was done earlier and gives numbers without L0. This is the reason that the numbers given in the two documents differ by a few percent.

> -What are the arrangements for monitoring and checking performance -
> deciding on requirement for new alignment constants? This is a complex
> and high performance system - monitoring will be vital.

We envisage using the secondary port of the processors in the trigger farm to collect monitoring and quality-control data periodically. There will be a dedicated processor in charge of monitoring and quality control.

Alignment-constant calculation and updating is an important issue. For trigger use we envisage a simplified arrangement of three (or possibly four) alignment constants per detector wafer, and we will use real events to align. A few times 10k events at the start of a new run will be sufficient to calculate/update those alignment constants. During this period the vertex trigger decision will be 'reject'. The alignment calculation will be performed on the monitoring/quality-control processor using either raw or digested data from the trigger processors, which would be running a dedicated alignment algorithm at the time.

----------------------------------------------------------------------

> 6) Tracking trigger
> -I am not clear of how much is gained by the level 1 trigger if the
> vertex trigger is already applied - does it select different events?

Unfortunately this question has not yet been studied carefully. A few qualitative statements can however be made. The type of information used in the two systems is orthogonal: the track trigger rejects minimum-bias events where the high-p_t L0 trigger decision was based on a fake particle or on a wrong p_t measurement, while the L1 vertex trigger rejects events which do not have the required vertex topology. Therefore we expect that the two rejection factors can be multiplied if both triggers are applied. Good B events are selected by both systems with relatively high efficiency, so we expect the two triggers to select partly different and partly the same events. This overlap is particularly useful for monitoring efficiencies. See also above.

> -In general, more detail on the overall efficiency, latency, performance
> of the L1 would be helpful.

The tracking trigger will be studied in greater detail in the near future. We believe that its implementation is not a critical issue.

> -Why are the results for the muon trigger shown in 12.23 of the TP so
> different from those in note 98-021?

The plots differ in the assumed cuts applied on the L0 trigger. In the TP the same values were used as defined in 12.3.4 on page 111. The lower the L0 p_t cutoff, the more the tracking trigger can improve the p_t measurement.

> Given that the improvement is
> shallow as a function of cut-off, particularly for the pi and mu cases,
> is this a useful trigger, particularly since in principle one ought to
> compare with making the same harder pt cut-off at L0?

Comparing Fig 12.12 (L0 muon trigger performance) with Fig 12.23 (L1 track trigger performance) shows that applying the tracking trigger with a p_t threshold of 1.4 GeV reduces the background by a factor of 5, while the signal efficiency is reduced by 30%. If the same minimum-bias reduction were required from the L0 trigger alone, an increase of the threshold to 3 GeV would be necessary, which would cause the signal efficiency to drop by more than a factor of 3. Similar numbers can be read from the graphs for the hadron trigger.

-----------------------------------------------------------------------

> 7) Level 2
> -It would be nice to see figures for c separate from uds. What
> suppresses c particularly in L0 and 1? Simply the pt cuts?

For generic uds, c and b events generated over the full 4*pi solid angle, which have already been selected by the Level-0 and Level-1 triggers, the efficiencies to pass the Level-2 trigger are:

uds = 5.2% ; c = 18% ; b = 66%

These numbers may be compared with the less detailed breakdown given in Table 12.5 of the Technical Proposal.

Charm events are indeed suppressed by Level-0 using the P_t cuts, since they have a relatively soft Pt spectrum. The vertex trigger of Level-1 suppresses them further, since charmed hadrons have a shorter lifetime and lower decay multiplicity than beauty hadrons. Sometimes however, the Level-1 vertex trigger fires, after finding a couple of high impact parameter tracks from charm hadron decay plus one or two additional large impact parameter tracks due to multiple scattering. The Level-2 vertex trigger can usually reject such events, since it correctly parametrizes the impact parameter resolution taking into account multiple scattering (thanks to its knowledge of the particle momenta).

Comparing with Tatsuya's presentation:

After L1, Tatsuya said - uds : c : b = 0.79 : 0.15 : 0.058

The L2 efficiencies given above are - uds, c, b = 5.2%, 18%, 66%

So after L2, one expects - uds : c : b = 0.79*5.2 : 0.15*18 : 0.058*66 = 0.39 : 0.25 : 0.36

This agrees fairly well with Tatsuya's presentation - uds : c : b = 0.44 : 0.23 : 0.33

Any differences are probably due to the very limited number of c Monte Carlo events which we have.
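The normalisation above can be checked with a short script; the input numbers are exactly those quoted in the text:

```python
# Fractions after L1 (uds : c : b), as quoted above from Tatsuya's presentation.
after_l1 = {"uds": 0.79, "c": 0.15, "b": 0.058}
# Level-2 efficiencies for each flavour class (5.2%, 18%, 66%).
l2_eff = {"uds": 0.052, "c": 0.18, "b": 0.66}

# Multiply and renormalise to get the expected flavour mix after L2.
raw = {k: after_l1[k] * l2_eff[k] for k in after_l1}
total = sum(raw.values())
after_l2 = {k: raw[k] / total for k in raw}
print({k: round(v, 2) for k, v in after_l2.items()})
# → {'uds': 0.39, 'c': 0.25, 'b': 0.36}
```

This reproduces the 0.39 : 0.25 : 0.36 mix quoted above, in fair agreement with the 0.44 : 0.23 : 0.33 of the presentation.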

--------------------------------------------------------------------------

> 8) DAQ
> - What happens to the readout network if the event size/throughput increases by 25%? 50%?

We aim to have a technical safety factor of 2 with respect to the "normal working conditions" stated in the TP. This means that the readout network should be able to sustain an aggregate throughput of twice the nominal value of about 4 GByte/s without congestion.

This doubling of throughput could be caused by a doubling of the event size or a doubling of the trigger rate or a combination of both, as explained in [ref], p.12, 2.4, and in the TP, p.130.

We also have to envisage possible overflow conditions; a discussion is given in [ref], chapter 6. To summarise, we can consider two types of overflow:

1) at the level of the front-end read-out units (RU), due to an excess of data somewhere or to an unusually high trigger rate. In this case new triggers are blocked until space is available again in the RUs.

2) SFC buffers are protected against variations in data throughput by the read-out network. However, they may still overflow as a consequence of a mismatch between the processing power and the event rate. In the case of local effects, re-distribution of events is conceivable. If the effect is global (i.e. the total processing power does not match the event-building capacity), the trigger rate must be reduced.
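The two overflow-handling policies can be sketched schematically as follows. This is an illustrative model only; all class and function names are hypothetical and do not correspond to actual LHCb DAQ code:

```python
# Schematic model (hypothetical, not actual LHCb DAQ code) of the two
# overflow-protection mechanisms described above.

class ReadoutUnit:
    """Case 1: a front-end read-out unit that blocks new triggers when full."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0

    def accept_trigger(self, fragment_size):
        # Block new triggers while accepting the fragment would overflow the buffer.
        if self.used + fragment_size > self.capacity:
            return False  # trigger blocked until space is freed in the RU
        self.used += fragment_size
        return True


def sfc_action(local_overload, global_overload):
    """Case 2: SFC buffers overflow when processing power lags the event rate."""
    if global_overload:
        # Total processing power below event-building capacity: throttle at source.
        return "reduce trigger rate"
    if local_overload:
        # Only some farm nodes overloaded: move events to less loaded SFCs.
        return "redistribute events"
    return "ok"


ru = ReadoutUnit(capacity_bytes=100)
print(ru.accept_trigger(60), ru.accept_trigger(60))  # → True False
print(sfc_action(local_overload=True, global_overload=False))
```

The point of the sketch is only the decision logic: back-pressure at the RU level, and rate reduction only when the mismatch is global.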

Reference:

[ref] "DAQ Implementation Studies", LHCb 98-029, 9 February 1998

-------------------------------------------------------------------------------

> 9) Computing
> - The requirements on the ODBMS are truly frightening. BaBar is finding problems with ODBMS/Operating system compatibility. What steps will you take/how confident are you that you will have a product which will cope with your requirements?

The ODBMS is a relatively new technology (the first ODMG standard dates from 1993) and Objectivity/DB is a commercial tool. There are clear reasons for choosing this technology and this product as the basis of the HEP event storage model, as outlined in the many RD45 status reports and related documentation. However, there are clear risks as well, some of which are outlined below. It is obviously important to understand all the risks and weigh them against the advantages, and this we intend to do before any final decisions are taken.

Objectivity/DB is a commercial software product, and therefore there are always potential problems of incompatibility with other commercial software, in particular the operating system. Objectivity/DB is available on most UNIX and NT platforms from all the major vendors. An example of the sort of problem that can happen is the one you mention: Objectivity discovered an aCC compiler bug that made it impossible to compile the Objectivity/DB kernel code on HP/UX, and support for Objectivity/DB on HP/UX therefore had to be temporarily dropped by BaBar. The likelihood of this happening depends crucially on the market for the commercial tool. For example, in the case of ORACLE the market is so large that computer vendors make rigorous checks that ORACLE runs before releasing new versions of their OS, and thus the risk is minimal. ORACLE was in quite widespread use in HEP during the LEP era and was used quite successfully. The worry is that at present the market for ODBMSs is still small, although there has recently been a breakthrough into the telecommunications and scientific-applications markets. The main ODBMS vendors are small companies and therefore vulnerable. The evolution of the market will therefore be an important factor in determining the eventual solution adopted.

Portability of code and data between ODBMS vendors is an important issue, in order to minimise the reliance on a particular product. The ODMG standard promises portability of code and data. However, none of the ODBMS vendors is 100% compliant with the standard, and this is another worry. Although Objectivity/DB is compliant at the 75% level, there are certainly specific features of this product that are exploited. The impact on the experiment's application code can however be reduced by placing an interface layer of code between it and the Objectivity/DB interface. Another ODBMS (Versant) is being evaluated by the RD45 project to understand the work involved in migrating from one system to another.
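Such an interface layer can be sketched as follows. This is a schematic illustration only: the class names are hypothetical, and the trivial in-memory backend merely stands in for a vendor-specific implementation (e.g. one wrapping Objectivity/DB or Versant):

```python
# Hypothetical sketch of an interface layer isolating application code from a
# specific ODBMS product. Names are illustrative, not actual LHCb software.
from abc import ABC, abstractmethod


class EventStore(ABC):
    """Vendor-neutral persistency interface seen by application code."""

    @abstractmethod
    def write(self, event_id, payload):
        ...

    @abstractmethod
    def read(self, event_id):
        ...


class InMemoryStore(EventStore):
    """Stand-in backend; a real one would wrap Objectivity/DB (or Versant)
    behind exactly the same interface."""

    def __init__(self):
        self._db = {}

    def write(self, event_id, payload):
        self._db[event_id] = payload

    def read(self, event_id):
        return self._db[event_id]


store = InMemoryStore()
store.write(42, "raw event data")
print(store.read(42))
```

With this pattern, migrating to another ODBMS vendor touches only the backend class, not the simulation, reconstruction or analysis code built on top of the interface.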

The eventual solution must satisfy our specific technical requirements. These requirements are still being studied, and experience with Objectivity/DB is still being gained with small evaluation and prototyping projects. Data replication and clustering strategies, management of database schemas, and the interface to the mass storage system are all issues that require further study. In LHCb we are currently specifying projects to model the detector geometry and the event structure, and to build storage models using Objectivity/DB. We will build simulation, reconstruction and analysis frameworks and study the data-storage components in realistic conditions with real users. In this way we can get a detailed understanding of whether the solutions can satisfy our needs.

The data-recording needs of LHCb are comparable to those of the COMPASS experiment, which will begin full data taking in the year 2000, with an event sample expected to reach 300 TB per year. This will provide us with an early test of populating the database at the expected data-taking rate of ~10 MB/s, and also of the scalability of the data storage model to manage these very large data samples. LHCb's needs are a factor of 3-4 smaller than those of ATLAS, CMS and ALICE. ALICE has the added problem of collecting all its data in one month, the time the LHC is operated in heavy-ion mode, and therefore at a very high rate (> 1 GB/s). A project studying high-speed mass storage (~1 GB/s in 2005) is currently being proposed to the LCB, with a milestone investigating 100 MB/s by the year 2000. This project, however, is not (a priori) using Objectivity/DB but is rather trying to probe the limits of storage technology (I/O, mass storage system, robotics).

To summarise, in taking this approach we intend to join our efforts to those of the other HEP experiments that have decided on this strategy. Our timescale is such that we can clearly benefit from the experience of the pioneers (including NA45, COMPASS and BaBar), and we have the benefit of time to see how the commercial market evolves. Clearly there is no need to take final decisions now, and we must continue the R&D effort for at least two years, at which time the situation should be much clearer.

-----------------------------------------------------------------------

> from Andrei Rostovstev:
> - Have you considered a possibility to build a high-pt low-level track trigger (similar to HERA-B)? This option seems to have more flexibility than the calorimetric hadron trigger. Example of tests of gas pixel chambers for HERA-B is encouraging: high efficiency, low material thickness, possibility of fast signal within 25 ns for cells of 4*4 mm, low occupancy allowing to combine few small cells into one channel, presumably better transverse momentum resolution than calorimetric trigger, compactness in space giving more freedom to use longitudinal space budget for the whole experiment, etc.
> - Would it be possible to utilize slow charged hadrons for tagging in the present tracker configuration?

HERA-B has a high-p_t pretrigger, which sends O(0.5) tracks per BX to the FLT and thus does not by itself significantly reduce the minimum-bias rate. The FLT performs the actual event reduction on the basis of specific track selections and invariant-mass cuts; for the FLT, the pretrigger input is assumed to be a "small additional load".

In contrast to HERA-B, our L0 needs to be a real trigger, which reduces the event rate for each channel by about a factor of 30. A recent study by the HERA-B collaboration indeed shows that requiring two high-p_t (>1.5 GeV/c) tracks could reduce the minimum-bias rate by about a factor of 20.

However, the trigger philosophy of LHCb is more inclusive. It allows triggering generally on B-decay events, requiring single high-p_t identified leptons or hadrons together with information about the vertex topology. The hadron calorimeter allows hadrons to be identified and is therefore in principle better suited to select inclusive high-p_t hadron events. We believe that the chosen combination of the L0 hadron calorimeter trigger with the L1 track trigger is optimal for selecting B physics in the LHC environment. Without a hadron trigger, the acceptance for important channels like B_s -> D_s K, B_d -> D K^* and B_d -> pi^+ pi^- would typically be reduced by a factor of 4 or more.

----------------------------------------------------------------------------