
Status Answer / reminder Trigger Meeting

Date: 4/28/98
Time: 8:47:13 AM
Remote Name:
Remote User: lhcb


Dear Colleagues

To discuss the questions of the referees and our answers to them, I remind you of our trigger meeting on

April 28, 1000h, room 40-R-C10
*******************************

The status of the answers is summarised below; it serves at the same time as an agenda.

See you tomorrow, Ueli


VERSION 27.4.98 / 2300h

> Trigger/DAQ:
> from Brian Foster
> 1) More information on Timing and Fast Control requirements

The TTC system has always been identified as a clear candidate for common development between the LHC experiments. When the design began, the LHCb proposal did not exist, and hence the development was driven by the requirements of ATLAS and CMS. However, in the past year LHCb has discussed the special requirements of LHCb with the RD12 TTC designers (B. Taylor et al). These discussions are continuing in the spirit that the 'standard' TTC system must fulfill the requirements of all the LHC experiments (as requested by the LHCC).

The requirements for all the experiments are similar in terms of clock frequency, jitter on the clock etc.

We have identified the two main requirements on a TTC system that differ for LHCb compared to ATLAS or CMS:

a) A Level-0 accept rate of 1 MHz (as compared to <100 kHz for ATLAS and CMS)

b) The necessity of transmitting a further level of trigger decisions to the detector electronics at a rate of 1 MHz.

Point a) above should not be a problem for the TTC system, since for all experiments it transmits the Level-0 decision at 40 MHz. The high Level-0 trigger rate makes the event counter features of the TTC receiver chip unusable for us, since reading the counter out would mean the loss of 1 or 2 clock cycles. However, the event counter can be implemented externally to the TTC receiver chip.

Point b) above needed more discussion, since it was not foreseen in the design and implementation of the RD-12 system. We have studied the problems and are confident that we can transmit the Level-1 decisions at an average frequency of 1 MHz using the broadcast feature of Channel B of the RD-12 system. This channel has a maximum broadcast transmission frequency of 1.25 MHz. Clearly this imposes a stringent limitation on the average Level-0 trigger rate. However, trigger rates significantly higher than 1 MHz would immediately also cause problems elsewhere, e.g. in the readout of the Level-0 derandomizer buffers. (Note: instantaneous Level-0 trigger rates above 1.25 MHz will lead to a higher latency of the Level-1 trigger, which has to be absorbed in the Level-1 buffers.)
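The interplay between the 1 MHz average accept rate and the 1.25 MHz Channel-B broadcast limit can be illustrated with a toy queueing model (our own sketch, not the RD-12 design; only the two rates come from the text above, everything else is an assumption): Level-0 accepts arrive randomly at 1 MHz on average, and each broadcast occupies the channel for 0.8 us. Accepts that arrive while the channel is busy queue up and add to the Level-1 latency.

```python
import random

# Toy model of Channel-B broadcast queueing (illustrative only; the
# 1 MHz and 1.25 MHz figures come from the text, the rest is assumed).
# Level-0 accepts arrive as a Poisson process; the channel serves one
# broadcast every 0.8 us.
random.seed(42)

AVG_RATE_MHZ = 1.0        # mean Level-0 accept rate (accepts per us)
SERVICE_US = 1.0 / 1.25   # one broadcast every 0.8 us
N_ACCEPTS = 100_000

t = 0.0                   # arrival time of the current accept (us)
channel_free_at = 0.0     # time at which the channel next goes idle
total_wait = 0.0
max_wait = 0.0
for _ in range(N_ACCEPTS):
    t += random.expovariate(AVG_RATE_MHZ)   # next Poisson arrival
    start = max(t, channel_free_at)         # wait if the channel is busy
    wait = start - t                        # extra Level-1 latency
    channel_free_at = start + SERVICE_US
    total_wait += wait
    max_wait = max(max_wait, wait)

print(f"mean queueing delay: {total_wait / N_ACCEPTS:.2f} us")
print(f"max queueing delay:  {max_wait:.2f} us")
```

With these numbers the channel runs at 80% occupancy, so the average queueing delay stays of order a microsecond, but instantaneous rate fluctuations produce occasional much longer waits: the latency tail that the Level-1 buffers must absorb.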

Having studied the RD-12 implementation of the TTC system, we have concluded that the system is usable for LHCb. Our comments on the possibility of configuring the chip without the TTC system were taken into account when the TTC receiver chip was recently re-implemented. Unless new requirements come up during the detailed technical design of the LHCb detector (e.g. that a synchronous clock frequency of 80 MHz is needed at the front-end electronics), we do not believe that a re-implementation especially for LHCb is needed.

One concern, however, is not yet resolved, namely the question of transmission errors within the RD-12 system. This is a common problem for all LHC experiments and in this spirit it is currently under study.

Reference:
----------
LHCb 98-031, DAQ, "Timing and trigger distribution in LHCb", 9.2.1998

-------------------------------------------------------------------------------
>
> 2) Luminosity measurement, how it is done, whether this requires any
> particular special DAQ capabilities, such as rate? Or time-stamping? If
> there will be some sort of forward-backward Roman pots for elastic
> scattering, what are the rates? What is the accuracy that can be
> obtained? How does it compare with whatever the machine will be using
> for lumi optimisation?

Tatsuya and Nikolai are following this; see also the physics meeting.

-------------------------------------------------------------------------------
>
> 3) Related to the above, how to estimate dead-times as a function of
> particular triggers? Or will there simply be one overall dead-time
> number? Will it be averaged, or related to actual currents in particular
> bunches? Will latencies/rates for each trigger be a) available b)
> stored?

In general the system will be run such that the deadtime is very small, since in this new generation of experiments the pipeline is no longer stopped for readout. Triggering of events can continue even during the readout of earlier events. HERA-B is the first experiment to run such a system, and experience from there has influenced, and will certainly continue to influence, many details of the LHCb trigger and data acquisition.

However, due to dataflow limitations (CPU power and buffer size) and due to the finite size of the derandomizer buffers (see proposal page 34), deadtime can occur if any of the buffers in the system becomes full. In this case all triggering of the entire experiment will be blocked until enough empty buffers become available again or an operator intervenes (see also the answer to question B.8 and LHCb 98-029, "DAQ Implementation Studies").
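The buffer-full blocking can be sketched with a small simulation (the parameters here are illustrative assumptions, not the baseline values of the proposal): Level-0 accepts arrive at 1 MHz on average, each accepted event occupies a slot in a derandomizer of assumed depth 16 and is read out in an assumed 0.9 us, and an accept arriving while the buffer is full is counted as deadtime.

```python
import random

# Toy deadtime model for a derandomizer buffer.  The 1 MHz rate is
# from the text; the depth (16) and readout time (0.9 us) are
# assumptions for illustration, not the TP baseline values.
random.seed(1)

RATE_MHZ = 1.0       # mean Level-0 accept rate
READOUT_US = 0.9     # assumed readout time per event
DEPTH = 16           # assumed derandomizer depth (events)
N = 200_000

t = 0.0
completions = []     # readout completion times of buffered events
blocked = 0
for _ in range(N):
    t += random.expovariate(RATE_MHZ)
    completions = [c for c in completions if c > t]   # drop drained events
    if len(completions) >= DEPTH:
        blocked += 1                                  # buffer full: deadtime
    else:
        start = completions[-1] if completions else t # events drain in order
        completions.append(start + READOUT_US)

print(f"blocked fraction: {blocked / N:.4%}")
```

A deeper buffer or a faster readout makes the blocked fraction fall steeply; this trade-off between readout speed and buffer occupancy is what the readout options discussed below are about.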

Such occurrences, the reason for them and the total deadtime caused will be recorded in detail. However, since this depends on the history, the last event triggered before the deadtime occurs is usually not significant in any respect; for the same reason, correlations to particular bunch crossings are not expected.

The latency of a given event in the various trigger steps is, however, very interesting for understanding the system behaviour, and will be recorded carefully.

> What is the strategy envisaged for changing trigger conditions
> during a fill, or over longer periods?

In general trigger conditions should not be changed during a fill. Defining trigger conditions over longer periods will be done in close collaboration between the physics coordinator and the trigger coordinator.

> What is the strategy for ensuring
> that all triggers have enough redundancy that other independent triggers
> are available to measure their efficiency?

Level 0 triggers are redundant to a certain extent, since all the channels are also triggered partly by the decay products of the other b quark. This allows cross-checks of efficiencies. In Level 1 the tracking and the vertex trigger can be run in parallel, allowing efficiencies to be determined from the data, since these two triggers work on rather independent quantities.

To monitor the inefficiency of the pile-up veto system, a small rate of events will be triggered without the veto condition, allowing the system to be checked off-line.

Random triggers will be taken at a low rate to monitor off-line the detectors and the overall trigger system performance and stability.

-------------------------------------------------------------------------------
>
> 4) Related to the above, how to deal with "satellite" bunches in the
> machine if they exist? This is a significant problem at HERA, where the
> proton beam can have as much as 10% of its intensity in the next rf
> bucket. In principle, the luminosity measurement/experimental trigger
> will react very differently to events from these satellites.
>

Due to the 2.5 ns RF structure of the LHC, satellite bunches will occur at a distance of 75 cm from the main bunch. If such a satellite interacts with a main bunch, the nearest satellite interaction point will occur at 75/2 cm from the nominal interaction point. At HERA the proton satellite intensities are below 1% in normal operation; however, in cases where the filling timing adjustments are not optimal, values in excess of 10% have been observed. The LHC experts predict [ref] satellite capture rates normally of order a few per mille, the luminosity of satellite relative to main interactions being suppressed by better than 10^-4.
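The 75 cm and 75/2 cm figures follow directly from the RF bucket spacing; a minimal check:

```python
C_M_PER_S = 299_792_458.0   # speed of light
RF_PERIOD_NS = 2.5          # RF bucket spacing from the text

# A satellite one RF bucket away trails the main bunch by c * 2.5 ns:
spacing_cm = C_M_PER_S * RF_PERIOD_NS * 1e-9 * 100.0
print(f"satellite-to-main bunch spacing: {spacing_cm:.1f} cm")      # ~75 cm

# Two counter-rotating bunches offset by one bucket meet halfway,
# so the satellite interaction point is displaced by half the spacing:
print(f"displaced interaction point:     {spacing_cm / 2:.1f} cm")  # ~37.5 cm
```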

These satellite interactions will have negligible acceptance in the vertex trigger due to their offset in z. The luminosity measurement, based on the number of interactions per bunch crossing seen by the vertex detector, will not see them either.

[ref] Sylvain Weisz, LEMIC meeting, 31.7.97

------------------------------------------------------------------------------
> 5) How sensitive are the various parts of the trigger to movements
> of the beam? There is a discussion for the vertex trigger, but
> how about the tracking trigger? For the vertex trigger, the
> statement is that the beam needs to be stable to 200 microns ...
> although this will be true in the short term, over periods of weeks or
> months experience at HERA e.g. shows that the beam can drift by
> significantly more than this? What is your strategy to cope with this?
>

The beam offset affects the trigger only through the second-order effect of the r-phi geometry not being exact at large offsets. The effect seen in the current version of the algorithm is larger than this. We are therefore confident that we can decrease the dependence of the trigger efficiency on the vertex position, and we are working towards an improved version of the algorithm.

We envisage having the whole vertex assembly on verniers, so that we can alter the position and tilt of the vertex detector as a unit according to beam conditions.

Sent e-mail to Georg v. Holthey, 24.4.; expect statements on beam movements.

-------------------------------------------------------------------------------
> 6) Clearly small changes in the characteristics of the non-b signal
> can have major effects on trigger rates and efficiencies. What
> variations are caused by using the extreme limits of the currently
> determined proton structure functions? In particular, how do such
> extremes affect the E_t and p_t distributions? Will the relevant x
> regions for LHC have been measured by HERA, and with what precision?

Could Ivan and Sergio work on this? I guess this will probably mainly be measured at the Tevatron, not at HERA.

Sergio confirmed.

-------------------------------------------------------------------------------
>
> 7) How to cope with variations in the machine backgrounds by
> factors of 2 - 4? For ATLAS/CMS, it is always said that the
> real interaction rate far exceeds any machine backgrounds ... is
> this also true for LHCb?

As known from previous studies made e.g. for IP1, the characteristics of the machine-generated background depend strongly on the accelerator layout and optics close to the IP, changing with them by factors or even orders of magnitude. A detailed study of this kind of background therefore requires a frozen layout and optics, which is not yet fixed for the LHC-B IP8. Preliminary considerations based on previous experience were formulated in LHC-B note 97-013, which stressed the necessity of an expanded study; such a study is now under consideration.

-- from Vadim Talanov (talanov@mx.ihep.su)

An answer from Brad and Andrei is also expected.

-------------------------------------------------------------------------------
>
> 8) Figure 6.3 on page 34 of the TP worries me. If the L0 trigger
> rate increases by 25% from the design (is 1 MHz design, or maximum?)
> then you start to run into trouble very quickly ... shouldn't you,
> to be safe, really reduce the readout time to options A or B, i.e.
> 500 nsecs?

The experiment is specified for a maximum L0 trigger rate of 1 MHz. However, in all the various aspects of this limitation some safety is built in (see also the answer on the TTC system). A 10% safety margin in the behaviour of the derandomizer buffers is considered enough as a technical contingency; therefore version D in table 6.1 was chosen as the baseline option. Choosing a readout speed of 500 ns would have a significant impact on the way the vertex trigger is read out (multiplexing of 32 channels with a 40 MHz clock).

In practice the L0 trigger rate will need to be adjusted such that there is some safety margin to cope with varying beam quality and luminosity during a fill. An automatic tool to scale the trigger conditions is foreseen, to make optimal use of the total bandwidth available.

-------------------------------------------------------------------------------
> [B] More specific points
> 1) 3D flow in general
> -What is the status of the 3D flow chip?
> -Have prototypes been made?
> -If so, what was the performance?
> -In the 3D flow implementations, what happens when individual processors
> malfunction?

-> Sergio and Dario confirmed

-------------------------------------------------------------------------------
>
> 2) Muon trigger:
> -What efficiencies are assumed per chamber and station?
> -What happens if sectors malfunction/need to be switched off?
> -For the 3D flow implementation - 45K separate adjustable delays seems
> very undesirable? Even if this is possible, are the delays stable at the
> 3 - 5 nsec level presumably required to remain in synch at the 3D flow
> chip?
> -Note 97-024 implies a solution with the processors in front of the
> shielding wall. Surely the radiation levels here will be too high??
> -Why are the results for the muon trigger shown in 12.23 of the TP so
> different from those in note 98-021? Given that the improvement is
> shallow as a function of cut-off, particularly for the pi and mu cases,
> is this a useful trigger, particularly since in principle one ought to
> compare with making the same harder pt cut-off at L0?
>

-> Brad and Andrei (confirmed), Elie Aslanides, Peter Schleper

From Renaud and Elie:

We cannot come to the trigger meeting devoted to the referees' questions. However, we have prepared answers to the questions concerning our system.

* What efficiencies are assumed per chamber and station?

In working out the alternative solution (note 97-024) we assumed in the current simulation a chamber/station efficiency of 100%.

* What happens if sectors malfunction/need to be switched off?

If some sectors are malfunctioning, a priori this could affect the "efficiency" of the fast identification of a muon track by more than 11%. A majority logic using 3 out of 4 of MU2 to MU5 has not yet been tried out. However, if important parts of the FE electronics which construct the sectors are dead (e.g. a group of many neighbouring sectors, or all of them), then the Level-0 muon trigger cannot work. Such a malfunction has to be repaired.

* Note 97-024 implies a solution with the processors in front of the shielding wall. Surely the radiation levels here will be too high?

The location of the electronics for the muon trigger has not been decided yet. It can be located a) close to the muon chamber FE electronics, b) in the racks dedicated to the fast electronics, close to the zone wall between the muon chambers and the shielding wall, or c) in the electronics barracks behind the shielding wall. In most cases the radiation dose is low:

o In case a) the dose differs for the different muon chamber locations. For MU1 it is expected to be between 10 and 30 krad/y, requiring radiation-tolerant electronics. For the other muon chambers the dose is below 1 krad/y, allowing the use of standard electronics.

o In case b) the dose is below 1 krad/y (this point has to be checked with H.J. Hilke). Thus standard electronics can be used.

o Behind the shielding wall the dose will be negligible.

* What do you perceive as the critical items in the progress of the system you have proposed, and what do you think would be a reasonable schedule for addressing such items? What program of studies/tests/simulations, if any, do you plan to follow in the period, let's say, between now and the end of '99? (S. Connetti)

The identification of the critical items of our system is underway, as well as the method to address them. We plan to use Verilog-based simulation to study the behaviour of this system in depth. We also have in mind the construction of dedicated hardware for the parts which cannot be well described by a simulation. In the coming weeks we will be in a position to determine a rough schedule. In parallel, we have to improve our simulation, taking into account the more severe physics backgrounds producing extra hits in the chambers. This could deteriorate the performance of the fast muon-identification algorithm.

-------------------------------------------------------------------------------
> 3) Pile-up veto.
> -How long does it take to get all the data in for an average event?
> -What is the estimated latency?
> -What happens if coherent noise causes all strips to fire?
> -Where is this processor physically?
>

We (Leo and I) have discussed these items. All answers are in the note. Anyway, I would put them explicitly in the answer list and have suggested this to Leo. Leo is collecting comments. The luminosity measurement question is also for us, right?

Regards, Nikolai.

-------------------------------------------------------------------------------
> 4) L0 decision unit
> -Why does it use the gamma coordinates at the preshower?
> -Does the L1 trigger have access to any tracking detector info. for
> gamma triggers? The TP implies not - but then can L1 improve on the
> gamma trigger?

See statement 12.3.4., page 111. Gamma positions from the preshower are sent to the L0 decision unit to allow a more complex decision, for instance calculating a quality factor for the event based on the position and p_t of the found L0 candidates. An algorithm to improve the gamma candidates using the tracking trigger (a track veto or the like) has not been studied so far.

--- In the efficiency tables of chapter 15 it sometimes says "L1 trigger". What is really meant is the L1 vertex trigger.

-> Sergio and Ueli

-------------------------------------------------------------------------------
> 5) L1 vertex trigger
> -What happens if some sections of first 3 stations are dead?

The algorithm does not rely on specific triplets of stations for track definition. A track will be found if three successive hits are seen anywhere in the detector. Evidently, dead sections will have an impact on trigger efficiency through deterioration of hit efficiency.

> -Doesn't the multiplication of probabilities to give a total event
> probability produce a multiplicity dependent bias?

The total event probability is not taken as the multiplication of individual track probabilities. Therefore, although there is a small dependence on event multiplicity, the effect is small.

> -How does the tail of the latency vary with increasing noise in the
> detectors?

We do not envisage having levels of noise that would correspond to more than a few percent of the number of real hits. We would raise the detector thresholds accordingly if needed.

No systematic study of latency versus noise has been performed yet. We are planning to introduce a common framework for timing and performance studies, something which was not available for the TP, where speed and performance were assessed separately.

However, the only part of the algorithm affected by increased noise will be the track finder, where the time taken increases with the square of the number of hits. Very roughly, a 20% increase in the number of hits would result in an overall latency about 20% higher. The track finder has not been fully optimised yet and is currently responsible for half the total latency. We envisage bringing this down to a third.
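The scaling argument above can be made explicit in a small calculation (assuming, as stated, that the track finder accounts for half of the nominal latency and that its time grows with the square of the hit count):

```python
def relative_latency(hit_increase, track_finder_fraction=0.5):
    """Total latency relative to nominal when only the track finder
    (a given fraction of the nominal latency) scales quadratically
    with the number of hits."""
    quadratic_part = (1.0 + hit_increase) ** 2
    return (track_finder_fraction * quadratic_part
            + (1.0 - track_finder_fraction))

# 20% more hits, track finder = half the latency:
print(f"{relative_latency(0.20):.2f}x")          # 1.22x, i.e. ~20% higher

# If the track finder share is brought down to a third:
print(f"{relative_latency(0.20, 1.0 / 3.0):.2f}x")
```

Optimising the track finder down to a third of the latency would reduce the same 20% hit increase to roughly a 15% latency increase.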

(Note: the effect on the latency tails, which is what was actually asked, still needs to be addressed.)

> -With increasing beam background?

Beam background is not expected to be in any way dominant at the LHC.

> -What is the L1 VTX rejection for events already passing L0? Note 98-006
> implies the numbers are for all events, not L0 passthroughs.

All the numbers given in the TP are for events that have already passed L0. Note 98-006 was written earlier and gives numbers without L0. This is why the numbers given in the two documents differ by a few percent.

> -What are the arrangements for monitoring and checking performance -
> deciding on requirement for new alignment constants? This is a complex
> and high performance system - monitoring will be vital.

The importance of monitoring and quality control has not been underestimated. We envisage using the secondary port of the processors in the trigger farm to collect monitoring and quality-control data periodically. There will be a dedicated processor in charge of monitoring and quality control.

Alignment constant calculation and updating is an important issue. For trigger use we envisage a simplified arrangement of three (or possibly four) alignment constants per detector wafer, using real events to align. A few x 10K events at the start of a new run will be sufficient to calculate/update those alignment constants. During this period the vertex trigger decision will be 'reject'. The alignment calculation will be performed on the monitoring/quality-control processor using either raw or digested data from the trigger processors, which would be running a dedicated alignment algorithm at the time.


> 6) Tracking trigger
> -I am not clear of how much is gained by the level 1 trigger if the
> vertex trigger is already applied - does it select different events?

Unfortunately this question has not yet been studied carefully. A few qualitative statements can however be made. The type of information used in the two systems is orthogonal: the track trigger rejects minimum-bias events where the high-p_t L0 trigger decision was based on a fake particle or on a wrong p_t measurement, while the L1 vertex trigger rejects events which do not have the required vertex topology. We therefore expect that the two rejection factors can be multiplied if both triggers are applied. Good B events are selected by both systems with relatively high efficiency, so we expect that the two triggers will select partly different, partly the same events. This overlap is particularly useful for monitoring efficiencies. See also above.

> -In general, more detail on the overall efficiency, latency, performance
> of the L1 would be helpful.
>

The tracking trigger will be studied in greater detail in the near future. Its implementation is not believed to be critical.

> -Why are the results for the muon trigger shown in 12.23 of the TP so
> different from those in note 98-021?

The plots differ in the assumed cuts applied at L0. In the TP the same values were taken as defined in 12.3.4. on page 111. The lower the L0 p_t cutoff is chosen, the more the tracking trigger can improve the p_t measurement.

> Given that the improvement is
> shallow as a function of cut-off, particularly for the pi and mu cases,
> is this a useful trigger, particularly since in principle one ought to
> compare with making the same harder pt cut-off at L0?
>

Comparing Fig. 12.12 (L0 muon trigger performance) with Fig. 12.23 (L1 track trigger performance) shows that applying the tracking trigger with a p_t threshold of 1.4 GeV reduces the background by a factor of 5, while the signal efficiency is reduced by one third. If the same minimum-bias reduction were required from the L0 trigger alone, an increase of the threshold to 3 GeV would be necessary, which would cause the signal efficiency to drop by more than a factor of 3. Similar numbers can be read from the graphs for the hadron trigger.

-------------------------------------------------------------------------------
> 7) Level 2
> -It would be nice to see figures for c separate from uds. What
> suppresses c particularly in L0 and 1? Simply the pt cuts?
>

For generic uds, c and b events generated over the full 4*pi solid angle, which have already been selected by the Level-0 and Level-1 triggers, the efficiencies to pass the Level-2 trigger are:

uds = 5.2% ; c = 18% ; b = 66%

These numbers may be compared with the less detailed breakdown given in Table 12.5 of the Technical Proposal.

Charm events are indeed suppressed by Level-0 through the p_t cuts, since they have a relatively soft p_t spectrum. The Level-1 vertex trigger suppresses them further, since charmed hadrons have a shorter lifetime and a lower decay multiplicity than beauty hadrons. Sometimes, however, the Level-1 vertex trigger fires after finding a couple of high-impact-parameter tracks from a charm hadron decay plus one or two additional large-impact-parameter tracks due to multiple scattering. The Level-2 vertex trigger can usually reject such events, since it correctly parametrizes the impact parameter resolution taking multiple scattering into account (thanks to its knowledge of the particle momenta).

-------------------------------------------------------------------------------
> 8) DAQ
> -What happens to the readout network if the event size/throughput
> increases by 25%? 50%?
>

We aim at having a safety factor of 2 with respect to the "normal working conditions" as stated in the TP. This means that the readout network should be able to sustain an aggregate throughput of twice the nominal value of about 4 GByte/sec without congestion.

This doubling of throughput could be caused by a doubling of the event size or a doubling of the trigger rate or a combination of both, as explained in [ref], p.12, 2.4.

We also have to envisage possible overflow conditions. A discussion is given in [ref], chapter 6. To summarise, we can consider two types of overflow:

1) At the level of the front-end read-out units (RU), due to an excess of data somewhere or to an unusual trigger rate. In this case new triggers are blocked until space is available again in the RUs.

2) SFC buffers are protected against variations in data throughput by the read-out network. However, they may still overflow as a consequence of a mismatch between the processing power and the event rate. In the case of local effects, re-distribution of events is thinkable. If the effect is global (i.e. the total processing power does not match the event-building capacity), the trigger rate must be reduced.


[ref] "DAQ Implementation Studies", LHCb 98-029, 9 February 1998

-------------------------------------------------------------------------------
> 9) Computing
> -The requirements on the ODBMS are truly frightening. BaBar is finding
> problems with ODBMS/Operating system compatibility. What steps will you
> take/how confident are you that you will have a product which will cope
> with your requirements?
>

-> John Harvey, confirmed

-----------------------------------------------------------------------
>
> from Andrei Rostovstev:
>
> - Have you considered a possibility to build a high-pt low-level track
> trigger (similar to HERA-B)? This option seems to
> have more flexibility than the calorimetric hadron trigger.
> Example of tests of gas pixel chambers for HERA-B is encouraging:
> high efficiency, low material thickness, possibility of fast signal
> within 25 ns for cells of 4*4 mm, low occupancy allowing to combine
> few small cells into one channel, presumably better transverse momentum
> resolution than calorimetric trigger, compactness in space giving more
> freedom to use longitudinal space budget for the whole experiment, etc.
> - Would it be possible to utilize slow charged hadrons for tagging
> in the present tracker configuration?

HERA-B has a high-p_t pretrigger, which sends O(1) tracks per BX to the FLT and thus does not by itself reduce the minimum-bias events. The FLT does the actual event reduction on the basis of specific track selections and invariant-mass cuts. For the FLT this is assumed to be a "small additional load".

The trigger philosophy of LHCb is more inclusive: it allows triggering generally on B-decay events, requiring single high-p_t identified leptons or hadrons plus information about the vertex topology. In contrast to HERA-B, L0 needs to be a real trigger, which reduces the total event rate by about a factor of 10.

The hadron calorimeter allows the hadrons to be identified and is therefore in principle better suited to select inclusive high-p_t hadron events. It has been shown (ref. Guy) that the resolution of the hadron calorimeter is not crucial for the hadron trigger performance. We therefore believe that the chosen combination of the L0 hadron calorimeter trigger with the L1 track trigger is optimal for selecting B physics in the LHC environment.

----------------------------------------------------------------------------
>
> Physics:
> from An
> 1) Kaon tagging. Could you please give the numbers for expected
> dilution
> for electrons, muons and kaons separately to see the weight of the Kaons


> 2) K+pi channel. It doesn't seem to be statistically limited if you
> even require only lepton tagged events. Isn't?

Yes, it can additionally be used for monitoring. -> physics meeting

> 3) D+K* channel. Is it possible here to use the gamma-trigger to
> exploit the D0-kpipi0 channel or the pi0 from K*?

No.

> 4) Have I forgotten something else, where the hadronic trigger gives a
> unique opportunity for the physics reach of the experiment?

-> Physics meeting
