Online measurement of Beam Luminosity and Exposed Luminosity for Run II.

    13-MAR-1996  first draft
    22-OCT-1997  add derivations
    29-OCT-1997  more about integrated luminosity

Introduction:
-------------

The Run I Trigger System was designed and built to include all necessary
scalers to monitor the instantaneous luminosity of *THE BEAM*. With some
afterthoughts, efforts, and compromises, the Level 1 Trigger Hardware was
later modified to provide a measurement of the luminosity that the
*SPECIFIC TRIGGERS* were exposed to. The resulting implementation relied
on several assumptions and had serious limitations.

For Run II we would like to include all luminosity measurements as an
intrinsic part of the design of the Run II Level 1 Trigger equipment. The
resulting implementation should rely on a minimum of assumptions and
provide maximum flexibility.

Measuring delivered Beam luminosity:
------------------------------------

Measuring the Beam luminosity 'L_Beam' online in the L1 Trigger is
accomplished by monitoring "something" (i.e. some process) of known cross
section over a time interval, then using Poisson statistics to extract
the beam luminosity.

We monitor the selected process by tying a counter to an appropriate
detector (e.g. a signal detected in some L0 scintillator in Run I). The
detector and the attached scaler do NOT discriminate against multiple
interactions occurring in a given beam crossing; they only report AT
LEAST ONE occurrence of the monitored process. In other words, the
counter increments at most once per beam crossing, and only if the
monitored process has been detected for that crossing. The rate at which
the counter increments is our "Beam Luminosity Indicator". Once the cross
section and efficiency of the detector are determined, we can derive the
Beam luminosity as outlined below.

The number of occurrences per crossing for the monitored process follows
a Poisson distribution (i.e.
binomial with p << 1 and n >> 1: a small probability per particle, and
many particles in each bunch). The probability of 'x' interactions per
crossing is:

    P(x, mu) = (mu^x / x!) * exp(-mu)

where 'mu' is the average number of interactions per crossing.

The probability of NO interaction in a given crossing is:

    P(0) = exp(-mu)        (we drop 'mu' from the notation for 'P')

The probability of at least one interaction in a given crossing, which is
what the counter is measuring, is:

    P(x>0) = 1 - P(0) = 1 - exp(-mu)

We can extract 'mu':

    mu = - loge( 1 - P(x>0) )

Now we need to connect 'mu' to the cross section of the selected process.
The cross section 'sigma' is related to the luminosity 'L_Beam' and the
actual interaction rate 'Int_Rate' by:

    Int_Rate = L_Beam * sigma

Int_Rate is distributed across successive bunch crossings, and can be
expressed as the product of the average number of interactions per
crossing ('mu' above) and the rate of the crossings 'Crossing_Rate':

    mu = Int_Rate / Crossing_Rate = (L_Beam * sigma) / Crossing_Rate

Combining the two expressions of 'mu' we get 'L_Beam':

    L_Beam = - (Crossing_Rate / sigma) * loge( 1 - P(x>0) )

'Crossing_Rate' and 'sigma' are known constants. We thus only need to
measure P(x>0) to get 'L_Beam'. We call 'Indicator' the scaler monitoring
the known process as described above, and we also tie a scaler to the
number of Beam Crossings. Instead of using differential notation (i.e.
d(Indicator) and dt), it makes more sense to use what is available online
in hardware. The quantities used by TCC are scaler increments that we can
write as 'Delta_Indicator' and 'Delta_Crossing' (the number of beam
crossings is how time is measured).

    P(x>0) = Delta_Indicator / Delta_Crossing

thus

    L_Beam = - (Crossing_Rate / Cross_Section)
               * loge( 1 - Delta_Indicator / Delta_Crossing )

where 'Cross_Section' is the 'sigma' above. This is the derivation for
the instantaneous luminosity.
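The derivation above can be sketched in a few lines of Python (a sketch
only; the function name is illustrative and the unit values in the usage
note are chosen purely to make the arithmetic transparent):

```python
import math

def beam_luminosity(delta_indicator, delta_crossing,
                    crossing_rate, cross_section):
    """Instantaneous beam luminosity from the Indicator scaler increments.

    delta_indicator: crossings with at least one detected interaction
    delta_crossing:  total beam crossings in the same interval
    crossing_rate:   beam crossings per second (Hz)
    cross_section:   cross section of the monitored process (cm^2)
    Returns L_Beam in cm^-2 * s^-1.
    """
    p_gt0 = delta_indicator / delta_crossing    # P(x>0)
    mu = -math.log(1.0 - p_gt0)                 # mean interactions per crossing
    return crossing_rate * mu / cross_section   # L_Beam
```

With a unit crossing rate and unit cross section, the returned value
reduces to 'mu' itself, which is a convenient sanity check of the log
inversion.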
The Delta time interval for measuring 'Delta_Indicator' and
'Delta_Crossing' needs to be large enough to smooth out statistical
variations. It was empirically chosen to be 1 minute during the highest
luminosities of Run I.

Per-bunch scaling:
------------------

The Beam delivered by the Tevatron is made of multiple (159) individual
bunches of protons and antiprotons crossing at DZero. Measuring an
instantaneous luminosity at DZero requires keeping track SEPARATELY, then
SUMMING, the luminosity provided by each of the 159 BUNCHES. In the
calculation above, the average number of interactions per crossing 'mu'
is not typically constant across all the bunches. The calculation above
is only valid for individual bunches (or more precisely for proton and
antiproton bunch pairs crossing at DZero); the critical step was when we
equated the two expressions of 'mu'.

This per-bunch counting is necessary because the bunches are (typically)
NOT equally populated AND the expression for 'L_Beam' above is NOT linear
with respect to 'P(x>0)'. If either of these 2 conditions could be
avoided we would NOT need to implement per-bunch scaling to obtain the
instantaneous or integrated luminosity. The Indicator used during Run I
was a Level 0 vertex detection with a cross section so large that the
Indicator scaler was running at nearly the beam crossing rate. The
expression for 'L_Beam' was thus HIGHLY non-linear [from the term
loge(1-x) with x ~= 1] for the luminosity ranges seen at the end of
Run I.

    L_Beam = Summation( L_Bunch(i) )                           i = 1,159

    L_Bunch(i) = - (Bunch_Rate(i) / Cross_Section)
                   * loge( 1 - Delta_Indicator(i) / Delta_Bunch_Crossing(i) )

Luckily, we don't need to keep per-bunch accounting of 'Bunch_Rate(i)'
and 'Delta_Bunch_Crossing(i)'. These are simply one 159th of the global
numbers (all bunches combined) of beam crossings, 'BeamX_Rate' and
'Delta_BeamX'.
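The per-bunch summation can be sketched directly from the formula above
(a sketch only; the function name is illustrative, and the tiny two-bunch
numbers in the usage note are chosen to make the result easy to check by
hand):

```python
import math

def per_bunch_beam_luminosity(delta_indicator, delta_beamx,
                              beamx_rate, cross_section, n_bunches=159):
    """Sum the per-bunch luminosities L_Bunch(i).

    delta_indicator: list of per-bunch Indicator scaler increments
    delta_beamx:     total beam crossings (all bunches) in the interval
    """
    bunch_rate = beamx_rate / n_bunches             # Bunch_Rate(i)
    delta_bunch_crossing = delta_beamx / n_bunches  # Delta_Bunch_Crossing(i)
    return sum(-(bunch_rate / cross_section)
               * math.log(1.0 - d / delta_bunch_crossing)
               for d in delta_indicator)
```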
    L_Beam = - A * Summation( loge( 1 - Delta_Indicator(i)
                                        / (Delta_BeamX / 159) ) )  i = 1,159

    where A = (1/159) * (BeamX_Rate / Cross_Section)

'A' is a constant depending only on the indicator chosen
(BeamX_Rate = 7.6 MHz).

The units of instantaneous luminosity are cm^-2 * s^-1. Another unit used
in luminosity measurements is the barn (1 barn = 10^-28 m^2 = 10^-24
cm^2). With instantaneous luminosities at the Tevatron above 10*10^30
cm^-2 * s^-1, the more useful subunit is the inverse microbarn per second
(10^30 cm^-2 * s^-1) for instantaneous luminosities, and the inverse
nanobarn (10^33 cm^-2), inverse picobarn (10^36 cm^-2), or even inverse
femtobarn (10^39 cm^-2) for the luminosity integrated over a run, a day,
a store, or for the amount of data recorded on tape by the experiment.

Measuring integrated Run Luminosity delivered:
----------------------------------------------

Measuring the INTEGRATED LUMINOSITY over a long period of time (e.g. a
run) requires keeping track of the instantaneous luminosity at small time
intervals. The Luminosity is a function of time (and may fall even more
rapidly during Run II than it did during Run I). Computing the integrated
Luminosity thus requires keeping track of the instantaneous luminosity at
regular time intervals that are small compared to any variation in 'L'.
Here again, we could have gotten away with a single measurement at the
end of the run if 'L' had been linear with respect to P(x>0).

If, to be precise, we call 'Instantaneous_L_Beam' the instantaneous beam
luminosity that we simply called 'L_Beam' above, we get the integrated
beam luminosity 'Integrated_L_Beam' as:

    Integrated_L_Beam = Integral( Instantaneous_L_Beam(t) * dt )
                        t = 0, t = end of run

We approximate the integral with small time intervals 'T' ('T' going from
1 to 'N' to cover the time of the whole run being measured). Each
interval has a length 'delta_t(T)' during which the luminosity is
considered constant and stable.
    Integrated_L_Beam = Summation( Instantaneous_L_Beam(T) * delta_t(T) )
                        T = 1,N

If the time intervals are of constant length (this depends on the
implementation and hasn't been defined yet), delta_t(T) becomes a
constant delta_t:

    Integrated_L_Beam = delta_t * Summation( Instantaneous_L_Beam(T) )
                        T = 1,N

The time interval 'delta_t' needs to be determined, and should be equal
to or greater than the time interval that we use to determine a stable
instantaneous luminosity (i.e. the time used for 'Delta_BeamX' above).
This depends on how smooth the instantaneous luminosity is, and on its
rate of fall.

The Trigger Control Computer (TCC) can compute and serve the current
'Instantaneous_L_Beam' at any time, but DZero needs a run luminosity
database and proper server software to record and retrieve this
information (see further details below with specific trigger
luminosities).

Partial Summary:
----------------

Measuring instantaneous and integrated beam luminosity requires one set
of per-bunch scalers monitoring Indicator(i), and requires capturing
these scalers at (regular) time intervals to obtain Delta_Indicator(i).
The Trigger Control Computer (TCC) can collect the scalers at regular
time intervals (1 minute), apply the formula above, and derive the
instantaneous luminosity. TCC can then act as a server for this
luminosity information. DZero also needs a luminosity database to collect
this information and later serve run summary information, and needs to
retrieve this information and send it to the Accelerator Control Room.

Specific Trigger Luminosity:
----------------------------

Measuring the luminosity that a particular Specific Trigger is exposed to
means carrying out the same measurement as for beam luminosity, but
limited to the crossings that the Specific Trigger was exposed to.
Different Specific Triggers might not be able to get exposed to the same
beam crossings because they are enabled/disabled at different times by
the DAQ system (e.g.
different front-end busy requirements). Different Specific Triggers might
also not be interested in looking at the same crossings and may include a
restriction to a specific subset of the delivered Beam luminosity (e.g.
require a single interaction, or require the vertex at the center of the
detector).

The luminosity 'L_ST(n)' that a particular Specific Trigger 'n' is
exposed to is derived the same way as the beam luminosity, but the
calculation needs to be restricted to the beam crossings that the
Specific Trigger was exposed to, occurring at the rate
'ST_exposed_Rate(n,i)':

    L_ST(n) = - Summation( (ST_exposed_Rate(n,i) / Cross_Section)
                 * loge( 1 - Delta_ST_exposed_AND_Indic(n,i)
                             / Delta_ST_exposed(n,i) ) )        i = 1,159

We can monitor 'ST_exposed_Rate(n,i)' with 159 per-bunch scalers
'ST_exposed(n,i)'. There is no guarantee that 'ST_exposed(n,i)' is
uniformly distributed across all the bunches (i.e. it is NOT simply one
159th of the overall 'ST_exposed(n)'). In fact it is clear that
'ST_exposed(n,i)' may be correlated to the bunch luminosity profile (i.e.
to the other ST_exposed(n,j<>i)). For example, a bunch with high
luminosity will have a higher probability of generating an L1 Trigger
Accept, thus causing DAQ dead-time for several bunches directly following
the higher luminosity bunch, and causing a non-uniform distribution of
ST_exposed(n,i). We could describe this as a "shadowing" of less
populated bunches by more populated ones.

    ST_exposed_Rate(n,i) = Delta_ST_exposed(n,i) / Delta_time
                         = Delta_ST_exposed(n,i) / (Delta_BeamX / BeamX_Rate)

We could monitor 'Delta_ST_exposed_AND_Indic(n,i)' with another set of
159 per-bunch scalers counting the occurrences of (ST_exposed(n,i) AND
Indicator(i)). To avoid implementing this second set of per-bunch
scalers, we need to assume that the process driving Indicator(i) and the
'ST_exposed(n,i)' signal are de-correlated.
Once de-correlated, we can simply multiply 'Delta_Indic(i)' (already
scaled for L_Beam) by the fraction of 'Delta_ST_exposed(n,i)' compared to
'Delta_Bunch(i)':

    Delta_ST_exposed_AND_Indic(n,i) = Delta_Indic(i)
                                      * Delta_ST_exposed(n,i) / Delta_Bunch(i)

Note that this expression makes 'Delta_ST_exposed(n,i)' cancel out in
'L_ST(n)'. A quicker way to reach this point is to notice that the term
inside the log was measuring P(x>0), and this probability doesn't change
once we restrict ourselves to the beam crossings for which the Specific
Trigger is exposed.

    L_ST(n) = - Summation( (Delta_ST_exposed(n,i) * BeamX_Rate)
                           / (Cross_Section * Delta_BeamX)
                 * loge( 1 - Delta_Indic(i) / Delta_Bunch(i) ) )   i = 1,159

or

    L_ST(n) = - A * Summation( (Delta_ST_exposed(n,i) / (Delta_BeamX/159))
                 * loge( 1 - Delta_Indic(i) / (Delta_BeamX/159) ) ) i = 1,159

This summation over all the bunches to get 'L_ST(n)' is the same as the
summation to get 'L_Beam', with the only difference being the weighting
ratios 'a(n,i)', the fraction of ST_exposed for each bunch:

    L_ST(n) = Summation( a(n,i) * L_Bunch(i) )                     i = 1,159

One key point here was the assumption that 'ST_exposed(n,i)' and
'Indicator(i)' are de-correlated. This property is NOT in contradiction
with the across-bunch correlation mentioned above. The point here is
that, for each crossing of one particular bunch, the state of the
ST_exposed signal is NOT dependent on the occurrence or non-occurrence of
the Indicator signal for this bunch. This seems to be a safe assumption,
as the state of ST_exposed right before the crossing is a preset
condition with no causal relation to the next bunch crossing about to
occur. Another view is that the ST_exposed state is dependent on the
luminosity in the previous bunches (and maybe even on the PREVIOUS
crossings of this particular bunch) but not on the current crossing of
this particular bunch.
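The exposure-weighted summation can be sketched as follows (a sketch
only; the function name is illustrative, and the two-bunch unit-constant
example in the usage note is chosen so the result can be checked by
hand):

```python
import math

def specific_trigger_luminosity(delta_st_exposed, delta_indicator,
                                delta_beamx, beamx_rate,
                                cross_section, n_bunches=159):
    """L_ST(n): per-bunch sum weighted by the trigger's exposed fraction,
    assuming ST_exposed and Indicator are de-correlated within each bunch.

    delta_st_exposed: per-bunch ST_exposed(n,i) scaler increments
    delta_indicator:  per-bunch Indicator(i) scaler increments
    delta_beamx:      total beam crossings (all bunches) in the interval
    """
    a = (1.0 / n_bunches) * beamx_rate / cross_section
    per_bunch = delta_beamx / n_bunches           # Delta_BeamX / 159
    return -a * sum((e / per_bunch) * math.log(1.0 - d / per_bunch)
                    for e, d in zip(delta_st_exposed, delta_indicator))
```

If a trigger is exposed to every crossing, the weights are all 1 and the
result reduces to the per-bunch beam luminosity sum; halving the exposure
of every bunch halves the result, as expected from the weighting.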
Partial Summary:
----------------

Measuring instantaneous and integrated Specific Trigger luminosities
requires only one additional set of per-bunch scalers monitoring
'ST_exposed(n,i)' per Specific Trigger, and we must capture these scalers
at (regular) time intervals to obtain 'Delta_ST_exposed(n,i)'.

Alternate luminosity measurements:
----------------------------------

Some ideas were already proposed during Run I for measuring high
luminosities by a different method (e.g. monitoring the energy in a
Trigger Tower perpendicular to the beam axis). One could imagine that
DZero would want several concurrent measurements of the luminosity. Some
measurements might be tailored to low or high luminosity, and not all
Specific Triggers might want to use the same measurements. Allowing for
multiple measurements of the luminosity requires only one additional set
of per-bunch scalers for each new luminosity measurement indicator to
obtain the delivered beam luminosity AND all Specific Trigger
luminosities.

Implementation problem:
-----------------------

The Level 1 Framework would need to keep track of the luminosity
indicator for each bunch (up to 159) and for each Specific Trigger, for a
total of 128 * 159 ~= 20,000 scalers. This large number is considered too
costly for implementation in hardware and for hardware maintenance, as
well as for reading all these scalers at regular time intervals.

Proposed Solution:
------------------

Nothing can be done about the bunch multiplicity factor of 159 (unless we
could find an indicator that is "nearly linear" with the luminosity over
the range of interest). But we can attack the other factor (128) if we
can require common settings between some of the Specific Triggers and
then enforce a limit on the number of different settings we need to
monitor.
A compromise would be to have a small number of groups (4 or 8) of
Specific Triggers that are individually monitored, while still allowing
recovery of the luminosity information for all the Specific Trigger
members of each group. Forcing Specific Triggers into groups with common
deadtime for each beam crossing does not seem immediately
straightforward, as some simple and essential deadtime factors like
prescaling are inherently going to force different deadtimes for each
prescaled Specific Trigger.

The reason we had to carry the Specific Trigger luminosity computation on
a per-bunch basis was that the ST_exposed signal is NOT evenly
distributed among Beam bunches. ST_exposed is, on the contrary,
correlated with the bunch luminosity profile. However, NOT ALL the
individual contributions to the disabling of Specific Triggers are
correlated to the bunch luminosity profile. For example, the prescaler
disable signal can be implemented as random (or as egalitarian) with
respect to individual bunches and thus be de-correlated from the bunch
luminosity profile. Sources of Specific Trigger Disable can thus be
classified as correlated or de-correlated to the bunch luminosity
profile.

The compromise presented below in the form of Specific Trigger Luminosity
Groups is to limit to a small manageable number (4 or 8) the total of
different combinations of all the CORRELATED sources of Specific Trigger
disable, and to accurately keep track for each Specific Trigger Exposure
Group of their global contribution on a per-bunch basis (i.e. 159 scalers
for each Group). The other part of the compromise is to allow each
Specific Trigger to have totally independent settings (and thus different
deadtime) for all other DE-CORRELATED sources of disable, and to
separately keep track for each Specific Trigger of the global
contribution of those sources of disable (i.e. one scaler per Specific
Trigger).
List of contributions to the Specific Trigger Enable/Disable:

 1) All sources of DAQ disable
        COOR enable/disable
        Geographic Section disable
        Prescaler
        Level 2 Disable (if any)
        Level 3 Disable (probably needed but not yet defined)
        Individual Specific Trigger Disables
        Global Specific Trigger Disables
        Autodisable

 2) All other Exposition Restrictions wired as And-Or Terms, e.g.:
        Background rejection
        Detector recovery
        Blanking

Classification:

 1) Some elements are DE-correlated from the bunch luminosity
    distribution
        COOR enable/disable (because it is very slow)
        Prescaler (not by definition, but can be implemented that way)
        Level 2 Disable (not a fixed delay from L1 Accept)
        Level 3 Disable (not a fixed delay)
        some sources of Global Specific Trigger disable
            (some are not, cf. below)
        all Individual Specific Trigger Disables (...or this method fails)
        Autodisable (very slow)

 2) Some elements are (or can be) correlated to the bunch luminosity
    distribution
        Geographic Section disable (goes busy after a Trigger Accept)
        all And-Or Exposition Restrictions (by definition)
        some sources of Global Specific Trigger disable
            (e.g. skip next crossing after a Trigger Accept)

Definition:
-----------

Specific Trigger Luminosity Groups (STLG), also called Specific Trigger
Exposure Groups (STEG) or Specific Trigger Readout and Exposition
Restriction Groups (STRERG), are sets of Specific Triggers with

        a) common And-Or Exposition Restriction requirements
    and b) common Geographic Section Readout requirements
    and c) common sources of correlated Global Sp Trg Disables

Programming:

The Level 1 Trigger Framework will let COOR specify a number (4 or 8) of
Specific Trigger Luminosity Groups. Each Group is defined by

    - its set of And-Or requirements for Exposure Restrictions
    - its set of Geographic Sections whose Front-End Busy must be obeyed
    - its set of correlated Global Specific Trigger Disable signals.

Each Specific Trigger is then programmed to belong to one and only one
Specific Trigger Luminosity Group.
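The group definitions and the one-group-per-trigger rule above could be
represented as follows (a sketch only; all the class, field, and function
names here are hypothetical illustrations, not part of the COOR or
Framework design):

```python
from dataclasses import dataclass

@dataclass
class LuminosityGroup:
    """One STLG definition (field names are illustrative)."""
    name: str
    andor_exposure_terms: frozenset = frozenset()
    geographic_sections: frozenset = frozenset()
    correlated_global_disables: frozenset = frozenset()

@dataclass
class SpecificTrigger:
    number: int      # 0..127
    group: str       # name of its one and only Luminosity Group

def assign_groups(triggers, groups):
    """Check that every Specific Trigger names exactly one known group."""
    known = {g.name for g in groups}
    for t in triggers:
        if t.group not in known:
            raise ValueError(f"trigger {t.number}: unknown group {t.group!r}")
    return {t.number: t.group for t in triggers}
```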
The Specific Trigger obeys the exposure restrictions defined by the
Luminosity Group that it belongs to. Those sources of exposure
restriction (i.e. of Specific Trigger Disable) are the only sources of
Specific Trigger Disable correlated to the bunch luminosity profile that
the Specific Trigger is subject to. All other sources of Specific Trigger
disable applying to each Specific Trigger have to be de-correlated from
the bunch luminosity profile.

Specific Trigger Luminosity Computation:
----------------------------------------

We derived above:

    L_ST(n) = Summation( a(n,i) * L_Bunch(i) )                  i = 1,159

with 'a(n,i)' the fraction of ST_exposed for Specific Trigger 'n' and
bunch 'i'.

Let 'LG_correlated_a(g,i)' be the fraction of Beam crossings that the
Specific Trigger 'n' is exposed to, but only including the contribution
from all sources of correlated disable that constitute the Luminosity
Group 'g' that the Specific Trigger 'n' belongs to. Let
'ST_decorrelated_a(n)' be the fraction of Beam crossings that the
Specific Trigger 'n' is exposed to, but only including the contribution
from all sources of de-correlated disable for that Specific Trigger. Note
that these two fractions are not exclusive (they overlap) and are
de-correlated from each other for each bunch, so that they can be
multiplied to obtain the overall 'a(n,i)' from above:

    a(n,i) = ST_decorrelated_a(n) * LG_correlated_a(g,i)

and

    L_ST(n) = ST_decorrelated_a(n) * L_Group(g)

with

    L_Group(g) = Summation( LG_correlated_a(g,i) * L_Bunch(i) )  i = 1,159

We see that the summation term only involves the Luminosity Group, and
that the luminosity quantity for each Specific Trigger can be obtained by
applying one simple correction to the proper Luminosity Group. Let's call
'Group_exposed(g,i)' the per-bunch scalers for the Specific Trigger
Exposed signal for each Luminosity Group 'g', obtained by a logical AND
of all the individual sources of correlated disable of this Luminosity
Group.
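The two-step computation — a per-bunch sum per Luminosity Group, then one
scalar correction per Specific Trigger — can be sketched as follows (a
sketch only; the function names are illustrative, and the usage note uses
unit constants and two bunches to keep the arithmetic checkable):

```python
import math

def group_luminosity(delta_group_exposed, delta_indicator, delta_beamx,
                     beamx_rate, cross_section, n_bunches=159):
    """L_Group(g) from the per-bunch Group_exposed and Indicator scalers."""
    a = (1.0 / n_bunches) * beamx_rate / cross_section
    per_bunch = delta_beamx / n_bunches           # Delta_BeamX / 159
    return -a * sum((g / per_bunch) * math.log(1.0 - d / per_bunch)
                    for g, d in zip(delta_group_exposed, delta_indicator))

def st_luminosity(delta_st_decorrelated_exposed, delta_beamx, l_group):
    """Scale the group luminosity by the de-correlated exposed fraction."""
    return (delta_st_decorrelated_exposed / delta_beamx) * l_group
```

Note that only `group_luminosity` needs per-bunch scalers (159 per
group); the per-trigger correction is a single ratio, which is the whole
point of the compromise.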
Let's call 'ST_decorrelated_exposed(n)' the scaler for the Specific
Trigger 'n' Exposed signal, obtained by a logical AND of all
de-correlated sources of disable for this Specific Trigger. Then:

    L_Group(g) = - A * Summation( (Delta_Group_exposed(g,i)
                                   / (Delta_BeamX/159))
                   * loge( 1 - Delta_Indic(i) / (Delta_BeamX/159) ) )
                                                                i = 1,159

and

    L_ST(n) = (Delta_ST_decorrelated_exposed(n) / Delta_BeamX) * L_Group(g)

Measuring Specific Trigger Integrated Run Luminosity:
-----------------------------------------------------

Measuring the Specific Trigger integrated Luminosity over a long period
of time (e.g. a run) requires keeping track of the Specific Trigger
instantaneous luminosity at small time intervals. As explained above, the
instantaneous Beam Luminosity is a function of time, falling somewhat
smoothly during the accelerator store. The Specific Trigger Luminosity
will probably be more chaotic because of fluctuations in the deadtime
caused by any of the components of the DAQ system (front-ends, L1, L2,
L3, Data Logger). Computing the Specific Trigger integrated Luminosity
might thus require keeping track of the Specific Trigger instantaneous
luminosity at time intervals smaller than what is required for Beam
luminosity alone, in order to catch all the fluctuations.

If, to be precise, we call 'Instantaneous_L_ST(n)' the instantaneous
Specific Trigger luminosity that we simply called 'L_ST(n)' above, we get
the integrated Specific Trigger luminosity 'Integrated_L_ST(n)' as:

    Integrated_L_ST(n) = Integral( Instantaneous_L_ST(n,t) * dt )
                         t = 0, t = end of run

As for the Beam luminosity above, we use the same small time intervals
'T' of length 'delta_t(T)'.
    Integrated_L_ST(n) = Summation( Instantaneous_L_ST(n,T) * delta_t(T) )
                         T = 1,N

If the time intervals are of constant length:

    Integrated_L_ST(n) = delta_t * Summation( Instantaneous_L_ST(n,T) )
                         T = 1,N

The time interval 'delta_t' needs to be determined, and should probably
be equal to the time interval that we use to determine a stable
instantaneous luminosity (i.e. the time used for 'Delta_BeamX' above) in
order to catch as much of the fluctuations as possible.

The Trigger Control Computer (TCC) can compute and serve the current
'Instantaneous_L_ST(n)' at any time, but DZero ADDITIONALLY needs a run
luminosity database and proper server software to record and retrieve
this information. DZero needs to determine what level of information it
wants to retrieve from TCC and what level of information the experiment
wants to record in the run luminosity database. The lowest level would be
the individual per-bunch scaler increments (i.e. the 'Delta_BeamX',
'Delta_Indicator(i)', 'Delta_Group_exposed(g,i)', and
'Delta_ST_decorrelated_exposed(n)' above). The next level would be to let
TCC (or some other code that writes to the database) do the math to
derive and write 'Instantaneous_L_Beam' and 'Instantaneous_L_ST(n)'. The
next level would be for TCC to compute and keep incrementing the
integrated run luminosities 'Integrated_L_Beam' and 'Integrated_L_ST(n)'.

TCC will certainly do the math for the instantaneous luminosities anyway
(as in Run I) in order to feed online monitoring programs (like TRGMON,
or the feedback to the Accelerator Division), but it is an independent
decision to choose what to record in the database. Recording all the
basic information gives the ability to re-derive the quantities later, in
particular if one is not confident that all contributions and corrections
have been taken into account. The downside is the volume of data to
record.
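The constant-interval accumulation above (used for both Integrated_L_Beam
and Integrated_L_ST) amounts to a running sum; a sketch, with the class
name and the inverse-microbarn helper as illustrative additions:

```python
class LuminosityIntegrator:
    """Accumulate an integrated luminosity from fixed-length intervals.

    delta_t is the capture interval in seconds (e.g. 60 s as in Run I).
    Instantaneous luminosities are in cm^-2 * s^-1; the running integral
    is in cm^-2.
    """
    def __init__(self, delta_t):
        self.delta_t = delta_t
        self.integral = 0.0

    def add_interval(self, instantaneous_l):
        # Integrated_L += Instantaneous_L(T) * delta_t
        self.integral += instantaneous_l * self.delta_t

    def inverse_microbarn(self):
        # 1 inverse microbarn = 10^30 cm^-2 (see the units discussion above)
        return self.integral / 1e30
```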
Conclusion:
-----------

The Run II Level 1 Trigger Framework will be instrumented with the
appropriate scalers (as described above) to fully provide online
measurements of the instantaneous beam luminosity and of the 128
individual Specific Trigger instantaneous luminosities. DZero needs to
implement a database to record this online Level 1 luminosity
information. The exact structure and responsibilities of this database
and its supporting server programs, and the details of the information
served by TCC, remain to be defined.