20-APR-1995

What should be read out
-----------------------
    per event
    begin/end run
    monitoring

This is an initial collection of ideas. What do we put in the data block
that is read out with every event, and how do we select it? This discussion
is probably most important for the frameworks; it might be simpler to think
of what to do for the L1 Caltrig.

There are three basic ways/occasions we get data out of the trigger system
(not counting the register read/write from TRICS, or diagnostics and test
software):

 - data block readout
     systematic with each event
     the data always stays with the event
     all data contemporary with the event
     affects the event size and the maximum DAQ rate

 - monitoring information
     somewhat irregularly updated on TCC
     only served to the user on explicit demand (via TRGMON)
     interpreted and displayed on TRGMON screens and by a global monitoring
       program
     not recorded; captured "once in a while" (once per run?) by the captain

 - begin/end run files
     explicitly and systematically requested at begin and end of run
     also at begin and end of stores
     also during forced PAUSEs (e.g. an alarm)
     read by the Luminosity and store keeper programs
     the files are saved


data block readout
------------------

We can include some beam crossing history:
    the current crossing that caused the positive trigger decision
    one or more previous crossings (if not the first crossing in the
      superbunch)
    the next crossing (if not the last crossing in the superbunch)

This could apply to:
    input quantities (e.g. L1 Caltrig input energies)
    input states (e.g. andor terms)
    scalers

Who are the customers, and what would they like to see?

    Physics, or further filtering
        write all results out (global energies, seeds)
        current/previous/next

    Reconstruction
        write all input quantities
        write all input states
        some internal history (e.g. the current state of a prescaler gate)

    Verification
        write all outputs: states, decisions

    Test, diagnostics, checks, consistency
        write all inputs + outputs
        any duplicate data
        current/previous/next

Monitoring Data Block Spy:

    Will it be possible to implement a similar concept for Run II?
    The main characteristics of this function are:
        grabs a copy of a data block during readout (in full, all parts
          contemporary)
        does not affect the readout (content) of the event captured
        does not affect the readout rate
        is accessible to TCC for inclusion in the monitoring pool
        can still force an event when the acquisition is broken

    Type of data:
        Inputs
            quantities (energies, L0 Vertex...)
            enable/disable specific trigger states
            front-end busy states
            andor terms
        Intermediate results
            prescaler gate states
        Outputs
        Scalers

Notes:

 1) Reading out more (too much) data means:
     - a longer dead time before the Level 1 framework is ready to start
       making new decisions (transfer to SAR)
     - a lower maximum trigger rate (a higher level of front-end busy)
     - a bigger event size

 2) As far as I know, the specific trigger enable and specific trigger fired
    counts have not been of much use. When we dump an event, we might look
    at these scalers to get a feeling for the specific trigger rate, or the
    dead time, but this information could be obtained (with more difficulty,
    and with a special utility that doesn't exist) from the end run files.
    The only direct use we ever had was a consistency check comparing the
    current and previous values of these scalers.


monitoring data
---------------

The monitoring data should contain:
    1) the data from the event readout
    2) any additional scalers or states
       (shouldn't they all be included in the event readout?)

We need a forced synchronous load of all scalers, controlled by TCC on
demand. It might also be desirable to request that this synchronous load be
done for the next crossing with a positive trigger decision, so that all the
monitoring data is contemporary.
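A minimal sketch of how such a TCC-controlled synchronous snapshot might be
expressed, assuming a broadcast "load" strobe that latches every scaler on
the same crossing; all of the type, function, and register names below are
hypothetical, introduced only to illustrate the idea:

    /* Hedged sketch: TCC-requested synchronous scaler snapshot.          */
    /* Every name here is hypothetical; it only illustrates latching all  */
    /* scalers on one crossing and copying them into the monitoring pool. */

    #include <stdint.h>

    #define N_SCALERS 256                /* assumed number of scalers     */

    typedef enum {
        SNAP_IMMEDIATE,                  /* latch on the next crossing    */
        SNAP_ON_NEXT_TRIGGER             /* latch on the next crossing    */
                                         /* with a positive L1 decision   */
    } snap_mode_t;

    typedef struct {
        uint32_t crossing_number;        /* crossing at which all scalers */
                                         /* were latched together         */
        uint32_t scaler[N_SCALERS];      /* contemporary scaler values    */
    } scaler_snapshot_t;

    /* Hypothetical hardware access: a broadcast strobe makes every scaler
       copy its running count into a holding register on the same crossing;
       the holding registers can then be read back at leisure.            */
    extern void     broadcast_scaler_load(snap_mode_t mode);
    extern uint32_t read_latched_crossing_number(void);
    extern uint32_t read_scaler_holding_register(int index);

    void tcc_request_scaler_snapshot(snap_mode_t mode, scaler_snapshot_t *out)
    {
        int i;

        broadcast_scaler_load(mode);     /* all scalers latched at once   */
        out->crossing_number = read_latched_crossing_number();
        for (i = 0; i < N_SCALERS; i++)  /* readback itself need not be   */
            out->scaler[i] = read_scaler_holding_register(i);
    }

The snapshot could then be copied into the monitoring pool served to TRGMON,
so that every value displayed together was counted up to the same crossing.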
Since the Level 1 Monitoring Program(s) will most likely need to ask for,
and access, a copy of "the next event", we need to implement a mechanism
that allows TCC to request and obtain this data (cf. the Run I DB Spy).

After a positive trigger decision, it is envisioned that the Level 1 trigger
framework will "pause" for some number of microseconds while all the Level 1
cards that hold information needed for readout ship their data to a set of
SAR modules. These SAR modules will implement the full Level 1 and Level 2
buffering protocol and the high speed readout, as specified for front-end
buffering and transfer to Level 2 and/or Level 3. To implement this initial
data transfer (from the Level 1 cards to the Level 1 SAR modules) no long
term buffering is needed, since no other Level 1 decision will be made
before the transfer is completed.

How do we tap into this data?

 1) Should we provide an extra buffer (separate from the high speed readout)
    for a parallel and synchronous load of all the data (states, energies,
    etc.) that is being shipped out?

 2) Alternatively, can we expect to get the data back from the high speed
    readout system, in or after the SAR modules?

Remember that
    a) we would like to get the data before it gets filtered by Level 2, and
    b) we would like to get it consistently, even while the rest of the DAQ
       system is hung.

"Random crossing" vs "next event" (with a random crossing taken when no
event is flowing): what are the relative advantages? Do we need both, or is
the "next event" enough (as was done in Run I)?


begin/end run files
-------------------

The current (Run I) policy is to read out EVERY scaler we own and make them
all available in an RCP file written to the host. This includes the short
(SBSC) and long (DBSC) scalers. These files are read to extract luminosity
and dead time information.

There are more than just begin and end run files, but the content is
identical; only the name is different:
    Begin Store files
    End Store files
    Begin Run files
    End Run files
    Pause Run files (requested when an alarm pauses the run)
    Resume Run files

There is probably no need for change.
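To illustrate why dumping every scaler at the run boundaries is enough for
the dead time extraction, here is a minimal sketch. It assumes one scaler
counting all beam crossings and one counting "live" crossings (those for
which the framework was not busy); the names, the struct, and the numbers in
the example are made up, and the actual SBSC/DBSC contents and RCP file
format are not shown.

    #include <stdio.h>

    typedef struct {
        double beam_crossings;   /* scaler counting all beam crossings    */
        double live_crossings;   /* scaler counting crossings with the    */
                                 /* framework live (not front-end busy)   */
    } scaler_dump_t;             /* one begin-run or end-run snapshot     */

    /* Dead time fraction over the run = 1 - (live delta) / (total delta) */
    double dead_time_fraction(const scaler_dump_t *begin,
                              const scaler_dump_t *end)
    {
        double total = end->beam_crossings - begin->beam_crossings;
        double live  = end->live_crossings - begin->live_crossings;

        if (total <= 0.0)
            return 0.0;          /* nothing counted: nothing to report    */
        return 1.0 - live / total;
    }

    int main(void)
    {
        /* Made-up numbers, just to show the arithmetic. */
        scaler_dump_t begin = { 1.0e9, 0.95e9 };
        scaler_dump_t end   = { 3.0e9, 2.80e9 };

        printf("dead time fraction = %.3f\n",
               dead_time_fraction(&begin, &end));   /* prints 0.075       */
        return 0;
    }

The same difference-of-dumps arithmetic can be applied between any pair of
these files (e.g. Begin/End Store, or across a Pause/Resume), since their
content is identical.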