This was derived from material prepared for the L1Cal NIM06 article.

Run IIb L1Cal Online Control
============================
11-Dec-2006

1. Goals
========

The design of the Run IIb L1Cal Trigger was motivated by some specific
algorithms (cf. http://www.nevis.columbia.edu/~evans/l1cal/algos/sw_algos.html),
but building a system implementing those algorithms too literally, or solely
driven by the thresholds and parameters determined by Monte Carlo analysis,
would produce a rigid solution, cumbersome to adapt to refinements or to new
requirements. The system cannot implement only the high luminosity triggering
requirements without also being flexible enough to support commissioning and
test activities, with their own specific needs.

Designing a versatile and flexible system starts with making thresholds and
parameters accessible, including the option to enable or disable features.
These features are accessible via control registers, all programmable from
external computer software, instead of being fixed inside the FPGA firmware.
This avoids needing to recompile the firmware every time a threshold is
adjusted, or having to maintain several concurrent versions of this firmware
for different luminosity or testing conditions. Furthermore, the act of
programming these control registers is a dynamic run-time process, instead of
a static sequential list of IOs performed in a fixed startup sequence. This
avoids needing to involve a system expert for every minor threshold change or
every new test.

The primary goal of the L1Cal Online Control software is to hide all the
complexity of the underlying hardware, while making run time programming of
the L1Cal Trigger possible and accessible to all DZero users in simple and
logical terms.

2. Centralized Control (i.e. COOR)
==================================

All components of the DZero trigger and DAQ system are configurable and
programmable, forming a large set of resources and parameters needing to be
configured before -- or in order to -- start collecting data. One can
distinguish two main modes of operation, corresponding to the presence or
absence of collisions in the Tevatron.

During the period of time called a store, when the Tevatron is circulating
beams of protons and antiprotons, the trigger and DAQ systems are configured
to collect specifically chosen events for physics analysis. During such
periods, the programming of the trigger and DAQ system is driven by a complete
and carefully managed list of requirements and parameters making optimal use
of all available resources. This set of requirements is called a trigger list
and achieves a compromise between the different physics goals to optimize the
selection process, while remaining within the available bandwidth limitations,
as a function of the instantaneous delivered luminosity. This trigger list
includes a specification of which L1Cal resources are being used, and the
value of all associated parameters and thresholds.

The Tevatron does not deliver beam continuously, and needs some setup time
between the end of one store and the beginning of the next. Maintenance or
repairs also account for accelerator down times, during which the experiment
does not remain idle. The desire or necessity to collect data is not limited
to beam physics data, as the different detector and trigger system maintainers
make use of down time to calibrate, check, repair, or update their systems.
The need for L1Cal triggering services does not stop either.
L1Cal experts may work on calibration or a new feature before it can be used
in a future trigger list, and other detector groups may need the L1Cal Trigger
to select events and diagnose a source of noise. Even if no test or
commissioning activity is taking place, the best way for the experiment to
wait for the next store is to keep all systems operational and collect data at
low rate, triggered by cosmic rays or random sampling (zero bias), thus
exercising the readiness of the whole chain of detector, front end
electronics, trigger, DAQ, data logging, and monitoring before the next store.

Whether there is one overall trigger requirement for beam physics, cosmic, or
zero bias running, or multiple independent and less orchestrated requirements,
all configuration and programming requests are coordinated by one central
DZero application called COOR. The global trigger list is submitted to COOR
during beam physics, while multiple users may be submitting simpler separate
requests to focus on particular tasks. COOR is the central originating point
connecting and sending requests to all the distributed subsystems to be
controlled. COOR thus unifies all these various subsystems to present one
common interface, for the trigger list or for the individual users.

ref: http://d0server1.fnal.gov/www/online_computing/documents/Coor/coorover_20051014.pdf

The Run IIb L1Cal system was designed and built in two parts (the ADF crates
vs. the TAB and GAB crate) by two separate university groups, but these two
halves operate together to form one complete L1Cal trigger system, and are
thus presented as one unified device to the rest of the experiment via one
common interface to COOR.

3. L1Cal Trigger Control Computer
=================================

The L1Cal Trigger Control Computer (TCC) is the computing platform common to
the two halves of the L1Cal system and used to program the L1Cal Trigger
hardware. Following the Run IIa model, this computer provides a high level
interface between COOR (or L1 experts) and the L1Cal system. Text commands
describing the triggering resources in simple terms are translated into one or
many register IOs to one or more of the ADF, TAB or GAB cards. The control
computer hides the details and complexity of the hardware implementation to
only expose its functionality in the simplest possible form. The control
computer makes thresholds and control parameters accessible to COOR and thus
to the trigger list and other trigger users.

The L1Cal Control Computer is used to prepare the L1Cal system for data
taking, but it must be noted that it does not participate in Trigger Tower
signal processing nor in the triggering algorithms, and does not contribute to
event readout.

The Trigger Control Computer used to control the L1Cal system is a standard PC
running the Linux operating system and is part of the DZero online computing
cluster. L1Cal TCC needs to access the various control registers of the L1Cal
hardware, as described next. This control path makes the L1Cal TCC the closest
and fastest point of access to all the L1Cal hardware (e.g. 5 us to write a
control register on an ADF card), and thus best situated for IO intensive
tasks.

4. L1Cal Control Path
=====================

The L1Cal Trigger Control Computer needs to access the eighty ADF cards in
their four 6U VME crates, the eight TAB cards and one GAB card in one 9U
custom crate, and the readout support cards in one 9U VME crate. The L1Cal TCC
uses a commercial bus interface to the VME bus architecture.
The Model 618 bus adaptor from SBS Technologies, recently acquired by GE Fanuc
Embedded Systems (http://www.gefanucembedded.com), provides a very flexible
hardware and programming interface to access one remote VME bus. The Model 618
adaptor consists of one PCI module located inside the PC and one VME card
located in the remote crate, linked by an optical cable pair. The VME module
of the Model 618 bus adaptor is placed in a separate 9U VME crate called the
Communication Crate. The Communication Crate hosts the additional VME bus
interfaces needed to access the ADF and TAB/GAB hardware as well as the other
support cards needed by L1Cal.

Figure: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/drawings/l1caliib_communication.gif

To access the four ADF crates, the L1Cal system uses a set of Vertical
Interconnect modules built by Fermilab and already used in other subsystems at
DZero.

Ref: http://www-linac.fnal.gov/LINAC/hardware/vmesys/boards/vi/viInfo.html

The Vertical Interconnect (VI) cards come in two types. One VI Master Card is
located in the Communication Crate, and is connected to four VI Slaves, one in
each ADF Crate. The VI Master maps the VME A24 address spaces of the four
remote ADF crates onto four contiguous segments of VME A32 addresses in the
Communication Crate. The user software running on the L1Cal TCC generates VME
A32/D16 cycles in the Communication Crate, while the VI Master and the VI
Slave transfer and transform these requests into VME A24/D16 cycles in the
targeted remote crate. The VME cycle completion status and the data read back
are returned to the Communication Crate, and thus to the L1Cal software. The
Communication Crate hosts one additional VI Master to access one more VI
Slave, located in the L1Cal Readout Crate.

The Communication Crate is also home to the VME/SCL card with its serial links
connected to each of the eight TAB cards and the GAB card in their custom
crate. The user software running on L1Cal TCC generates VME A24/D32 cycles to
the VME/SCL, which in turn generates a serialized transaction directly to the
targeted TAB or GAB module.
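As an illustration of the address arithmetic implied by the VI mapping
described above, the following minimal Python sketch computes the A32 address
seen from the Communication Crate for a given ADF crate and A24 register
address. The base address of the VI window is a hypothetical value chosen for
the example; the actual base is defined by the VI Master setup.

  # Each remote ADF crate's 16 MB A24 address space appears as one contiguous
  # 16 MB segment of A32 addresses in the Communication Crate.
  A24_SPACE_SIZE = 0x1000000      # 16 MB, the full VME A24 address space
  VI_A32_BASE    = 0x08000000     # hypothetical A32 base of the VI window

  def adf_a32_address(crate_index, a24_address):
      """Map (ADF crate 0..3, A24 register address) to the A32 address
      generated by the TCS software in the Communication Crate."""
      assert 0 <= crate_index < 4
      assert 0 <= a24_address < A24_SPACE_SIZE
      return VI_A32_BASE + crate_index * A24_SPACE_SIZE + a24_address

  # e.g. a register at A24 address 0xA010 in ADF crate 2 would be reached
  # at A32 address 0x0A00A010 under these assumptions.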
5. L1Cal Control Software Interfaces
====================================

The functionality required from the control software on L1Cal TCC is defined
by three interfaces.

5.a. COOR Interface
-------------------

The COOR interface can be considered the central mission for L1Cal TCC, as it
allows COOR to configure the system to implement the desired Trigger List and
select the desired data for physics analysis. The protocol and syntax
definition for the COOR interface is an extension of the similar syntax and
protocol used to program the Run IIa L1Cal.

Two computer programs communicate across the COOR interface, but a feature of
this interface is to be based on human friendly text messages describing the
programmable resources in terms of logically named entities with programmable
properties, using numerical indices and familiar units. These messages never
refer to individual ADF or TAB cards, card locations, register addresses, or
register contents. Topological information is specified in terms of Trigger
Tower indices, directly proportional to pseudo-rapidity and azimuthal angles,
and energy thresholds are specified in GeV units. These messages are normally
not interpreted or even seen by a human, unless the system is being tested, or
until some problem needs to be investigated, but all messages remain
accessible in log files and remain an easily understandable trace of the
running conditions at a particular time. The added effort to encode and decode
COOR requests into text messages is also helpful in commissioning and testing
of the system.

The full protocol and syntax is available here
http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/coor/
and some examples are covered in sections 8.b and 8.c.

5.b. L1Cal Expert Interface
---------------------------

During normal operation, with or without beam, the COOR interface is the main
control path into the system for the typical system user, but the control
software also offers direct access for the L1Cal experts, e.g. to run
calibration and diagnostics tools, and most extensively during the initial
commissioning of the system. The L1Cal expert is offered low level direct
access to the card registers, and can configure FPGA firmware. This interface
is provided in the form of a point-and-click set of dialogs and menus which
supports scripting via command files to set up particular tests (cf. section
6).

5.c. Monitoring Interface
-------------------------

The Monitoring interface is used to watch the operation of the L1Cal system.
The ADF, TAB, and GAB provide registers to capture event information, or
report on operational status. The L1Cal Control Software is responsible for
the retrieval of this monitoring information, while L1Cal TCC is not the
ultimate consumer or presenter of that information. The L1Cal control software
collects and manages the monitoring information (cf. section 8.e) while
external monitoring clients request fresh samples of monitoring information
for analysis and display.

(ref. http://www.nevis.columbia.edu/~evans/l1cal/hardware/monitoring/TCC_monitoring_v02.pdf)

6. GUI vs Engine
================

The Run IIb L1Cal software is based on the Run IIa L1Cal software, which was
developed on Microsoft Windows NT. Almost all of the DZero online software has
migrated to Linux, and L1Cal TCC is a Linux computer. The ADF test stand at
MSU uses Windows, and most of the code has been developed using Microsoft
Visual Studio. The resulting Run IIb L1Cal software is thus able to run on
either Linux or Windows.

The dependence of the Run IIa software on Windows was almost exclusively tied
to the user dialogs and menus created by Visual Studio. The compatibility with
Linux was achieved by replacing the Graphical User Interface (L1Cal GUI) and
separating it from the rest of the Trigger Control Software (L1Cal TCS). The
L1Cal Control Software now consists of a pair of applications, the GUI
application -- used by system experts -- and the TCS application, sometimes
also called "the engine" of the control software.

Figure: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/drawings/l1cal_iib_tcc_software.gif

The TCS application is implemented in C++ and C, while the GUI application is
implemented in Python (http://www.python.org) with TkInter
(http://wiki.python.org/moin/TkInter), which are available for both the Linux
and Windows platforms, among many others. The GUI nominally runs on the same
L1Cal TCC computer as the TCS, but one benefit of splitting the application is
the option to run the GUI from a different computer located at DZero or at an
expert's remote institution. Furthermore, the GUI is a non-critical part of
the control software which can be started and stopped separately and
independently from the TCS. Zero, one, or more GUI applications, as needed,
can be connected in parallel to the TCS application at any given time.
The GUI implements the interface used by the L1Cal expert, but is not used by
or needed for the interface to COOR or to Online Monitoring.

The GUI application is started from a terminal window, which remains an active
component of the GUI application. In addition to the point-and-click TkInter
interface, the GUI prints text strings to the terminal window, keeping track
of its actions. All text outputs are also captured in a log file. The GUI
application presents itself as one main dialog window including a list of
buttons, each corresponding to a sub-dialog tailored to a particular type of
interaction with the TCS application.

ref. http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/l1cal_iib_gui/

Each GUI sub-dialog consists of a number of entry fields where the user can
manually specify target addresses and values, before clicking on a named
button to generate an action. The GUI communicates with the TCS by exchanging
XML (Extensible Markup Language; http://www.w3.org/XML/) text strings. The XML
string contains XML elements with XML attributes whose values correspond to,
and are read from, the sub-dialog entry fields, thus defining the parameters
set by the user for executing that command. The GUI sends the XML command to
the TCS, which parses the XML string, performs the requested action, and
returns a completion status also in the form of an XML text string with XML
elements, XML attributes, and values matching the results of the action. The
GUI parses the XML reply, displays the results in the sub-dialog fields, and
prints a descriptive message to the terminal window, also captured in the log
file. The XML strings are decoded in the TCS using the Xerces-C++ parser
(http://xml.apache.org/xerces-c/), while the GUI uses the xml.dom.minidom
module included in the Python distribution.

One windfall advantage of using Python for the implementation of the GUI is
the ability to invoke text command files at run-time without needing to
provide a dedicated syntax definition or script parser. All actions, basic or
complex, invoked by clicking the L1Cal GUI command buttons are implemented by
Python functions in the GUI code, following the TkInter model, and these
functions and actions are accessible to these external command files. One
particular GUI sub-dialog lets the user specify and execute such a command
file, interpreted at run time. The user can thus perform, in an automated
fashion, any arbitrarily long or complex list of actions. These actions could
have been invoked by manually entering register coordinates and clicking
buttons, while the command file can prepare repetitive tasks in advance. In
addition to the GUI button functions, all native features of the Python
language are available for additional computations, iterations, obtaining
coordinates via interactive input, writing logs, invoking other user Python
modules, or launching separate external programs. Such command files thus
become a flexible and powerful run time extension of the GUI for implementing
commissioning tasks. External GUI command files were, for example, the
building blocks of the battery of tests performed on each individual ADF card
after assembly and before usage in production. Another set of command files
was most valuable again during the commissioning and integration phase of the
ADF and TAB/GAB.
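Whether triggered by a button click or by a command file, an expert action
ultimately results in the XML exchange described above. The following minimal
Python sketch shows what the GUI side of such an exchange could look like,
using the xml.dom.minidom module mentioned earlier; the element and attribute
names ("command", "register", "status", ...) are illustrative placeholders,
not the actual GUI/TCS tag set.

  import xml.dom.minidom

  def build_command(action, crate, card, address, value):
      # Build an XML command string from the values of the sub-dialog fields.
      doc = xml.dom.minidom.Document()
      cmd = doc.createElement("command")
      cmd.setAttribute("action", action)            # e.g. "write_register"
      reg = doc.createElement("register")
      reg.setAttribute("crate", str(crate))
      reg.setAttribute("card", str(card))
      reg.setAttribute("address", "0x%X" % address)
      reg.setAttribute("value", "0x%X" % value)
      cmd.appendChild(reg)
      doc.appendChild(cmd)
      return doc.toxml()                            # string sent to the TCS

  def parse_reply(xml_string):
      # Extract the completion status and message from the TCS XML reply.
      reply = xml.dom.minidom.parseString(xml_string).documentElement
      return reply.getAttribute("status"), reply.getAttribute("message")

The GUI would send the string returned by build_command() to the TCS over the
ITC connection described in the next section, then display the (status,
message) pair parsed from the reply in the sub-dialog fields and the terminal
window.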
7. Inter Task Communication
===========================

For its communication across each of its three interfaces, the TCS uses the
ITC (Inter Task Communication) software package developed internally by DZero.
ITC is based on the open-source ACE software (ADAPTIVE Communication
Environment; http://www.cs.wustl.edu/~schmidt/ACE.html). ITC provides the high
level management of client-server connections where the communication between
processes running on a common computer or on separate computers is dynamically
buffered in message queues. ITC is used quite extensively throughout the DZero
online applications for transferring event data, but also for control and
monitoring information.

The TCS uses ITC (a) to receive text commands from COOR and send an
acknowledgement back with the command completion status, (b) to receive XML
string commands from the GUI application and send XML string replies to the
GUI, and (c) to receive fixed format binary monitoring requests from the
monitoring clients and send the requested fixed format binary blocks of data.

8. Main Control Operations
==========================

8.a. Configuration
------------------

L1Cal Configuration is the term used to describe the action of loading the
firmware into the FPGAs of the L1Cal system. After powering up, all the FPGAs
in all ADF, TAB and GAB cards need to be configured, i.e. they must receive a
specific binary sequence, called a bitstream, in order to become the logic
circuitry they were intended to implement. The firmware bitstream was
synthesized in advance (not by the control software), and is made available to
the TCS in the form of text files. In the case of the ADF FPGAs, these files
are in Motorola S-record format. The files are parsed and the extracted
bitstream is sent to the targeted ADF FPGA one byte at a time. A similar
sequence is followed for the TAB and GAB FPGAs, but the bitstream files are in
decimal format and the data is transferred 16 bits at a time.

The TCS can configure individual FPGAs, and an L1Cal expert can manually
target a specified device on a particular card, but, in practice, the FPGAs
are all configured in one pass by the L1Cal expert as part of the power up
procedure. In the L1Cal system, the same firmware is loaded into all 160 ADF
FPGAs, while the TAB and GAB firmware is FPGA dependent. Configuration of the
full L1Cal system is a one-click operation initiated from the GUI. The
sequence of which specific bitstream file needs to be loaded into which card
and which FPGA is controlled by a text file. This text file is maintained by
the L1Cal experts to match the current official version of all firmware.

Configuration is normally invoked only once, after power up, but if an expert
has been testing some new functionality between stores, and has thus reloaded
one or more FPGA sites with such test firmware, the full system configuration
of the L1Cal system is used again to guarantee a return to the official
configuration before the next store. After successful completion of the L1Cal
configuration, the hardware implements all the logic and the intended
functionality for L1Cal triggering, but is not yet ready for operation and not
ready to receive programming from COOR. It first needs to be initialized.
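As an illustration of the ADF bitstream loading described at the beginning of
this section, the following Python sketch extracts the data bytes from a
Motorola S-record file; write_config_byte() is a hypothetical stand-in for the
VME write performed by the TCS toward the targeted FPGA configuration
register.

  # Address field width (in hex characters) for the S1/S2/S3 data record types.
  ADDRESS_CHARS = {'1': 4, '2': 6, '3': 8}

  def srecord_data_bytes(filename):
      """Yield the data bytes contained in the S1/S2/S3 records of the file."""
      with open(filename) as f:
          for line in f:
              line = line.strip()
              if len(line) < 4 or line[0] != 'S' or line[1] not in ADDRESS_CHARS:
                  continue                   # skip S0 header and S7/S8/S9 records
              count = int(line[2:4], 16)     # bytes following the count field
              data_start = 4 + ADDRESS_CHARS[line[1]]
              data_end = 4 + 2 * count - 2   # exclude the trailing checksum byte
              for i in range(data_start, data_end, 2):
                  yield int(line[i:i + 2], 16)

  def configure_adf_fpga(filename, write_config_byte):
      for byte in srecord_data_bytes(filename):
          write_config_byte(byte)            # one VME write per bitstream byte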
8.b. Initialization
-------------------

Initialization is the procedure bringing the system to a defined idle state,
ready for run programming. Initialization is the step logically following FPGA
configuration, where the TCS resets and synchronizes all the logic circuitry
implemented by the hardware, cables, and firmware, and prepares each specific
channel and algorithm for processing its assigned input and producing its
intended output. Initialization includes programming geometric constants,
lookup tables, and calibration parameters.

The Initialization sequence needs to overwrite every control register existing
in the system and guarantee a known and defined state, clearing any history
dependence, and getting out of all possible error conditions, thus achieving
full reproducibility of this initialized state. Initialization is also a
mechanism to verify that all resources are reachable and programmable, or
otherwise to bring errors to the attention of the experts before the system is
used. The most IO intensive part of the initialization is the programming and
verification of the 2,560 ADF Et Lookup memories, corresponding to some 1.3
million VME IOs and taking about 5 seconds.

ref: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/l1cal_iib_tcs/l1cal_iib_initialization.txt

Initialization may be initiated for different reasons. After power up and FPGA
configuration, the startup procedure continues with a manual Initialization,
giving the operator a first chance to watch for and verify the proper and
successful completion of this operation. An Initialization is also requested
by COOR when it first connects to L1Cal TCC, or whenever COOR loses its
connection to L1Cal TCC, e.g. when the TCS is restarted, and COOR forces the
system back to a defined state. Initializing before the beginning of the next
store verifies system readiness, and erases any special programming that may
have been manually modified during test periods.

After successful completion of system Initialization, the L1Cal implements all
the intended logic and is processing all its input signals, but it is not yet
programmed to deliver useful triggering information, e.g. as might be expected
by the physics trigger list.
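The quoted IO count and duration can be cross-checked against the per-register
access time given in section 3, as the following back-of-envelope Python check
shows (the per-memory entry count is inferred from the quoted totals, not
taken from a hardware specification).

  n_lookup_memories = 2560          # ADF Et Lookup memories in the system
  total_vme_ios     = 1.3e6         # quoted IO count for programming + verification
  io_time           = 5e-6          # ~5 us per ADF register access (section 3)

  ios_per_memory = total_vme_ios / n_lookup_memories   # ~508 IOs per lookup memory
  total_time     = total_vme_ios * io_time             # ~6.5 s, the same order as
                                                        # the quoted ~5 seconds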
8.c. Run Time Programming
-------------------------

Run time programming is the procedure which defines the meaning of the L1Cal
trigger output signals, i.e. the meaning of the And-Or Terms which the GAB
sends to the L1 Trigger Framework (cf. 5.b and 5.f). Run time programming is
performed by COOR and occurs before a new run is started, if the run
requirements involve triggering information from the L1Cal Trigger, or any
time such an L1Cal requirement is added or changed. COOR receives requests
(e.g. the trigger list specifications) and manages the available resources,
e.g. COOR assigns which TAB hardware threshold is used to implement a
particular cut specified in the user requests. More generally, run time
programming involves setting up all the programmable parameters available in
the TAB and GAB algorithms.

The format and syntax of the interface to COOR is not explained in detail
here,
ref: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/coor/
but an example can illustrate the work performed by the TCS. Let's imagine a
user's request asking for events where the L1Cal Jet Algorithm has
reconstructed at least two jets within the pseudo-rapidity region [-3.2,+3.2]
and above a 5.0 GeV threshold. Programming the L1Cal trigger to start sending
an And-Or Term to the Level 1 Framework that represents this condition will
involve two separate steps, corresponding to two separate messages from COOR
to L1Cal TCC.

The first step corresponds to programming the TAB cards to find all the Jet
Objects with reconstructed Et equal to or greater than 5.0 GeV. The payload of
the first message would look like:

  "L1CAL_Ref_Set Jet_Et_Ref_Set 1 TT_Eta(-16:16) TT_Phi(1:32) Energy_Threshold 5.0"

The keywords "L1CAL_Ref_Set" and "Jet_Et_Ref_Set" identify the resource being
programmed as a TAB Jet Object energy threshold. This message is not targeting
a single TAB FPGA or a single TAB card, but a set of Jet threshold resources
corresponding to a range of Jet Object coordinates, hence the name Reference
Set. The TABs perform seven local comparisons on the reconstructed energy
quantity of the potential Jet based on each Trigger Tower coordinate, and this
message specifies that comparator number 1 is being programmed. The Trigger
Tower "TT_Eta" index range specified as "(-16:16)" corresponds to the Trigger
Tower pseudo-rapidity coverage [-3.2,+3.2], while the "TT_Phi" index range
"(1:32)" corresponds to the full azimuthal coverage. Each TAB FPGA is in
charge of a patch of 4x4 eta and phi Jet Object coordinates, and this message
is thus targeted to 64 separate FPGAs which will need to be programmed with a
specific value corresponding to the threshold given after the
"Energy_Threshold" keyword in GeV units and floating point format. The TCS
will convert and round off this value to the nearest lower integer, matching
the 1/4 GeV per count units used in TAB processing. In summary, the TCS
software will program the value 20 into the register controlling the Jet Et
Energy Threshold number 1 for 64 separate FPGAs in 8 TAB cards.

The second step is to program the GAB card to assert one of its And-Or Terms
for every beam crossing where the number of Jet Objects passing the 5.0 GeV
threshold, as reported by the TAB cards, reaches or exceeds the count value of
2. This second message would contain a command of the form:

  "L1CAL_to_L1FW Jet_All_Term 0 Use_Ref_Set 1 Count_Threshold 2"

The keywords "L1CAL_to_L1FW" and "Jet_All_Term" identify the type of And-Or
Term being defined, while the following value identifies the particular
instance of this type of And-Or Term. The value after the keyword
"Use_Ref_Set" specifies which previously programmed TAB Jet Reference Set is
being tracked, and the value after the keyword "Count_Threshold" is the
desired minimum count.

In this simple example, the above pair of messages is sent to L1Cal TCC and
programs the L1Cal Trigger to flag the condition specified by the user, but
COOR must also set up the rest of the L1, L2, and L3 trigger systems before
the run can be started, including programming a Specific Trigger in the L1
Trigger Framework to pay attention to the L1Cal And-Or Term now programmed by
the pair of messages above to select the events sought by the user.
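The conversion performed by the TCS for the first message can be summarized by
the following minimal Python sketch; the function name is illustrative, not
the actual TCS code.

  import math

  TAB_GEV_PER_COUNT = 0.25    # 1/4 GeV per count, the TAB Jet Et granularity

  def jet_threshold_counts(threshold_gev):
      """Convert a COOR threshold in GeV to the integer register value,
      rounding down to the nearest lower integer count."""
      return int(math.floor(threshold_gev / TAB_GEV_PER_COUNT))

  # "Energy_Threshold 5.0" -> 5.0 / 0.25 = 20 counts, written into the
  # Jet Et Reference Set 1 comparator of each of the 64 TAB FPGAs covering
  # TT_Eta(-16:16) x TT_Phi(1:32).
  assert jet_threshold_counts(5.0) == 20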
8.d. Excluding Trigger Towers
-----------------------------

Another category of run-time messages is invoked to cope with L1Cal input
signal problems. Such problems could originate in the electronics of a
calorimeter pre-amp or in a BLS card, or may be caused by sparks occurring
inside the calorimeter cryostat itself. When a BLS signal sent to an ADF card
is affected, the energy reported for the corresponding Trigger Tower can be
falsely interpreted as a large energy deposit. This would lead the L1Cal
algorithms to overestimate the energy calculated for one or more Trigger
Objects, which in turn would create an excess of L1 Accept decisions. Such
excess events will be largely rejected by the L2 or L3 Trigger, but would
still become a problem when they start saturating the available L1 trigger and
DAQ bandwidth. The only recourse is to suppress the contribution of the faulty
tower, at the cost of degrading the trigger acceptance.

Excluding a Trigger Tower EM or HD contribution is accomplished by programming
the corresponding ADF card to always report zero GeV of energy for that
Trigger Tower. This task is not controlled by COOR, but is the responsibility
of the L1Cal experts. It would be initiated in response to the detection and
diagnosis of an excessive trigger rate, impeding physics data taking, and
traceable to one Trigger Tower. The list of Trigger Towers being excluded at a
given time is controlled through a text file, which is updated by the L1Cal
expert and manually executed, usually before starting a new run. This file is
also executed during the Initialization sequence described above.

8.e. Managing Monitoring Information
------------------------------------

This section describes how the monitoring resources available on the ADF, TAB
and GAB cards are controlled and how the information is retrieved.

The Level 1 Framework flags some of the L1 Accept decisions by asserting the
"CollectStatus" L1 Qualifier, which is broadcast to all the Geographic
Sections via the Serial Command Link. This typically occurs every 5 seconds,
whenever events are flowing through the DAQ system. The expected response of
L1 and L2 trigger sub-systems participating in this monitoring is to capture,
collect and make available some monitoring data corresponding to the triggered
crossing. This may include data describing the event itself, or the system
status at, or up to, that time. In particular, the L1Cal ADF cards have been
designed to allow the occurrence of CollectStatus to cause the capture of a
whole Turn worth of Trigger Tower energy data, including the 36 Live Crossings
contained in a turn.

Part of the Initialization sequence (cf. section 8.b) includes setting up the
Output Et Monitoring Memory of all ADF cards to recording mode, where they
continuously capture the Output Et data being sent to the TAB cards.

ref: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/hardware/adf_2/drawings/data_path_fpga_signal_processing.pdf

Saving the beam crossing flagged with the CollectStatus Qualifier is then a
question of stopping the overwriting of this monitoring memory, simultaneously
across all 80 ADF cards. This function is accomplished by the monitoring
support circuitry on the SCLD card. The TCS only needs to arm this SCLD
circuitry, then poll one ADF card until it finds that the Address Generator
has been stopped. The TCS can then read from the ADF Output Et monitoring
memory blocks a copy of the Et values which were sent to the TAB cards for the
36 live crossings captured around the triggered crossing. The TCS also
retrieves the final count reached by the Address Generator when the
CollectStatus qualifier was received, and derives the Tick Number
corresponding to the crossing that generated this particular L1 Accept. The
monitoring data collection phase ends after all ADF Output Et Address
Generators have been restarted, and the SCLD monitoring support circuitry has
been re-armed. The system is then ready again to capture the next assertion of
the CollectStatus L1 Qualifier.
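The collection cycle just described can be summarized by the following Python
sketch, where the "hw" object and its methods are hypothetical stand-ins for
the SCLD and ADF register IOs actually performed by the TCS.

  import time

  LIVE_CROSSINGS_PER_TURN = 36

  def collect_monitoring_turn(hw, adf_cards):
      hw.arm_scld()                                    # arm the SCLD capture circuitry
      while not hw.address_generator_stopped(adf_cards[0]):
          time.sleep(0.05)                             # poll one ADF card until capture
      # Copy of the Et values sent to the TABs for the captured live crossings
      data = {card: hw.read_output_et(card, LIVE_CROSSINGS_PER_TURN)
              for card in adf_cards}
      tick = hw.read_final_address_count(adf_cards[0]) # used to derive the Tick Number
      hw.restart_address_generators(adf_cards)         # back to recording mode
      hw.arm_scld()                                    # re-arm for the next CollectStatus
      return tick, data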
When triggering and data flow are stopped -- for any reason -- the
CollectStatus flag can no longer be received, since no L1 Accept is generated.
Instead of letting stale monitoring data be displayed by the monitoring
system, it is much preferable to capture a random turn of data, continue
reporting fresh information, and provide up to date diagnostic information.
L1 Accepts with CollectStatus normally occur within 5 seconds of each other,
and the TCS will time out after 6 seconds, prodding the SCLD to initiate the
immediate capture of monitoring data. Monitoring data collection will then
proceed in the same manner, but this monitoring information will be labelled
as coming from a random crossing, instead of including identifiable triggered
data.

The TAB and GAB monitoring data is retrieved in a similar manner during the
monitoring data collection phase, but the TCS needs to fulfill one additional
responsibility. Error conditions detected by the TAB and GAB cards are
reported in and read from monitoring registers, but the act of reading these
registers also clears the error status. The TCS thus maintains a set of
software counters, incremented each time the corresponding error status is
found asserted during a monitoring data retrieval. These software counters are
also part of the monitoring data sent to the monitoring clients.

The TCS maintains the last set of monitoring data retrieved from the hardware
in a monitoring pool. A dedicated server process implements the monitoring
interface and replies to monitoring clients by sending blocks of monitoring
data filled using that monitoring pool.

8.f. Online Calibration
-----------------------

Between stores, the TCS can also perform some control tasks independently of,
and without interfering with, the rest of the acquisition system. One
important task is to calibrate the zero energy response of the Trigger Tower
Et sent to the TAB cards. The differential BLS input signal is AC-coupled to
the ADF input stage, but DC offsets inherent to the differential amplifier are
still introduced and become part of the input to the ADC.

ref: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/hardware/adf_2/drawings/adf_2_differential_amp.pdf

The input signal also carries some amount of synchronous noise which is summed
with any potential energy deposit and thus appears at the moment of
digitization as another source of offset. As described in section 5.d, these
offsets are balanced and compensated by summing in a programmable constant
voltage generated by a DAC.

A TCS tool named Find_DAC is used to tune the DAC control value to generate
the desired zero energy response, also called pedestals, uniformly predefined
as 50 counts of Raw ADC output and 8 counts of ADF Output Et. This tool is
invoked between stores, as needed, when the pedestal values have drifted away
from these design values. Find_DAC programs the DAC with some extreme and
intermediate values while comparing the resulting average ADC response to an
acceptable range, and calculates the amount of noise. Find_DAC can then
determine a coarse interval of DAC control values surrounding the value sought
after, and ramps the DAC control value across this range.

ref: http://www.pa.msu.edu/hep/d0/ftp/run2b/l1cal/tcc/l1cal_iib_tcs/find_dac_steps.txt

Find_DAC reports the DAC control value having generated the histogram whose
average is closest to the target Raw ADC zero energy response of 50 counts.
The output of Find_DAC may then be designated as the text file used to program
the DAC control values in subsequent Initialization sequences.
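The final ramping stage of Find_DAC can be illustrated with the following
Python sketch: the DAC control value is scanned across the coarse interval
determined earlier, and the setting whose average Raw ADC response is closest
to the 50-count target is reported. set_dac() and sample_raw_adc() are
hypothetical stand-ins for the actual register IOs.

  TARGET_PEDESTAL = 50      # desired Raw ADC zero energy response, in ADC counts

  def ramp_dac(channel, dac_lo, dac_hi, set_dac, sample_raw_adc, n_samples=100):
      best_dac, best_distance = None, None
      for dac in range(dac_lo, dac_hi + 1):
          set_dac(channel, dac)                       # program the offset DAC
          samples = [sample_raw_adc(channel) for _ in range(n_samples)]
          average = sum(samples) / float(len(samples))
          distance = abs(average - TARGET_PEDESTAL)
          if best_distance is None or distance < best_distance:
              best_dac, best_distance = dac, distance
      return best_dac                                 # DAC value whose average is
                                                      # closest to the target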