Notes about the L1 Calorimeter Trigger Readout
------------------------------------------------

Initial Rev.  6-July-2001
Current Rev. 10-July-2001

This file contains all of our notes about the HSRO Readout of the
Level 1 Calorimeter Trigger.

The first major section is Daniel Mendoza's detailed description of
the setup of the VRB's, VRBC, and VBD in the L1 Cal Trig readout.

Next is a section that lists all the documentation files involved
with the Run II L1 Cal Trig readout.  The intent is to include here a
short comment describing what information is in each file.

The third section is the original background notes about HSRO of the
L1 Cal Trig.

The fourth section is a summary of the various clock, timing, and
control signals involved in the readout of the L1 Cal Trig.


L1 Calorimeter Trigger Readout Crate VRB's
------------------------------------------

Four VRBs will be used to receive Trigger Tower Data and provide it
to L2 and L3.  Their location within the crate and their function is
described in:

http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/hardware/
rack_crate/run_ii_m101_card_addresses.txt

I. Hardware Settings
--------------------

  i. VRB Settings
     ------------

     All four VRBs will run in the so-called "TRIGGER MODE", with the
     VTM (VRB Transition Module) in 20-bit mode, using D17 and D19 of
     the G-link data lines as the end-of-record flags.  For this
     configuration, the settings of the S3 dip switch should be as
     follows:

     Switch       Status   Comment
     ------       ------   -------------------------------------
     S3-1   --    OFF      Flash Bank 0 write protection DISABLED
     S3-2   --    OFF      Flash Bank 0 write protection DISABLED
     S3-3   --    ON       Diagnostic RS-232 port DISABLED
     S3-4   --    OFF      Remote programming ENABLED
     S3-5   --    OFF   |
     S3-6   --    ON    |
     S3-7   --    OFF   |  Application # 5 - Trigger Mode
     S3-8   --    ON    |

     The settings for dip switches S1 and S2 must be left unchanged,
     that is, S1 - ON and S2 - OFF.

 ii. VTM Settings
     ------------

     1. 20-bit mode

        For the VRB Trigger Mode configuration, the VTM should be set
        in 20-bit mode.  This is accomplished by removing the jumper
        JP8.

     2. Rate

        The VTM, running in 20-bit mode at 53 MHz, requires a rate
        setting of at least 20 bits * 53 MHz = 1060 Mbit/s.  A
        conservative setting is to run it in the 630-1250 Mbit/s
        range.  In this case, jumpers JP6 and JP7 should be left IN.

iii. VRB Controller Settings
     -----------------------

     Only two hardware parameters can be configured on the VRB
     Controller: dip switches S1 and S2.

     Dip switch #1 configures the upper byte of the crate ID.  Since
     the crate ID won't be larger than 0xFF, all the switches S1-1
     through S1-8 should be left in the OFF position.

     Dip switch #2 configures the interrupt level when working in
     SDAQ mode.  This crate will never be used in SDAQ mode, so all
     the switches S2-1 through S2-8 should be left in the OFF
     position.

 iv. VBD Settings
     ------------

     There are only 2 user-configurable jumpers (according to ZRL's
     literature): MJ1 and CJ1.  MJ1 selects the base address and size
     of the buffer memory.  CJ1 selects an 8 Kbyte region of I/O
     space for the Control Status Register and control memory.

     The current settings on the VBD establish a base address for the
     buffer memory at 0x300000 and the 8K region for the control
     memory at 0x6000.  This can be inferred from the MJ1 jumpers
     installed on pins 1 through 20 and from the CJ1 jumper installed
     on pins 1 and 2.
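As a cross-check of the VTM rate arithmetic above, here is a trivial
C fragment.  This is only an illustrative sketch; the constant names
are invented here and do not come from any actual VRB/VTM software.

    /* Sketch: sanity-check the VTM serial rate against its jumper
     * range.  All names are invented for illustration; the numbers
     * come from the settings described above. */
    #include <stdio.h>

    #define VTM_BITS_PER_WORD   20    /* VTM in 20-bit mode (JP8 removed)   */
    #define VTM_WORD_RATE_MHZ   53    /* G-link word clock, 53 MHz          */
    #define JP67_RANGE_MIN_MBPS 630   /* JP6/JP7 IN selects 630-1250 Mbit/s */
    #define JP67_RANGE_MAX_MBPS 1250

    int main(void)
    {
        int rate_mbps = VTM_BITS_PER_WORD * VTM_WORD_RATE_MHZ;  /* = 1060 */

        printf("Required VTM serial rate: %d Mbit/s\n", rate_mbps);
        if (rate_mbps >= JP67_RANGE_MIN_MBPS && rate_mbps <= JP67_RANGE_MAX_MBPS)
            printf("OK: within the %d-%d Mbit/s jumper range.\n",
                   JP67_RANGE_MIN_MBPS, JP67_RANGE_MAX_MBPS);
        else
            printf("ERROR: outside the jumper range.\n");
        return 0;
    }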
II. Software Settings
---------------------

  i. VRBs
     ----

     Unlike the VRBs used in the TFW, the VRBs being used in the L1
     Cal Trigger readout crate have date code 1208 (Feb 8, 2001).
     Two main differences affect the operation of these modules in
     "Trigger Mode":

     1. Addition of a control bit to disable backplane status signals
        when the board is not in use.

     2. Default buffer configuration is 16 * 2K (16 buffers of
        2 Kbytes each) for buffers 0-15, with overlapping 8 * 4K for
        buffers 16-23.

     However, since by default the status signals are ENABLED, no
     software modification is required to the VRB Control register.
     Nonetheless, the option of disabling any of the boards by
     software, without having to remove the board from the crate,
     should be taken advantage of.

 ii. VRB Controller
     --------------

     Four VME-accessible registers are used to control and monitor
     the operation of the VRB Controller module used in the L1 Cal
     Trig Readout crate.  A more complete description of all the
     registers can be found at  //d0server4/users/mendoza/vrbc/

     VME ADD      Name                 R/W   Size
     -------      ------------------   ---   ---------
     0x48000A*    Reset                R/W   Word
     0x4800B2     Path Control         R/W   Byte
     0x4800D0     Slave Rdy & Finish   R/W   Byte
     0x4800D2     Status Register      R     Word/Byte

     RESET: (*) Accessing this memory location (W) causes a RESET to
     the VRB Controller, clearing all the registers and
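For reference, the register map above could be expressed in C roughly
as follows.  This is a sketch only: how the VME window is mapped into
the processor's address space is system dependent, and vme_map() is a
hypothetical stand-in for whatever VME access library is actually in
use.

    /* Sketch: VRBC register map from the table above, as C constants.
     * vme_map() is hypothetical; substitute the real VME access
     * routine for the crate processor in use. */
    #include <stdint.h>

    #define VRBC_RESET       0x48000Au  /* R/W word; any write resets VRBC */
    #define VRBC_PATH_CTRL   0x4800B2u  /* R/W byte                        */
    #define VRBC_SLVRDY_FIN  0x4800D0u  /* R/W byte; Slave Rdy & Finish    */
    #define VRBC_STATUS      0x4800D2u  /* R    word/byte                  */

    extern volatile uint8_t *vme_map(uint32_t vme_addr);  /* hypothetical */

    /* Reset the VRB Controller by writing its Reset register. */
    static void vrbc_reset(void)
    {
        volatile uint16_t *reset = (volatile uint16_t *)vme_map(VRBC_RESET);
        *reset = 0;  /* the write access itself triggers the reset */
    }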
|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|

Reader's Guide to the Files describing the L1 Cal Trig HSRO
-------------------------------------------------------------

This section is a list of the files that describe the readout of the
L1 Cal Trig.  Topics covered include: data format, hardware, and
timing.

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/readout/

   data_to_l2_cal_pp.txt
   historical_notes_on_cal_trig_readout_for_run_ii.txt
   readout_notes.txt
   tt_readout_hardware_description.txt

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/hardware/general/

   calorimeter_trigger_tt_data.txt

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/hardware/erpb/

   erpb_fpga_description.txt

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/hardware/dc/

   dc_board_description.txt

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/l1/framework/hardware/aonm/

   bougie_fpga_description.txt
   cal_trig_ro_fpga_description.txt

Relevant Run I L1.5 Cal Trig Documents
----------------------------------------

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/run1/l15/caltrig/timing/

   ctfe_erpb_crc_mtg_timing.txt

In the directory:
http://www.pa.msu.edu/hep/d0/ftp/run1/l15/caltrig/cards/

   erpb_and_dc_users_guide.txt

|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|

Notes about Run II Readout
------------------------------

Original Rev. 1-MAR-1999
Current Rev.  9-July-2001

Background
----------

In Run I, we used the COMINT card to read out the L1.5 Cal Trig
(reading the cards via CBUS and sending the data to the VBD, and also
using a 68K CPU to modify the data and provide additional data).  For
Run II, we plan to NOT use the COMINT card or CBUS, but rather to
read out the Cal Trig using the ERPB's, some FM Cards, and possibly
making a Super_DC to replace the current DC.

Recall that the ERPB has access only to the following information:

1. EM Et for each Trigger Tower
2. Total Et for each Trigger Tower

It does not know about:

1. Masks of Towers above reference for the 4 EM and 4 Total
   Reference Sets
2. Counts of Towers above reference for the 4 EM and 4 Total
   Reference Sets
3. The multiple threshold comparisons on the number of Towers above
   reference for each of the 4 EM and 4 Total Ref Sets
4. Masks/Counts/Threshold Comparisons of Large Tiles above reference
   for the 8 Large Tile Reference Sets
5. Global sums of EM and Total Et
6. Multiple threshold comparisons on the Global EM and Total Et sums
7. Global sum of Px and Py, and associated multiple threshold
   comparisons on Missing Pt
8. Level 0 Fast Vertex Bin

etc.

Let's look at each type of data and see what is involved in the data
collection.

EM and Total Et
---------------

To start, just consider EM and Total Et.  At a minimum, the
functionality required is essentially what the ERPB already does:
collect the EM and Total Et data for each tower and deliver it in
16-bit chunks to a consumer (the DC).

Recall that in a single rack we have 128 Trigger Towers, each of
which provides 16 bits of data (8 bits EM Et, 8 bits Total Et).  At
our normal (Framework) readout speed of 16 bits/132 ns, 16.9 us are
required to read out a rack (assuming one rack per VRB channel).  The
ERPB can operate at this rate.  At the G-link/Finisar maximum rate of
16 bits/18.8 ns (ignore for now the 20-bit mode), only 2.4 us are
required per rack.  The ERPB has never been run at this high rate,
and it is not obvious that it would work.  Our deadtime target for L1
Accepts is 5-8 us, so we either need to transport data at a high rate
or split each rack across multiple VRB channels (simplest would be 4
VRB channels per rack, i.e. one channel per group of 8 CTFE
cards/pair of ERPB cards -- a total of 40 VRB channels/10 VRB cards
to do this readout, but other arrangements are also possible).

Unlike the current arrangement, though, there is latency on the L1
Accept, necessitating a Beam Crossing History Shift Register (BXHSR)
to record the EM and Total Et.  This shift register will need to be
no more than 32 stages deep.  Also, the final output will be serial
optical rather than parallel copper.

There is the additional possibility of requiring more than just a
single tick's worth of data for any L1 Accept.  Dean may want
zero-suppressed address/data pairs for a large number of ticks
(before and after? the triggered tick).  This does not necessarily
increase the BXHSR storage requirements, but does increase the
readout time and complexity considerably.

Looking at these issues in order of complexity:

1. Parallel-Cu to serial-optical

   This translation is simple, and could easily be done with a single
   THE Card per VRB channel (replacing the readout functions of the
   DC).  This requires 10-40 (!) THE Cards, depending on the mapping
   of ERPB's to VRB channels.  Alternatively, we could make a
   "Super_DC" for each rack with one or more optical outputs.  This
   solves the problem of where to put the THE Cards.

2. BXHSR

   This is most easily performed in the ERPB FPGA's.  If we could
   move 16 bits of data from the ERPB to the DC every 4 ns (!), we
   could do the BXHSR in THE Card.  This is absurd, but is there any
   other trick we can do to try to move this function into THE Card?

3. Zero-suppression

   Again, this is most easily performed in the ERPB FPGA's, but is
   there a trick?

Masks of Towers above Reference
-------------------------------

Collecting the Jet Masks in the ERPB is a big advantage, as they are
distributed across all of the racks, and thus difficult to read out
otherwise.  What is involved in doing this?

Each rack produces 64 16-bit words of Jet Mask data (128 Trigger
Towers * (4 EM Et Ref Sets + 4 Total Et Ref Sets) / 16 bits/word).
This could potentially be reduced by zero-suppression, depending on
the typical hit density, but let's ignore that for now.
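The readout-time and data-volume numbers above all follow from simple
arithmetic; here is a C fragment that reproduces them (names invented
for illustration):

    /* Sketch: reproduce the readout-time and data-volume estimates
     * above.  Purely illustrative; all names are invented here. */
    #include <stdio.h>

    int main(void)
    {
        const int    towers_per_rack = 128;   /* Trigger Towers per rack        */
        const int    bits_per_tower  = 16;    /* 8 bits EM Et + 8 bits Total Et */
        const double fw_ns_per_word  = 132.0; /* Framework rate: 16 bits/132 ns */
        const double gl_ns_per_word  = 18.8;  /* G-link/Finisar max, 16-bit mode */

        int words = towers_per_rack * bits_per_tower / 16;       /* = 128 */
        printf("Framework readout: %.1f us/rack\n",
               words * fw_ns_per_word / 1000.0);                 /* 16.9 us */
        printf("G-link max rate:   %.1f us/rack\n",
               words * gl_ns_per_word / 1000.0);                 /*  2.4 us */

        /* Jet Masks: 128 towers * (4 EM + 4 Total Ref Sets) / 16 bits/word */
        int mask_words = towers_per_rack * (4 + 4) / 16;         /* = 64  */
        printf("Jet Mask data:     %d words/rack (%d%% of Et data)\n",
               mask_words, 100 * mask_words / words);            /* 50%   */
        return 0;
    }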
Also let's imagine that no one will request these masks for ticks
other than the triggered tick.  This is an additional 50% (!) more
data per rack than is required for just the EM/Total Et data.

In order to calculate Jet Masks, we need the Reference Set data.
ERPB's have no connection to TCC, so getting this data into the
ERPB's is problematic (short of "hard-coding" it into the FPGA's,
which is undesirable).  A "Super_DC" could have a TCC link
(parasitically tapping the existing CBUS?) and could calculate Jet
Masks as the data comes in from the ERPB's.  This technique would
also not increase the ERPB-to-Super_DC data volume, only the
Super_DC-to-VRB volume.

Counts of Towers above Reference (and thresholds on these counts)
-----------------------------------------------------------------

This data is localized to a single T3 crate, and thus can most likely
just be sent to an FM card for readout.  The L1 Cal Trig produces
eight 11-bit counts of this type, with about 4 threshold comparisons
on each count.  A single FM card could easily handle this data.

Masks of Large Tiles above Reference (and counts and thresholds)
----------------------------------------------------------------

Re-creating the Large Tile Masks is considerably more complicated, as
multiple towers must be added up to provide the data.  This is
do-able in a Super_DC, but cumbersome.  There are only 40 Large Tiles
in the L1 Cal Trig, and 8 Large Tile Ref Sets.  Therefore there are:

   40 Tiles * 8 Ref Sets            = 320 bits of Large Tile Masks
    8 Ref Sets * 6 bits             =  48 bits of Large Tile Counts
    3 count thresholds * 8 Ref Sets =  24 bits of Threshold Compares
                                       ---
                                       392 bits total

Note that in practice far fewer than 8 Ref Sets have been used,
reducing this requirement considerably.  Note also that "raw" Et for
each of the 40 Large Tiles is NOT AVAILABLE (nor was it in Run I).

It is easy to collect the Counts and Threshold Compares (they are
both available in a single T3 crate), but the Large Tile Masks are
distributed across all of the T1 crates.  How do we collect these?
Is it easier to just re-create them?  A sketch of the re-creation
option follows.
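Below is a rough C sketch of re-creating the Large Tile Masks in a
Super_DC.  It assumes, purely for illustration, that each Large Tile
is a fixed sum of Trigger Towers; the geometry table, the Et array,
the Ref Set values (which would have to come from TCC), and all names
are invented here.

    /* Sketch: re-create Large Tile Masks from Trigger Tower Total Et,
     * as a Super_DC might.  Geometry and all names are hypothetical. */
    #include <stdint.h>

    #define N_TILES          40  /* Large Tiles in the L1 Cal Trig          */
    #define N_REF_SETS        8  /* Large Tile Reference Sets               */
    #define TOWERS_PER_TILE  32  /* hypothetical; real value from geometry  */

    /* Hypothetical geometry table: Trigger Tower indices in each tile. */
    extern const int tile_towers[N_TILES][TOWERS_PER_TILE];

    /* Total Et per Trigger Tower, as collected from the ERPB's. */
    extern const uint8_t tower_tot_et[];

    /* Large Tile Ref Set thresholds (would have to come from TCC). */
    extern const uint32_t lt_ref_set[N_REF_SETS];

    /* Build one 40-bit mask per Ref Set: bit t is set if tile t is
     * above that Ref Set's threshold. */
    void build_large_tile_masks(uint64_t mask[N_REF_SETS])
    {
        for (int r = 0; r < N_REF_SETS; r++)
            mask[r] = 0;

        for (int t = 0; t < N_TILES; t++) {
            uint32_t tile_et = 0;
            for (int i = 0; i < TOWERS_PER_TILE; i++)
                tile_et += tower_tot_et[tile_towers[t][i]];

            for (int r = 0; r < N_REF_SETS; r++)
                if (tile_et > lt_ref_set[r])
                    mask[r] |= (uint64_t)1 << t;
        }
    }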
Everything Else
---------------

Almost everything else is available in a single T3 crate.  The data
volume is relatively small:

    24 bits of Global EM Et
    24 bits of Global Hadronic Et (?)
    24 bits of Global Total Et
   <16 bits of Global Et threshold compares
    24 bits of Px
    24 bits of Py
     8 bits of Missing Pt
     8 bits of Missing Pt threshold compares
     8 bits of L0 Fast Vertex Bin

This data would nearly fit on a single FM card.

There was some additional data read out from M114, detailing which
Specific Triggers were programmed in some particular fashions (e.g.
which Specific Triggers used which Ref Sets).  Do we still need this?
If so, an FM comes to mind again (this data was stored in AOC's
before).

Other Customers for the Data
----------------------------

Besides the VRB's (and the L1 Trigger Framework), there is at least
one other customer for (some of) the L1 Cal Trig Data: the L2 Cal_pp.
The L2 Cal_pp will want access to:

1. EM Et / Total Et for each Trigger Tower
2. Jet Masks for (multiple?) EM and Total Ref Sets
3. Jet Masks for (multiple?) Large Tile Ref Sets
4. other data also sent to the VRB's (?)

Importantly, the Cal_pp need only receive a subset of the data sent
to the VRB's.  Moreover, it will be convenient to specify that the
Cal_pp receive either ALL or NONE of the data associated with a given
VRB channel (simplifying the task of splitting this data).

The simplest way to send the data to the Cal_pp is by passively
splitting the G-link cable to the VRB and sending a copy to the
Cal_pp.  This puts limits on the formatting of this data: it must
look good to both the VRB and the Cal_pp.

Another technique may be to use Manuel Martin's Broadcaster Card,
which is being designed to fill a similar role for the Sci Fi
Trigger.  It receives data on a serial copper link (Cu G-link?) and
provides 2 G-link fiber outputs (1 for the VRB, 1 for the L2
CFT_pp).  It has an additional output to drive a Sci Fi "Trigger
Manager" card.  The limitations imposed by this card are not yet
clear.

ERPB Issues
-----------

The current ERPB uses an XC4002A FPGA in a PC84 package.  This FPGA
has 64 CLB's and a maximum of 61 user IO's.  This device is obsolete
(unavailable from Xilinx as of September 1997).  According to the
ERPB docs, the Run I ERPB design used 44 CLB's and 50 user IO's,
placing the device at 69% (CLB's) and 82% (user IO's) full.  This
does not bode well for future expansion of the ERPB functionality
using this silicon.  One idea is to remove these parts from the ERPB
and replace them with larger chips.  Other parts available in this
same footprint at 5V:

   XC4003E  -4, -3, -2, -1   (all CI except -1, which is C only)
   XC4005E  -4, -3, -2, -1   (all CI except -1, which is C only)
   XC4006E  -4, -3, -2, -1   (all CI except -1, which is C only)
   XC4007E  -4, -3, -2, -1   (all CI except -1, which is C only)
   XC4010E  -4, -3, -2, -1   (all CI except -1, which is C only)

   C = Commercial   I = Industrial

Another issue with the ERPB's is that the design was done neither in
schematics nor in VHDL, but directly in EditLCA using script files
(EditLCA is Xilinx's old version of EPIC, the tool used to directly
play with the silicon).  This presents a couple of problems: it is
hard to understand the design (it was not done at MSU), and it is
hard to make incremental changes (we don't use EditLCA, although we
have an old version available if necessary).  In order to use this
design, we will probably need to re-create it in schematic or VHDL
form.  We do have all of the information necessary to do this, as
well as detailed placement/floorplanning data.

There are 2 GAL's on the ERPB:

1. Transmit GAL (22V10)
2. Fault GAL (16V8)

The DC has 4 GAL's (1 16V8 and 3 22V10's).  We do have equations for
these (rather simple) GAL's.  We also have full schematics for the
ERPB and DC (although not directly usable in Mentor Graphics), and
probably could get Gerber data from Steve Pier at UCI.
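For the record, the XC4002A utilization figures quoted above follow
directly from the CLB and IO counts; a trivial C check (names
invented here):

    /* Sketch: the XC4002A utilization figures quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const int clb_total = 64, clb_used = 44;  /* XC4002A CLB's         */
        const int io_total  = 61, io_used  = 50;  /* XC4002A max user IO's */

        printf("CLB's:     %d/%d = %.0f%% full\n",
               clb_used, clb_total, 100.0 * clb_used / clb_total);  /* 69% */
        printf("User IO's: %d/%d = %.0f%% full\n",
               io_used, io_total, 100.0 * io_used / io_total);      /* 82% */
        return 0;
    }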
|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|+|

Description of the various Clock and Timing and Control signals
in the L1 Cal Trigger Readout
---------------------------------------------------

Many of the timing and control signals involved with the ERPB circuit
board, ERPB FPGA, DC circuit board, DC logic diagrams and
programmable parts, and Bougie are confusing simply because they have
so many different names, and because some of their functions changed
between Run I and Run II.  The following section will try to at least
list all these signals, most of which are straightforward in their
function.

MTG Connector on DC looks like:  (This is labeled "JM1")
------------------------------

                      Run 1 Names
            -----------------------------------
            DC Input   ERPB        DC
Pin #       Name       Name        Name                 Pol.   Comment
-----       --------   ---------   ------------------   ----   ------------------
  1                                                     NINV   unused
  2                                                     INV    unused
  3         MTG_0      INPUT_CLK   Setup_0              NINV
  4         MTG_0      INPUT_CLK   Setup_0              INV
  5                                                     NINV   unused
  6                                                     INV    unused
  7         MTG_1      TOTAL       SETUP_1              NINV
  8         MTG_1      TOTAL       SETUP_1              INV
  9         MTG_2      STORE\      SETUP_2              NINV
 10         MTG_2      STORE\      SETUP_2              INV
 11         MTG_3      LATCH\      SETUP_3              NINV
 12         MTG_3      LATCH\      SETUP_3              INV
 13         MTG_4                  SETUP_4              NINV   unused
 14         MTG_4                  SETUP_4              INV    unused
 15                                                     NINV   unused
 16                                                     INV    unused
 17         MTG_5                  SETUP_5              NINV   unused
 18         MTG_5                  SETUP_5              INV    unused
 19                                                     NINV   unused
 20                                                     INV    unused
 21         MTG_6      XMIT_TRIG   XMIT_TRIG, SETUP_6   NINV
 22         MTG_6      XMIT_TRIG   XMIT_TRIG, SETUP_6   INV
 23         MTG_7                  SETUP_7              NINV
 24         MTG_7                  SETUP_7              INV
 25         MTG_8                  DIST_ADR\            NINV
 26         MTG_8                  DIST_ADR\            INV
 27         MTG_9                  DIST_STB\            NINV   not routed to ERPB
 28         MTG_9                  DIST_STB\            INV    not routed to ERPB
 29         MTG_X                                       NINV   spare
 30         MTG_X                                       INV    spare
 31         MTG_Y                                       NINV   spare
 32         MTG_Y                                       INV    spare
 33                                                     NINV   unused
 34                                                     INV    unused

The differential ECL "MTG Signals" are received on each DC card on
its JM1 connector.  These "MTG Signals" are converted to TTL for use
on the DC.  These "MTG Signals" are buffered as differential ECL and,
along with some additional signals, are sent off the DC on the JP1
connector and routed via the "parallel cable" to all the ERPB cards
in parallel.  All signals on this "parallel cable" are differential
ECL.

Pinout of the Parallel Cable DC to all ERPB's Run II
-------------------------------------------------------

The differential ECL signal layout is the normal setup, i.e. the odd
pin number carries the direct signal and the next higher even pin
number carries the complement of that signal.

Pin
Pair      Signal   Use
------    ------   ---------------------------------------------------
 1,2               #1 diode to Vee, #2 no connection
 3,4               no connection
 5,6               Received on ERPB but not routed anywhere.
 7,8               The DC sends copies of the DST_4 signal on both of
                   these pairs.
 9,10     DST_6    DIN for ERPB FPGA Configuration     FPGA pin 71
11,12     DST_5    CClk for ERPB FPGA Configuration    FPGA pin 73
13,14     DST_4    PROG* for ERPB FPGA Configuration   FPGA pin 55
15,16     DST_3
17,18     DST_2    Negative Eta
19,20     DST_1
21,22     DST_0    Xmit_Clk
23,24     MTG_8
25,26     MTG_7
27,28     MTG_6
29,30     MTG_5
31,32     MTG_4
33,34     MTG_3    Two_Tick_Readout   (Run 1 Latch/)
35,36     MTG_2    Capture            (Run 1 Store/)
37,38     MTG_1    EM/Total_Bar
39,40     MTG_0    Input_Clock

Description of some of these signals as used by the ERPB FPGA:

Input_Clock    The falling edge of this signal latches the data from
               the CTFE board.

EM/Total_Bar   Tells the ERPB which kind of data it is ingesting.

Use of the MTG_0 : MTG_8 signals on the DC card
----------------------------------------------

I think that only MTG_(7:9) are actually used on the DC circuit
board.  These are used to control when the ERPB FPGA's are
configured.  All of the MTG_(0:8) signals are run around on the DC
under a number of different labels, so it is hard to tell.

A description of the MTG-to-DC cables and the DC-to-ERPB cables is in
the log book for 28,29,30-APR-1999 at Fermi.  This includes a
description of the setup of the dip switches on the DC.
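For possible use in readout or test software, the parallel-cable
pinout above could be captured as a C lookup table.  The structure
and all names are invented here; the pair/signal/use entries are
copied from the table.

    /* Sketch: DC-to-ERPB parallel cable pinout as a lookup table.
     * Structure and names are hypothetical; data is from the table
     * above.  Pairs are (odd pin = direct, odd pin + 1 = complement). */
    struct cable_pair {
        int         odd_pin;   /* direct signal pin of the pair          */
        const char *signal;    /* name on the cable ("" = none/unused)   */
        const char *use;       /* function on the ERPB                   */
    };

    static const struct cable_pair dc_to_erpb[] = {
        {  1, "",      "#1 diode to Vee, #2 no connection"        },
        {  3, "",      "no connection"                            },
        {  5, "",      "received on ERPB, not routed anywhere"    },
        {  7, "DST_4", "copy of DST_4 (see pair 13,14)"           },
        {  9, "DST_6", "DIN for ERPB FPGA config (FPGA pin 71)"   },
        { 11, "DST_5", "CClk for ERPB FPGA config (FPGA pin 73)"  },
        { 13, "DST_4", "PROG* for ERPB FPGA config (FPGA pin 55)" },
        { 15, "DST_3", ""                                         },
        { 17, "DST_2", "Negative Eta"                             },
        { 19, "DST_1", ""                                         },
        { 21, "DST_0", "Xmit_Clk"                                 },
        { 23, "MTG_8", ""                                         },
        { 25, "MTG_7", ""                                         },
        { 27, "MTG_6", ""                                         },
        { 29, "MTG_5", ""                                         },
        { 31, "MTG_4", ""                                         },
        { 33, "MTG_3", "Two_Tick_Readout (Run 1 Latch/)"          },
        { 35, "MTG_2", "Capture (Run 1 Store/)"                   },
        { 37, "MTG_1", "EM/Total_Bar"                             },
        { 39, "MTG_0", "Input_Clock"                              },
    };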