Planning for Hub Testing
Rev: 20-Sep-2016

1 -- Reminder: List of Hub FPGA (non-physics) functions/bricks
------------------------------------------------------------
This tries to be an inclusive list of all the functions and topics
that will need some FPGA firmware

1.1  assist with ROD power up
1.2  combine slot address and shelf address
     1.2.1 result is provided to the ROD
     1.2.2 result is used internally by the Hub FPGA
1.3  IPbus interface for connection to this Hub's switch
1.4  IPbus interface for connection to the other Hub's switch
1.5  receive and decode GBT data via miniPOD
     This is an MGT receiver
     1.5.1 recover TTC clock
           The output drives 40.08 MHz and (indirectly) 320.64 MHz PLLs
     1.5.2 recover TTC control data
1.6  create Combined Data
     merge the recovered TTC control data with ROD 1 & 2 control info
     to send to each FEX, ROD, and the other HUB
1.7  receive Readout data
     These are MGT receivers
     1.7.1 from all FEXs
     1.7.2 from the Hub FPGA on the other Hub
1.8  send Readout data
     These are MGT transmitters
     1.8.1 to the ROD on this Hub
     1.8.2 to the ROD on the other Hub
1.9  I2C target for access from the IPMC or front panel
1.10 I2C master for control of PMbus
     control and status accessible via IPbus registers
1.11 logic for TBD usage of 3x front panel LEDs
1.12 retrieve miniPOD monitoring data
     control and status accessible via IPbus registers
1.13 MDC/MDIO monitoring of each switch chip
     control and status accessible via IPbus registers
1.14 MDC/MDIO monitoring of each phys chip
     control and status accessible via IPbus registers
1.15 Misc IPbus registers for control
     1.15.1 set spare Hub-ROD control/status signals
     1.15.2 3x I2C buffer enable control signals
     1.15.3 13x fanout equalizer enable control signals
     1.15.4 driving/suppressing 40.08 MHz onto the backplane (i.e. Hub 1 vs Hub 2)
1.16 Misc IPbus registers for status
     1.16.1 sysmon voltages and temperatures
     1.16.2 lock detect from each PLL
     1.16.3 loop detect status from each switch chip
1.17 send Readout data to miniPOD transmitter
     Phase 2 only
1.18 read data from additional miniPOD receiver
     maybe needed in Phase >= 2
1.19 DCS support ?
     Any additional support needed for DCS monitoring?
     Probably already included and accessible via IPbus registers

2 -- Test configurations
------------------------
This is a list of likely hardware test stages. They are sorted by test
site location (MSU, UK, CERN) and in order of increasing complexity.
Each entry below tries to list all the features that CAN be tested
with each combination of hardware.

These stages are not necessarily sequential phases.
  Some may happen in parallel.
  Some may have higher priorities at a given time.
  Some may be set aside for a while (e.g. IPMI).
  Some may be on hold at times, waiting for parts or FW.

There is a lot to test in a limited time. Some compromises, choices
and input from real life will help this evolve into a real plan with
sequential phases.
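Item 1.2 (combining slot and shelf address) and the related MAC/IP generation topic in 3.16 will eventually need an agreed bit layout; nothing is decided yet, so the sketch below uses an invented packing and an invented subnet plan purely to illustrate the kind of logic involved.

```python
# Illustrative sketch only: the real bit layout for the geographic
# address, and the MAC/IP scheme (items 1.2 and 3.16), are still under
# email discussion.  The packing and subnet below are invented.

def geographic_address(shelf: int, slot: int) -> int:
    """Combine shelf and slot into one word (hypothetical 8+4 bit split)."""
    assert 0 <= shelf < 256 and 1 <= slot <= 14   # ATCA slots 1..14
    return (shelf << 4) | slot

def mac_ip_from_geo(shelf: int, slot: int):
    """Derive MAC and IP from shelf/slot (made-up OUI and subnet)."""
    mac = (0x0200 << 32) | (shelf << 8) | slot    # locally administered OUI
    ip = f"192.168.{shelf}.{slot}"                # hypothetical subnet plan
    mac_str = ":".join(f"{(mac >> (8 * i)) & 0xFF:02x}"
                       for i in reversed(range(6)))
    return mac_str, ip

print(geographic_address(3, 12))   # 60 (0x3C)
print(mac_ip_from_geo(3, 12))     # ('02:00:00:00:03:0c', '192.168.3.12')
```

The point is only that both the Hub FPGA and the software side need the same deterministic mapping from (shelf, slot) to addresses, whatever layout is eventually agreed with Ian and Dave.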
2.1 at MSU

2.1.1 HUB SN00 (with no FPGA) on the bench
      The first HUB card assembled without a HUB FPGA will be called Serial Number 0
      close scrutiny
        visual
        ohm meter
      final assembly
        soldering all "one-of" components
        front panel and other mechanical components
      test of power supplies
        sequencing
        PMbus via front panel
        voltage tuning
        current monitoring (practice only)
      oscillators and PLLs
        tracking over LHC frequency range
      Broadcom switch
        boot from PROM content
        test via front panel only
          Chip B to Chip B
          Chip A to Chip B
          Chip C to Chip B
        with backplane male adapter plug(s)
          from front panel to any slot
      FW required
        none for FPGA
        Broadcom switch PROM content needed

2.1.2 HUB SN01 (with FPGA) on the bench
      The first HUB card assembled with a HUB FPGA will be given Serial Number 1
      final assembly
      power supplies
      oscillators
      JTAG
        first connection
        detect Hub FPGA
        load first firmware in FPGA
        load first firmware in PROM
        boot firmware from PROM
      Broadcom switch
        repeat tests as needed
      Firmware needed
        do-nothing-but-safe FW

2.1.3 HUB SN01 in ATCA shelf
      First Hub Firmware required
        need "dummy" IPbus registers for register test
        can be tested via switch,
          or first without switch via port from other Hub and backplane adapter
        do we need something special for sysmon data
          to see voltages and temperatures from both Logic Regions
      Broadcom switch
        test via front panel connectors
        to Hub FPGA
        optionally to backplane female adapter(s)
      MGT communication test
        12x GTH TX to 3x GTH RX via miniPODs and fibers
        need BERT-based test firmware
        need miniPODs
        need PRIZM-to-MTP cables
        need MTP to LC "octopus" cable
        incrementally add more MGT RX and TX
        measure MGT power supply currents
          compare to XPE predictions
      Hub Firmware could also already
        add mastering I2C from IPbus
        add reading miniPOD monitoring data from IPbus
        practice Aurora on VU125

2.1.4 one HUB + one ROD (at MSU, or probably first at Cambridge)
      ROD power up sequence
        Hub Firmware needed
          ROD power management
      IPbus to ROD
        direct from front panel
        looped through
        Hub switch
      ROD JTAG
        load ROD firmware
      I2C
        read ROD I2C targets
        check if I2C is usable while the ROD is off
        check ROD and HUB non-interference
          alternate I2C mastering
      MGT communication Hub to ROD
        2x lanes HUB VU GTH -> ROD V7 GTH
        could also use 12x HUB VU GTH TX to 6x? ROD V7 GTH RX via miniPODs
        IBERT and/or custom tests
      Hub to ROD spare signals
        ROD-controlled LEDs
        ROD-controlled Lemo
      What firmware do we need from the UK?
      What firmware do we need to send to the UK?
      Decide when to send a working HUB to the UK
        Current plan is to do first tests at Cambridge and bring one ROD back

2.1.5 one HUB with IPMC
      Ethernet to IPMC
        direct from front panel
        looped through Hub switch
      I2C multi-mastering
        can we see what the IPMC reads?
      Shelf Address
        Hub Firmware needed
          Geographic address logic

2.1.6 two HUBs
      MGT communication over backplane
        2 lanes GTY to GTY: readout to other Hub
        1 lane GTY to GTY: combined data to other Hub

2.1.7 one HUB + one FEX or FTM (+ROD)
      MGT communication over backplane
        6 lanes from FEX V7 GTH to HUB UltraScale GTH or GTY, depending on FEX slot
        BERT
        Aurora
        walk FEX through 12 slots in full crate or 4 slots in mini crate
      what FW do we need from L1calo?
      what SW do we need from L1calo?

2.2 at Cambridge or UK
    current plan is to have the first meeting of ROD and HUB at Cambridge
    after that, HUB SNxx stays at Cambridge and ROD SNxx comes to MSU

2.2.1 one ROD + one HUB (at MSU and/or Cambridge)
      same as 2.1.4

2.2.2 one ROD + one HUB (+FTM/FEX)
      similar to 2.1.7 one HUB + one FEX or FTM (+ROD)
      what Firmware does L1calo need from us?
        FW to send fixed/simulated combined data (TTC data + ROD info)?
        e.g. send an L1Accept via IPbus register command
      what Software does L1calo need from us?
        to support the FW above
        is it just generic IPbus support, or additional python?
      Only one UK test stand or two?
        Cambridge
        RAL
          will they want 2 HUBs, and when?

2.3 at CERN
    CERN will likely be the only place where we can test TTC input (is that true?)
    and where the highest number of FTMs and FEXs will come together in one single crate

2.3.1 one HUB + one FEX + GBT
      whenever a FELIX test stand is available and HUB Firmware is ready
      TTC clock recovery
      TTC data extraction
      generate Combined Data

2.3.2 one or two HUB+ROD + many FEXs (+ GBT optional?)
      goal is to stress crosstalk and measure fidelity vs line speed
      how many FTM/FEXs and what optimal combinations of slots
      BERT-only or also Aurora (or other official protocol)?
        FEX to ROD
        FEX to Hub
      what else can be tested only when many FEXs are in the same crate?
      Does it need TTC data?
        maybe not if just IBERT
        if yes, use simulated TTC data or real FELIX input?

2.4 Are there other test stands beside MSU, UK, CERN?
    what functionality would they need beside switching
      (should not need FW or even an FPGA)
    synchronous 40.08 MHz clock
    fake L1Accept?
    Currently no plans for other test stands, correct?

3 -- List of Topics needing attention before tests
---------------------------------------------
3.1 IPbus FW for UltraScale
    port to UltraScale
    probably not fully testable on VCU108?

3.2 locate a PC and install the IPbus library
    maybe check with Reiner, Dave Sankey
    Ed and Yuri have initial experience
    what computer do we use, what OS, what software

3.3 I2C software to PMbus via front panel
    Brian has explored some of this
    what computer do we use, what OS, what software
    need a front-panel connector adapter

3.4 RegTest software for IPbus
    rewrite or adapt the VME version from CMX or DZero
    probably simple, depending on IPbus libraries

3.5 Firmware
    do-nothing-but-safe
    add features over time
      IPbus
      ROD power up
      Aurora
      etc.
    Need to write a more detailed road plan

3.6 Switch PROM(s)
    what content
      PROM content generator understood
    need to buy a programmer

3.7 switch test plan
    there are standard tools
      ping, iperf from generic linux
      can use perfSONAR
        needs installation
        measures one-way latency, packet drop, throughput over long periods
        probably most interesting to us is packet drop
    how many ports do we test at once?
      4 reachable via front panel
      13 over backplane plug
      remember: 1 Gb overall, not per port
      expected usage is all communication from backplane to one front port
    what computers?
      Temporary squat of MSU T3
      Pile of retired T2 nodes
      MSU salvage PCs
      $35 Raspberry Pi or similar
    extensive tests on first board
    build test stand for production
      for acceptance test

3.8 TTC LHC clock and TTC data recovery
    porting to UltraScale
    What FW source do we get
      recover LHC clock
      recover control data
    What do we need to adapt
      just port to UltraScale?
    What FW do we need to add
      combined data
      what else?
    Overlap with anybody else in L1calo? gFEX? L1Topo?

3.9 combined data protocol
    keep pushing on Ian and Dave
    Ian may be starting a document.

3.10 FEX RO protocol
     keep pushing on Ian and Dave
     It looks like eFEX and jFEX will be using different protocols and maybe even speeds

3.11 Understand sysmon on UltraScale
     What do we need to know to retrieve information from both logic regions
     (both halves) of the VU125?
     Maybe straightforward, maybe not.

3.12 programming of Flash Memory through the FPGA
     This is done via Vivado and JTAG
       load special firmware to program the NOR Flash through the FPGA
       see XAPP1220
     Afterward, normal HUB configuration at power up is from the flash memory
     using "Master BPI mode"

3.13 strategy for IPbus register address map
     Do we need to start planning how we divide the IPbus address space?

3.14 management of Official Online Hub FPGA Firmware
     We will have several types of Official Hub FPGA Firmware
       Geographic addressing may help, but is not sufficient for
         Hub1 versus Hub2 versus Test crate with a single Hub
         eFEX versus jFEX (and L1Topo)
     Naming strategy
     Versioning strategy
     Repository
     What else?

3.15 management of Test Hub FPGA Firmware
     Many instances will be created over time to support the tests
       Some will be supersets of each other
       Some will be dedicated to some specific acceptance test
     Need a cataloging, documenting and description strategy
     Naming strategy
     Versioning strategy
     Repository
     What else?
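On item 3.13: one possible way to start on the address-map planning, before any real map exists, is to keep the block layout in a single table and machine-check it. The block names and sizes below are invented purely to show the idea; the real map is undecided.

```python
# Sketch for item 3.13: dividing the IPbus address space into blocks.
# Block names, bases and sizes are invented here just to illustrate the
# overlap-checking idea; the real Hub register map is not yet defined.

BLOCKS = {                   # name: (base_address, size_in_words)
    "csr":     (0x0000, 0x100),   # misc control/status (1.15, 1.16)
    "i2c":     (0x0100, 0x040),   # I2C master (1.10)
    "minipod": (0x0200, 0x100),   # miniPOD monitoring (1.12)
    "mdio":    (0x0400, 0x080),   # switch/phy MDC/MDIO (1.13, 1.14)
    "sysmon":  (0x0500, 0x040),   # voltages and temperatures (1.16.1)
}

def check_no_overlap(blocks):
    """Return True if no two address blocks overlap."""
    spans = sorted((base, base + size) for base, size in blocks.values())
    return all(prev_end <= next_start
               for (_, prev_end), (next_start, _) in zip(spans, spans[1:]))

print(check_no_overlap(BLOCKS))   # True for the layout above
```

Keeping the map in one machine-readable place would also let the same table generate both the VHDL address decoder constants and the IPbus XML address file, so the two cannot drift apart.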
3.16 generation of MAC and IP addresses given a slot and crate address
     ask Ian and Dave
       There was an initial proposal
     could hardcode at the beginning

3.17 IPMC PROM
     what content
     what programmer
     when is it needed
       can we completely ignore the IPMC at first?
     Do we know what will be available when?
     Do we know who will work on it?

4 -- misc notes: questions, assumptions, current understanding
------------------------------------------------------
4.1 Aurora
    The Xilinx LogiCORE IP Product Guides are confusing
      PG046: Aurora 8B/10B core using UltraScale architecture GTH transceivers
        The Aurora 8B/10B core configures the Channel Phase Locked Loop (CPLL)
      PG074: Aurora 64B/66B supports only the UltraScale GTY (GTYE3) devices
    http://www.pa.msu.edu/hep/atlas/l1calo/hub/hardware/components/xilinx/Aurora/aurora_8b10b_logicore_ip_pg046_01apr2015.pdf
    http://www.pa.msu.edu/hep/atlas/l1calo/hub/hardware/components/xilinx/Aurora/aurora_64b66b_logicore_ip_pg074_08jun2016.pdf
    8b/10b supported on GTH up to 6.6 Gbps
      maybe hackable to higher line rates by using the QPLL instead of the CPLL
      but we may not ever need that
    8b/10b not supported on GTY
      not officially supported via the GUIs
      but preliminary success modifying intermediate configuration files
    64b/66b not explicitly listed for GTH
      but Vivado seems to support it anyway
    64b/66b supported on GTY, no problem

4.2 what is the functionality needed for Phase 1, Phase 2
    does the Hub FPGA need to see all FEX data for Phase 1?
      (in case GTY 8b/10b Aurora does not work)
    Expected Phase 1 backplane speed seems to be <= 6.4 Gbps
      especially for Aurora and eFEX
    Only jFEX expressed a desire to use a higher rate (9.6 Gbps)
      but not with Aurora anyway

4.3 list of protocols we need definitions for
    FEX->ROD payload format
      may just be the same as GBT
      eFEX still wants to use 6-lane 8b/10b Aurora
      jFEX has 6 independent FPGA sources, using raw 8b/10b
    TTC input protocol
      Probably already defined
      need to find references and study it?
    ROD->HUB back pressure
      Yet undefined
    Combined Data Protocol
      Ian may be working on this
      heard a suggestion that not all FEXs receive the same information
    Geographic Addressing method
      email discussion started
    MAC and IP addressing method
      email discussion started

4.4 line rates
    Current assumptions and unknowns

    Data               Direction   Path                  Phase 1          Phase 2
    -------------------+-----------+---------------------+----------------+-----------------
    FEX Readout         FEX->ROD    Backplane             up to 6.4 Gbps   up to 9.6 Gbps
    HUB Readout         HUB->ROD    Backplane & On-Card   N/A              match FEX->ROD
    TTC input           GBT->HUB    miniPOD Rx            4.8 Gbps         5.xx Gbps rumor ?
    LHC clock           HUB->All    Backplane & On-Card   40.08 MHz        no change
    ROD "Back Press"    ROD->HUB    Backplane & On-Card   ?                no change
    Combined TTC Data   HUB->All    Backplane & On-Card   4.8 Gbps ?       no change ?
    HUB Readout         HUB->FELIX  miniPOD Tx            N/A              4.8 Gbps? more?
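A back-of-envelope helper for planning the BERT runs (2.1.3, 2.3.2) at the line rates above: the standard zero-error confidence-limit estimate says that to bound the BER below some target B at confidence CL, a lane must run error-free for -ln(1 - CL) / B bits (about 3/B at 95%).

```python
# How long must a lane run error-free to claim a given BER bound?
# Standard zero-error confidence-limit estimate:
#   bits_needed = -ln(1 - CL) / target_BER

import math

def bert_seconds(target_ber, line_rate_gbps, confidence=0.95):
    """Seconds of error-free running needed to bound BER at the given confidence."""
    bits_needed = -math.log(1.0 - confidence) / target_ber
    return bits_needed / (line_rate_gbps * 1e9)

# e.g. bounding BER < 1e-12 on one backplane lane:
print(round(bert_seconds(1e-12, 6.4) / 60, 1))   # 7.8 minutes at Phase 1's 6.4 Gbps
print(round(bert_seconds(1e-12, 9.6) / 60, 1))   # 5.2 minutes at Phase 2's 9.6 Gbps
```

This suggests that per-lane smoke tests are a matter of minutes, while the long crosstalk/fidelity soak runs of 2.3.2 (or tighter BER bounds) are what actually consume test-stand time.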