Notes on: VME Bus Adaptors, TCC VME Communication Crate, VME Address
Space Allocation, TCC Access to L2 Crates, and Test Program Issues
(including notes on VBD)

Philippe Laurens, Monday, July 06, 1998

This is meant as a starting point for a note defining Address Space
allocation and VME access to the L2 Processor Crates.

VME Access Path for Production System: Current Topology
=======================================================

1x Trigger Control Computer
   - 1x Bit3 Model 617
   - Use Bit3 Windows NT API

1x VME communication crate
   - 1 slot  Bit3 Model 617
   - 4 slots Vertical Master
   - 2 slots L1 Cal. Trig Interface
   - 21 - 7 = 14 open slots; two approaches
     - up to 7 pairs of VME Bus Adaptor Bit3 Model 412
       + Optical Interface Bit3 Model 400-5
     - up to 14 VME Bus Adaptors Bit3 Model 412 with the corresponding
       14 Optical Interfaces in a separate VME crate

Characteristics of different Bus Adaptors
=========================================

Bit3 Model 617 PCI-VME Bus Adaptor
----------------------------------
Software controlled "Scatter-Gather" Memory Mapping
[the name scatter-gather comes from the ability to make scattered
segments of memory space on the remote bus look like contiguous space
as seen from the local bus]
 - the granularity of the mapped window is 4 kB window segments
 - total of 8192 independent window segments
 - maximum of 32 MB mapped at any one time
 - each segment is programmed with
   - a VMEbus address modifier (i.e. A16, A24 or A32 address space)
   - optional byte and/or word swapping

Supports 8 bit (D8), 16 bit (D16) and 32 bit (D32) data transfers
[access size depends on your variable size: char, short or long].

Supports DMA [not crucial, but may prove useful to grab monitoring
data].

Optional Dual Port Memory on the VME side [but we won't need this
option in the production setup, maybe in a test setup].

Interrupts
 - jumpers can select VME interrupts (IRQ1 to IRQ7) to channel back to
   PCI
 - can generate VME interrupts from software on the PCI side

The VME module uses a minimum of 32 bytes of VMEbus short I/O space.
The base I/O address is set up with jumpers in increments of 256 bytes.

The VME module can be set up to be only a receiver, or allowed to also
be a transmitter.
[we use one-way communication, with the PCI side as transmitter and the
VME communication crate as receiver]

Bit3 Model 983 Windows NT Support Software for Model 617
--------------------------------------------------------
Two parts
 - a Windows NT device driver.
 - an API [Application Program Interface] to interact with the device
   driver.

Bit3 API Functions
 - Open/Close to open a communication channel to the interface
 - Device Status, Reset, ...
 - Direct Read/Write to a target VME address
   - single word or block of data
   - automatically starts a DMA transfer above a selectable data
     transfer size threshold
 - Receive Interrupts
   - register interrupt handling code
   - wait for interrupts
 - Send interrupts to the remote bus
 - Map a segment of remote bus address space to user code virtual
   space.
   - this is where the 4 kB segments are managed, but memory is mapped
     in increments of the system page size (i.e. 64 kB).
   - returns a pointer in user code virtual space pointing to the base
     address of the mapped segment.
   - Two separate programs may map the same region of VME space, but
     the Bit3 API is NOT able to reuse the same mapping resources on
     the Model 617.  Instead, two separate sets of mapping resources
     are allocated, and are programmed to map the same VME address
     space.
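The 4 kB-segment bookkeeping above is worth pinning down before asking the driver for mappings. The following is a sketch only (the macro and function names are ours, not part of the Bit3 API): it computes how many window segments a mapping request consumes and whether it fits the 32 MB concurrent-mapping budget (8192 segments x 4 kB = 32 MB).

```c
#define BT617_SEG_SIZE   0x1000ul              /* 4 kB per window segment    */
#define BT617_SEG_COUNT  8192ul                /* total hardware segments    */
#define BT617_MAP_LIMIT  (32ul * 1024 * 1024)  /* 32 MB mapped at one time   */

/* Number of 4 kB window segments needed to cover [vme_addr, vme_addr+len). */
unsigned long bt617_segments_needed(unsigned long vme_addr, unsigned long len)
{
    unsigned long first = vme_addr & ~(BT617_SEG_SIZE - 1);
    unsigned long last  = (vme_addr + len - 1) & ~(BT617_SEG_SIZE - 1);
    return (last - first) / BT617_SEG_SIZE + 1;
}

/* Would mapping len more bytes exceed the 32 MB concurrent budget? */
int bt617_fits(unsigned long already_mapped, unsigned long len)
{
    return already_mapped + len <= BT617_MAP_LIMIT;
}
```

Note that a request spanning a 4 kB boundary costs an extra segment even when it is shorter than 4 kB.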
Bit3 Model 412 VME-VME Bus Adaptor
----------------------------------
Two access modes:
 - direct mode (window base address and size set up by jumpers)
 - page mode (window base address under software control)
(switching between the two modes is under software control)

Direct mode
 - hardware jumpers on one module select a window of one crate's
   address space that gets mapped onto the remote VMEbus address space
 - the base address and ending address in the local crate are selected
   (with jumpers) in increments of 64 kB
 - neither the base address nor the window size is restricted to
   powers of 2
 - there is no upper limit to the size of the mapped window
 - the window in local VMEbus address space can be set to answer to
   either A24 or A32 address space, but not both
   [we will need the full 32 bits of addresses in the VME communication
   crate]
 - the remote window can be biased (up or down) by forcing upper
   address bits high or low
 - the base address of the mapped window in the local crate must be an
   integral multiple of the window size for biasing to operate
   correctly
   [that is because no arithmetic is done on the address, only bits
   forced high/low]

Page mode
 - allows access to the full A32 (and/or A24) address space of the
   remote crate through a smaller window in the local crate
 - selectable window size from 64 kB to 1 MB (in increasing powers
   of 2)
 - a set of registers on the adaptors must be loaded with the upper
   part of the desired address in the remote crate

Common to both modes
 - D32, D16, and D8 data accesses
 - Under software control, the VME cycles generated in the remote
   chassis can be forced to use any address modifier, i.e. any of the
   A16, A24, or A32 access modes.

Interrupts
 - The cards can be configured to propagate VME interrupts (jumpers
   select one or more of IRQ1 to IRQ7 to propagate) from one backplane
   to the other.
 - The cards can be configured so that interrupts can be generated in
   the remote backplane (a jumper selects IRQ1 or IRQ2) by writing to
   the local card's CSR (in short I/O space).

Dual port RAM
 - The dual port memory module can be placed at either end of the cable
 - Accessing the dual port memory only generates VME cycles in the
   crate it is being accessed from
   [i.e. TCC accessing a dual port module located in the L2 crate does
   not generate VME cycles in the L2 crate]
 - The VME base address of the dual port RAM is set by jumpers
   independently on both cards.

Each VME module uses a minimum of 32 bytes of VMEbus short I/O space in
its chassis.  The base address is set up with jumpers in increments of
256 bytes.

Each module can be set to be a receiver, a transmitter, or both.
[we use only one-way communication, with a transmitter in the VME
communication crate and a receiver in the L2 Crate]

Vertical Interconnect
---------------------
The Vertical Interconnect Module functions as a VMEbus to VMEbus
adaptor.  There must be a Vertical Interconnect Master in the
transmitting crate and a Vertical Interconnect Slave in the receiving
crate.  Up to four Vertical Interconnect Slaves can be connected to
each Vertical Interconnect Master.

The Vertical Interconnect Master Base VMEbus Address is selected with
jumpers in the range 0x00000000-0xFC000000 (i.e. setting A26-A31) by
increments of 0x04000000 (i.e. increments of 64 MB).

The Vertical Interconnect Master Address Window is broken up into a
fixed set of four 16 MB windows, one for each of the four Vertical
Interconnect Slaves.  The upper 64 kB of each window
(0xXXFF0000-0xXXFFFFFF) is translated to short VMEbus I/O accesses
(i.e. A16) in the remote crate (0x0000-0xFFFF).  The rest
(0xXX000000-0xXXFEFFFF) is mapped to standard (i.e. A24) VMEbus
accesses in the remote crate.

The Vertical Interconnect Master only answers to A32 VMEbus cycles.
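The master-window mapping just described reduces to a few bit tests. A C sketch (function names are ours; the hardware performs this decode in logic), taking the offset of an access within the master's 64 MB address window:

```c
/* A25-A24 of the offset select one of the four 16 MB slave windows. */
unsigned vi_slave(unsigned long off)
{
    return (unsigned)((off >> 24) & 3);
}

/* Does the offset fall in the upper 64 kB (0xXXFF0000-0xXXFFFFFF)
 * that becomes an A16 short I/O cycle in the remote crate? */
int vi_is_short_io(unsigned long off)
{
    return (off & 0x00FF0000ul) == 0x00FF0000ul;
}

/* Address generated in the remote crate: A16 (0x0000-0xFFFF) for the
 * upper 64 kB of the window, A24 (0x000000-0xFEFFFF) for the rest. */
unsigned long vi_remote_addr(unsigned long off)
{
    return vi_is_short_io(off) ? (off & 0xFFFFul)
                               : (off & 0x00FFFFFFul);
}
```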
The Vertical Interconnect Master propagates D32, D16 and D8 VMEbus
cycles unchanged to the remote crate.

The Vertical Interconnect Slave generates only A24 VMEbus cycles and
A16 short I/O cycles, and answers only to A16 VMEbus cycles.

Under software control, a Vertical Interconnect Master can cause an
interrupt in any of its attached remote slave crates.  The
documentation does not specify which VME interrupt it is.

Under software control, a Vertical Interconnect Master can also cause a
reset in any of its attached remote slave crates.

Under software control, a Vertical Interconnect Slave can cause an
interrupt (but not a reset) in its attached remote master crate.  The
documentation is not very clear about this.

Using Vertical Interconnect Masters also requires reserving the VMEbus
address space 0xFFFF0000-0xFFFFFFFF in the crate with the V.I. Master
Module(s).

The Vertical Interconnect Slaves use 512 bytes of short I/O space with
a base address selectable by jumpers in increments of 512 bytes.

Motivation for using Bit3 Model 412 for communication to L2 Crates
------------------------------------------------------------------
(instead of Vertical Interconnects)
 - Allows access to the full A32 address space in the remote crate
 - Dual Port Memory option used as a mailbox, avoiding TCC generating
   asynchronous VME cycles in the remote crates, e.g. to access
   monitoring data
 - expected use of interrupts from TCC to the L2 crates, and possibly
   from the L2 crates to TCC.  This is clearly better supported by the
   Bit3 than by the Vertical Interconnects.

Address Space issues:
=====================
The 32 bits of VMEbus address space in the VME communication crate must
be shared among all the customers that must be reached by TCC.

Customer crates accessed by TCC via Vertical Interconnects
 - The Level 1 Framework (about 3 crates)
 - The Level 1 Scalers (about 3 Crates)
 - The Level 2 Framework (about 1 Crate)
 - The Level 2 Scalers (about 1 Crate)
 - The SLIM crate(s)
 - The mirror crate (for SCL Status, ...)
   (to be defined)
 - The DZero Master Clock Crate (to be defined)

Customer crates accessed by TCC via Bit3 Model 412
 - The L2 Global Processor Crate (1 Crate)
 - All L2 Pre-Processor Crates (order of 7 Crates?)
 - Also includes each crate's Dual Port Memory Module

Additional customers under TCC control:
 - The Level 1 Calorimeter Trigger; access to be defined (probably use
   a parallel I/O card)

Problem:
--------
The basic problem is that the L2 Alpha Crates each have a full 32-bit
address space, and we are trying to fit all of them into the single
32-bit address space of the VME Communication Crate.  The A24
backplanes connected by Vertical Interconnects only represent a small
fraction of the overall space to be mapped, which is dominated by the
A32 address space of all the L2 Processor Crates.

Solution:
---------
We need to divide the 32-bit address space of the VME communication
Crate between all customers, and we know we cannot map the WHOLE 32-bit
address space of ANY customer crate at ANY time through Direct Mode
Access.

Non-transparent access to the whole A32 space of any L2 crate can
always be provided, but it involves access to the Model 412 Control and
Status Registers (CSRs) and using Page Mode Access through a smaller
window.

An additional desire is that the LARGEST possible fraction of each
customer L2 Crate A32 Address Space be TRANSPARENTLY MAPPED at ALL
times (using Direct Access Mode).

What do we mean by transparently mapped:

A Level 2 Test Rack with only 1-3 crates does not require the
intermediary of a VME communication crate.  The Test-TCC connected to
the L2 test crates can instead have one or multiple Bit3 Model 617
PCI-VME Bus Adaptor(s) going directly to the target L2 Test Crate(s)
with no Bit3 Model 412 in between.

Note: a PCI based PC will have a maximum of 2-3 open PCI slots.  As
soon as an L2 Test setup has more than 2-3 L2 test crates, it will need
to include the equivalent of a VME communication crate and one Bit3
Model 412 for each L2 test crate.
Transparent access through the Model 412 would mean that the same
software that was developed on the test stand (without a VME
communication crate and Bit3 Model 412) could also run essentially
unchanged on the full system (with its VME communication crate and Bit3
Model 412).  This means that the remote crate address space should be
visible without needing to reprogram the Bit3 Model 412.

Some site dependence cannot be avoided, as the base address of each
crate needs to be selectable with compile-time or run-time choices.
The difference between going through a single or several Model 617s
must also be dealt with during software initialization.

Another difficulty: Address Mapping cannot be limited to software
initialization, due to the bottleneck in the Model 617 where a maximum
of 32 MB of PCI space can be mapped to the 4 GB of the VME
communication crate at any given time.  Accessing the full space
reserved for Direct Access Mode through the Model 412 will require
repeated control and modification of the Model 617 mapping resources
(what we are trying to limit is having to also manage the Model 412
mapping resources).

Individual calls to the Bit3 API that do not use permanent mapping of
PCI space to VME space are not affected by the 32 MB limit of the
mapped window.  Any A32 address in the VME Communication Crate can
always be accessed by a call to the Bit3 API, one VME address (or block
of addresses) at a time.

One should not expect that just any ARBITRARY software developed on a
test stand with a direct PCI to L2 Crate interface will be able to run
unchanged in a production setup with its intermediate VME communication
crate.  But with careful planning of the test software, and management
of all pieces that cannot fit the restrictions, migration can become
easy.

Dividing the VME communication Crate Address Space
--------------------------------------------------
We need to divide the 32-bit address space into 'N' segments.
The segments should be of equal size (assuming all L2 crates have
similar needs) and 'N' should be a power of 2 (to make implementation
easier, and address biasing possible).  N=8 is too small with the
current plans for up to 7 L2 Processor Crates (not including the
L1FW, ...).  N=16 seems an appropriate choice to include all currently
known customers, and provide room for expansion and future customers.

This means we can simply use the upper 4 bits (A28-A31) to select one
of 16 contiguous segments of 256 MB of address space.  One of these
256 MB segments is large enough to cover 4 Vertical Interconnect
Masters (A26-A27) with 4 Vertical Interconnect Slaves each (A24-A25).

 A31-A28 | A27-A26 | A25-A24 |  A23-A21 | Card    | Chip    | Register
 X X X X | V.I.Mst | V.I.Slv |  0  0  0 | Address | Address | Address

 (bit weights run from 2 GB at A31 down to 1 byte at A0; A31-A28
  select the 256 MB segment)

We should thus reserve two of these 256 MB segments for all current and
future customers using Vertical Interconnect access.  We should also be
careful with the upper fraction of the upper 256 MB segment, because it
is (partially) interfering with Vertical Interconnect Control Space.

This leaves 14 segments of 256 MB of contiguous A32 VME space that can
be mapped, each to one of the L2 Processor Crates, for Direct Access
Mode.

For L2 Crates, part of the 256 MB block of VME space in the VME
communication crate should also be reserved for access to the Dual Port
Memory.  Currently the Bit3 Dual Port Memory Modules are limited to a
maximum size of 8 MB.
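The segment selection above is simple bit arithmetic; a two-line C sketch (function names are ours) makes the upper-4-bit convention explicit:

```c
/* A31-A28 select one of 16 contiguous 256 MB segments of the VME
 * communication crate's A32 space. */
unsigned long crate_seg_number(unsigned long a32)
{
    return a32 >> 28;               /* 0..15 */
}

unsigned long crate_seg_base(unsigned segment)
{
    return (unsigned long)segment << 28;   /* segment * 256 MB */
}
```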
We can thus reserve the upper 8 MB (or 16 MB?) of each 256 MB segment,
with the option to expand downward in the future.

This division for Direct Access Mode, and access to the dual port
memory, is fixed with jumpers, and cannot be modified or overridden by
software or on the fly.

VME Communication Crate A32 Address Space Allocation -- Draft

 VME Address range      | Allocation
------------------------+--------------------------------------------------------
 0x00000000-0x0FFFFFFF  | L1 FW, L1 Scalers, L2 FW, L2 Scalers, ...
------------------------+--------------------------------------------------------
 0x10000000-0x1F7FFFFF  | L2 Processor Crate #1 e.g. L2 Global Processor (248 MB)
------------------------+--------------------------------------------------------
 0x1F800000-0x1FFFFFFF  | L2 Processor Crate #1 Dual Port Memory (8 MB)
------------------------+--------------------------------------------------------
 0x20000000-0x2F7FFFFF  | L2 Processor Crate #2 e.g. L2 Calorimeter Pre-Processor
------------------------+--------------------------------------------------------
 0x2F800000-0x2FFFFFFF  | L2 Processor Crate #2 Dual Port Memory
------------------------+--------------------------------------------------------
 0x30000000-0x3F7FFFFF  | L2 Processor Crate #3 e.g. L2 Muon Pre-Processor #1
------------------------+--------------------------------------------------------
 0x3F800000-0x3FFFFFFF  | L2 Processor Crate #3 Dual Port Memory
------------------------+--------------------------------------------------------
 0x40000000-0x4F7FFFFF  | L2 Processor Crate #4 e.g. L2 Muon Pre-Processor #2
------------------------+--------------------------------------------------------
 0x4F800000-0x4FFFFFFF  | L2 Processor Crate #4 Dual Port Memory
------------------------+--------------------------------------------------------
 0x50000000-0x5F7FFFFF  | L2 Processor Crate #5 etc.
------------------------+--------------------------------------------------------
 0x5F800000-0x5FFFFFFF  | L2 Processor Crate #5 Dual Port Memory
------------------------+--------------------------------------------------------
 0x60000000-0x6F7FFFFF  | L2 Processor Crate #6
------------------------+--------------------------------------------------------
 0x6F800000-0x6FFFFFFF  | L2 Processor Crate #6 Dual Port Memory
------------------------+--------------------------------------------------------
 0x70000000-0x7F7FFFFF  | L2 Processor Crate #7
------------------------+--------------------------------------------------------
 0x7F800000-0x7FFFFFFF  | L2 Processor Crate #7 Dual Port Memory
------------------------+--------------------------------------------------------
 0x80000000-0x8F7FFFFF  | L2 Processor Crate #8
------------------------+--------------------------------------------------------
 0x8F800000-0x8FFFFFFF  | L2 Processor Crate #8 Dual Port Memory
------------------------+--------------------------------------------------------
 0x90000000-0x9F7FFFFF  | L2 Processor Crate #9
------------------------+--------------------------------------------------------
 0x9F800000-0x9FFFFFFF  | L2 Processor Crate #9 Dual Port Memory
------------------------+--------------------------------------------------------
 0xA0000000-0xAF7FFFFF  | L2 Processor Crate #10
------------------------+--------------------------------------------------------
 0xAF800000-0xAFFFFFFF  | L2 Processor Crate #10 Dual Port Memory
------------------------+--------------------------------------------------------
 0xB0000000-0xBF7FFFFF  | L2 Processor Crate #11
------------------------+--------------------------------------------------------
 0xBF800000-0xBFFFFFFF  | L2 Processor Crate #11 Dual Port Memory
------------------------+--------------------------------------------------------
 0xC0000000-0xCF7FFFFF  | L2 Processor Crate #12
------------------------+--------------------------------------------------------
 0xCF800000-0xCFFFFFFF  | L2 Processor Crate #12 Dual Port Memory
------------------------+--------------------------------------------------------
 0xD0000000-0xDF7FFFFF  | L2 Processor Crate #13
------------------------+--------------------------------------------------------
 0xDF800000-0xDFFFFFFF  | L2 Processor Crate #13 Dual Port Memory
------------------------+--------------------------------------------------------
 0xE0000000-0xEF7FFFFF  | L2 Processor Crate #14
------------------------+--------------------------------------------------------
 0xEF800000-0xEFFFFFFF  | L2 Processor Crate #14 Dual Port Memory
------------------------+--------------------------------------------------------
 0xF0000000-0xFFFFFFFF  | Reserved for Expansion, Vertical Interconnect Control
------------------------+--------------------------------------------------------

In each remote L2 Crate, the Bit3 Model 412 can be set up (with
jumpers) to bias the 248 MB (256 - 8 MB) segment down to the lower
248 MB block of addresses.

Note that two of these 256 MB segments in the VME communication Crate
could be concatenated to map 512 MB from one particular L2 Processor
Crate.  However, this would make the access to the different crates
non-uniform, and thus make the software in TCC more complex, and
require TCC to have inside knowledge about the particularities of each
L2 customer.

What can be transparently mapped:

Up to 14 L2 Processor Crates can thus be mapped concurrently, where the
lower 248 MB of each Crate's A32 address space is transparently mapped
to a contiguous block of 248 MB in the A32 address space of the VME
communication crate.  This includes A32/D32, A32/D16 and A32/D8
accesses.

A selection of VMEbus interrupts can be propagated from the VME
communication crate to some or all L2 Processor Crates, and generated
by software on TCC.
A selection of VMEbus interrupts can be propagated from each L2
Processor Crate to the VME communication crate, and TCC.

What cannot be transparently mapped:

Access by TCC to the rest of the 4 GB (beyond the lower 248 MB) of the
A32 address space is possible only in Page Access Mode, through a 1 MB
window whose base address is controlled by writing to a register on the
corresponding Bit3 Model 412 in Short I/O space.

Only A32 VMEbus cycles can be mapped for Direct Access Mode; A24 and
A16 accesses need to use Page Mode.

Causing a VMEbus interrupt selectively in individual crates must be
accomplished by writing to a register on the corresponding Bit3 Model
412 in Short I/O space, since there aren't enough VMEbus interrupts to
allocate one VMEbus interrupt per L2 Processor Crate.

Causing a VME SYSRESET in a remote crate involves writing to a register
on the corresponding Bit3 Model 412 in Short I/O space.

VME space allocation in the L2 Alpha Crates
===========================================
Each Level 2 Crate also needs to divide and allocate its A32 address
space among the VME modules in the crate.  This allocation needs to be
done for all address modes: A32, A24 and A16.

Constraints:
 - uniformity: common resource allocation makes it easier
   - for TCC to provide generic services,
   - for the same test code to exercise the common parts of all crates,
   - and for alpha production L2 software to service all crate types
     with minimum site dependency.
 - only the lower 248 MB are visible without manipulating the Bit3
   Model 412 Control Registers.

Information to be gathered:
 - a list of all types of VME masters and VME slaves in each L2 Crate
   - Bit3 Model 412
   - MBT -- #1, #2
   - VBD
   - Alpha Processor -- Administrator of each type
   - Alpha Processor -- Worker of each type
   - SLIC -- different types?
   - ...
 - for each object, a list of its requirements for A32, A24, A16
   address space, and its constraints for address setup, via jumpers or
   software control.
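The two Model 412 access mechanisms discussed earlier amount to simple address arithmetic, sketched below in C. The names are ours, not Bit3's, and the packing of the page register is an assumption for illustration (the real register layout is in the Model 412 manual): direct-mode biasing forces upper address bits high/low rather than adding an offset (which is exactly why the window base must be an integral multiple of the window size), and Page Access Mode splits a remote A32 address into a page-register value plus an offset within the 1 MB window.

```c
#define M412_PAGE_SIZE 0x100000ul   /* assume the 1 MB page-mode window */

/* Direct mode: remote address produced by bit-forcing, for a local
 * address inside a window based at window_base and biased to
 * remote_base (both bases multiples of the window size). */
unsigned long m412_direct(unsigned long local, unsigned long window_base,
                          unsigned long remote_base)
{
    /* clear the bits forced low (window base), set the bits forced
       high (remote base); no addition is performed */
    return (local & ~window_base) | remote_base;
}

/* Page mode: upper bits to load into the page register (hypothetical
 * packing), and the offset within the 1 MB window. */
unsigned long m412_page_reg(unsigned long remote) { return remote >> 20; }
unsigned long m412_page_off(unsigned long remote) { return remote & (M412_PAGE_SIZE - 1); }
```

The first function also shows the L2-crate biasing of the draft allocation: a 248 MB window at, say, 0x10000000 in the communication crate biased down to address 0 in the remote crate.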
VBD notes:
----------
 - The VBD uses 8 kB of A16 I/O space.  The VBD answers to A16 only; it
   does not use A24 or A32 address space.  The VBD base address in A16
   I/O address space is set with jumpers in increments of 8 kB.
 - A list of pointers to Word Counts and a list of pointers to the
   start of the corresponding data must be written into the VBD using
   A16/D16 VME cycles.
 - Changing either list is not sufficient to change the VBD actions, as
   all parameters are only taken into account after a reset...
 - The VBD reads the word counts using the pointers from its list, and
   using A16/D16 VME cycles (while the documentation hints that it may
   be able to do A32 in some configuration -- not proven to work)
 - The VBD reads the event data using A24/D32 VME cycles (while the
   documentation suggests you can generate A32/D32 transfers, with
   extra limitations -- not proven to work)

Consequences:

Each Alpha processor will thus need to use one of its 4 Universe
Interface memory mapping windows to present the Word Count data to the
VBD in A16 address space, and another window to present the event data
to the VBD in A24 address space.

The Administrator will also need to manage (juggle?) the Universe
Interface memory mapping on its own card AND on the worker card(s) so
that they always present the Word Counts at fixed addresses in A16
address space and the event data sections at fixed addresses in A24
address space.

L2 Crate A32 address space allocation -- Toy Model, just as a starting point
-------------------------------------
Simply divide the lower 256 MB of the L2 Crate A32 address space into
16 segments of 16 MB each.  Ideally, ALL relevant VME resources
accessible from all cards would be mapped in this space.  The
difference between a Test Stand and the full system would then be at
its minimum.

How much A32 total address space is used by an Alpha Administrator, or
an Alpha Worker?  How much of this space should be accessible via
Direct Access Mode, and how much via Page Access Mode?
How much A32 address space is used by a SLIC?  How much of this total
space needs to be direct mapped?

***Draft*** Typical Level 2 Crate VME A32 Address Space Allocation

 VME A32 Address range  | Usage
------------------------+--------------------------------------------------------
 0x00000000-0x00FFFFFF  | Segment #0 Direct Mapped from TCC -- e.g. MBT #1, MBT #2
------------------------+--------------------------------------------------------
 0x01000000-0x01FFFFFF  | Direct Mapped Segment #1  -- e.g. Administrator
------------------------+--------------------------------------------------------
 0x02000000-0x02FFFFFF  | Direct Mapped Segment #2  -- e.g. Worker
------------------------+--------------------------------------------------------
 0x03000000-0x03FFFFFF  | Direct Mapped Segment #3  -- e.g. SLIC #1 or Worker #2
------------------------+--------------------------------------------------------
 0x04000000-0x04FFFFFF  | Direct Mapped Segment #4  -- e.g. SLIC #2 or Worker #3
------------------------+--------------------------------------------------------
 0x05000000-0x05FFFFFF  | Direct Mapped Segment #5  -- e.g. SLIC #3 or Worker #4
------------------------+--------------------------------------------------------
 0x06000000-0x06FFFFFF  | Direct Mapped Segment #6  -- e.g. SLIC #4
------------------------+--------------------------------------------------------
 0x07000000-0x07FFFFFF  | Direct Mapped Segment #7  -- e.g. SLIC #5
------------------------+--------------------------------------------------------
 0x08000000-0x08FFFFFF  | Direct Mapped Segment #8  -- e.g. SLIC #6
------------------------+--------------------------------------------------------
 0x09000000-0x09FFFFFF  | Direct Mapped Segment #9  -- e.g. SLIC #7
------------------------+--------------------------------------------------------
 0x0A000000-0x0AFFFFFF  | Direct Mapped Segment #10 -- e.g. SLIC #8
------------------------+--------------------------------------------------------
 0x0B000000-0x0BFFFFFF  | Direct Mapped Segment #11 -- e.g. SLIC #9
------------------------+--------------------------------------------------------
 0x0C000000-0x0CFFFFFF  | Direct Mapped Segment #12 -- e.g. SLIC #10
------------------------+--------------------------------------------------------
 0x0D000000-0x0DFFFFFF  | Direct Mapped Segment #13 -- e.g. SLIC #11
------------------------+--------------------------------------------------------
 0x0E000000-0x0EFFFFFF  | Direct Mapped Segment #14 -- e.g. SLIC #12
------------------------+--------------------------------------------------------
 0x0F000000-0x0FFFFFFF  | Dual Port Memory Module
                        | This VME segment MUST NOT be accessed in the
                        | L2 Crate by TCC; it is accessed directly via
                        | its address in the VME communication crate
                        | instead.
------------------------+--------------------------------------------------------
 0x10000000-0x1FFFFFFF  | Page Access Mode Only from TCC -- e.g. Administrator
------------------------+--------------------------------------------------------
 0x20000000-0x2FFFFFFF  | Page Access Mode Only -- e.g. Worker
------------------------+--------------------------------------------------------
 0x30000000-0x3FFFFFFF  | Page Access Mode Only -- e.g. SLIC #1 or Worker #2
------------------------+--------------------------------------------------------
 0x40000000-0x4FFFFFFF  | Page Access Mode Only -- e.g. SLIC #2 or Worker #3
------------------------+--------------------------------------------------------
 0x50000000-0x5FFFFFFF  | Page Access Mode Only -- e.g. SLIC #3
------------------------+--------------------------------------------------------
 0x60000000-0x6FFFFFFF  | Page Access Mode Only -- e.g. SLIC #4
------------------------+--------------------------------------------------------
 0x70000000-0x7FFFFFFF  | Page Access Mode Only -- e.g. SLIC #5
------------------------+--------------------------------------------------------
 0x80000000-0x8FFFFFFF  | Page Access Mode Only -- e.g. SLIC #6
------------------------+--------------------------------------------------------
 0x90000000-0x9FFFFFFF  | Page Access Mode Only -- e.g. SLIC #7
------------------------+--------------------------------------------------------
 0xA0000000-0xAFFFFFFF  | Page Access Mode Only -- e.g. SLIC #8
------------------------+--------------------------------------------------------
 0xB0000000-0xBFFFFFFF  | Page Access Mode Only -- e.g. SLIC #9
------------------------+--------------------------------------------------------
 0xC0000000-0xCFFFFFFF  | Page Access Mode Only -- e.g. SLIC #10
------------------------+--------------------------------------------------------
 0xD0000000-0xDFFFFFFF  | Page Access Mode Only -- e.g. SLIC #11
------------------------+--------------------------------------------------------
 0xE0000000-0xEFFFFFFF  | Page Access Mode Only -- e.g. SLIC #12
------------------------+--------------------------------------------------------
 0xF0000000-0xFFFFFFFF  | Unclaimed.
------------------------+--------------------------------------------------------

Note that the Dual Port Memory could be mapped anywhere above the
248 MB of Direct Mapped Space.  It is placed at the same relative
location in the remote crates as in the VME communication crate for
reasons of simplicity.

This plan also needs to be pushed into further detail to map event
buffers in alpha cards, MBT resources, ...  We will also need a
detailed plan for Dual Port Memory space allocation.  We also need a
similar plan regarding the allocation of A24 and A16 VME addresses.
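The toy-model division above can be captured in two helper functions (a sketch; function names are ours):

```c
/* Toy-model L2 crate map: the lower 256 MB is divided into 16
 * direct-mapped segments of 16 MB, and each segment k >= 1 has a
 * matching 256 MB page-access-only region at k * 0x10000000. */
unsigned long l2_direct_base(unsigned k)
{
    return (unsigned long)k * 0x01000000ul;   /* 16 MB per segment */
}

unsigned long l2_page_base(unsigned k)
{
    return (unsigned long)k * 0x10000000ul;   /* 256 MB per region */
}
```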
Issues regarding Test Programs, Test Crates, and Production Setup
=================================================================

What is being tested: lots of DIFFERENT things
--------------------
Development/commissioning:
   debugging prototypes
   commissioning processor cards (SLIC, alpha)
   commissioning MBT
   somebody's VME interface, MBT interface, DMA engine
   cross-alpha-processor communication
   event access, buffer management
   event readout
   VBD control software
   remotely "trigger node"
   link to COOR via TCC: reboot, initialize, programming
   interface to L1+L2 (SCL from L1FW, answer to L2FW)
   Multi Worker system
   Pre-Processor -> Global communication
   monitoring data path
   host monitoring programs
   alarm message path
   error message path
   SCL initialize
   ...lots more

Commissioning/running:
   commissioning new Admin Frame Code, Worker Frame Code
   commissioning new Filter Code
   testing new trigger list
   playback of real events, MC events
   "shadow" node?
   diagnosing online errors, crashes, ...
   between-run test/exerciser/diagnostics
   ...lots more

Where can a test program run... several places
-----------------------------
   on an alpha
   on a SLIC
   on the host PC with the Bit3 Model 617, accessing VME
      - via the fusion interface
      - via the Trigger Control Software (TRICS)
   on TCC (if NT)
   on a remote host talking to the host PC
      - via the fusion interface, and fusion server on TCC
      - via the Trigger Control Software (TRICS) on TCC

Probably need a combination of one/several processors and a remote
program to provide outside control or stimulus.

cf. http://www.pa.msu.edu:80/hep/d0/ftp/tcc/l2_services/l2_services.txt
for notes about generic services that may be provided by TCC for L2
access.

Issues regarding test programs on a Test Setup
----------------------------------------------
 - Bottleneck through the Model 617 PCI-VME interface
 - I/O throughput depends on access method
   - fastest for directly mapped VME addresses
   - medium for calls to Bit3 API VME accesses
   - medium (guessing!)
     for local fusion access
   - slow for remote host access via remote fusion access
   - slow for remote access via TRICS
 - sharing access to multiple test crates
 - sharing Model 617 resources
 - sharing screen/mouse/keyboard

On Production Setup
-------------------
 - same issues as for the Test Setup, plus:
 - additional Model 412 VME-VME Adaptor in the path to each L2 Crate
 - TCC screen location
   - up high in the Moving Counting House
 - mouse/keyboard access
   - only one mouse/keyboard, mostly out of reach
 - remote access from
   - near one of the L2 crates, on a different floor
   - control room, someone's DZero office
   - home institution
   - home modem
 - non-TRICS access only allowed between runs
   - during a run, TRICS access only via dual port memory
     - except "in case of emergency", and "on demand from expert"
   - every piece of monitoring/control information must go through the
     DPM
   - between runs TCC may be asked to access an L2 Crate's VME address
     space

Plans for L1 test software
--------------------------
Test programs to debug/exercise the L1FW have to face similar issues.
The model to satisfy remote access requirements and migrate L1FW
in-house test programs to the production system is the following.

test system at MSU
 - run a dialog box based Visual C++ interface for input parameters to
   the test program
 - display results back to the dialog box, and/or an additional local
   console window

production system
 - use a remote PC
 - modified version of the above dialog box based Visual C++ interface
 - use the ACE based DZero client-server online software
   - package all dialog box requests into messages sent to TCC
   - receive log screen information for the remote console
 - TCC parses/executes the requests and does the VME activity

The key is to separate all operator interface activity from all program
actions.  This software segmentation is hard to retrofit, and this
approach must be planned into the test software right from the start of
software design.
We will insert the DZero Client-Server communication package (once it
becomes available, late summer - early fall) with an ASCII based
message protocol.  We are also planning for a line-mode,
home-modem-friendly method to send the same ASCII messages to TCC, and
look at logfiles.

Note also that besides the fancy graphics monitoring programs that will
be developed for run monitoring, we will also have a simpler ASCII
based host program to view all basic monitoring information (similar to
the Run I TRGMON) that is also a good remote diagnostics tool.
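The separation of operator interface from program actions comes down to TCC parsing plain text commands. A minimal sketch of the idea, in C; the command names here are invented for illustration and not part of any defined protocol:

```c
#include <string.h>

/* Hypothetical command set: the dialog box (local or remote) only
 * builds ASCII lines like "RESET_CRATE 3"; TCC parses them and
 * performs the corresponding VME activity. */
typedef enum { CMD_UNKNOWN, CMD_RESET_CRATE, CMD_READ_DPM } cmd_t;

cmd_t parse_cmd(const char *line)
{
    if (strncmp(line, "RESET_CRATE", 11) == 0) return CMD_RESET_CRATE;
    if (strncmp(line, "READ_DPM", 8) == 0)     return CMD_READ_DPM;
    return CMD_UNKNOWN;
}
```

Because the interface side only emits text, the same commands can arrive from the Visual C++ dialog box, the client-server package, or a line-mode modem session without changing the TCC side.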