GOLD – Users Requirements
Uli, Mainz U. 12 April 2011
Prologue
Generally a requirements document would be written by prospective module users. Since so far the GOLD development has been driven by "the designers", it was decided to present a skeleton of a requirements document, to invite "the users" to comment on requirements before being asked to check and sign off specifications. The main subject of the upcoming review should probably be the XC6VLXT sub-system of the GOLD, though some information on the HXT devices is supplied as well; ignore it if you do not intend to require HXT functionality on GOLD 1.0. Please comment on missing or unacceptable requirements.
Change log
- Re-phrase requirement 17
- Some clarifications to reflect Sam’s comments
- Use cases 4, 6, 7 expanded at the request of the reviewers
The GOLD is a demonstrator module for technologies to be used in the upgrade of the Level-1 Calorimeter Trigger. Conceptually it is a fibre-optical many-input, few-output "data concentrator", an architectural element to be found in several places of a future Level-1 Trigger. In particular, the GOLD serves as a demonstrator for the Topological Processor and therefore provides additional electrical and optical interfaces. To allow for maximum flexibility in current and future tests of electronics components and concepts for the L1Calo upgrade, the GOLD is designed as a modular system.
The GOLD main board carries the programmable logic devices, part of the external interfaces, general module infrastructure, and power regulators. Electro-optical converters for incoming data, as well as the electrical fan-out, are located on an opto mezzanine module. Due to the rather moderate cost of the optical input module, the production of individually routed, algorithm-specific mezzanines is envisaged. Clock circuitry is concentrated on a separate mezzanine board, and a small mezzanine is dedicated to pre-configuration access to the GOLD. For the intended use of the module as a "demonstrator for everything", the mezzanine concept is in fact a requirement, and an appropriate partitioning of resources across the modules is important.
In an attempt to build a common test bench for two Virtex-6 product lines (LXT/HXT), the GOLD is essentially composed of two separate circuits, one per component type. HXT devices are not yet available on the market, and therefore all current documentation focuses on the XC6VLXT. The HXT components might be mounted on the module at a later date, once the designers are convinced that the current design supports the new chip type. The Multi-Gigabit Transceivers (MGTs) of the LXT devices support data rates of up to 6.5 Gb/s; higher data rates can be handled on the GOLD only once HXT devices are mounted. Both the LXT and the HXT sub-systems are connected to the input mezzanine with two 400-pin FMC-style mezzanine connectors each (www.vita.com/fmc.html).
For the intended use of future modules, low processing latency on the real-time data path (RTDP) is crucial. The GOLD is designed for minimum latency data transmission and processing throughout the module.
GOLD is envisaged for multiple use cases:
1. Demonstrator for AdvancedTCA module concept
1.1. Module form factor
1.2. Power supply concept
1.3. IPMB monitoring and control
1.4. Module control via serial protocol
1.5. Backplane transmission at 10 Gb/s
1.6. Rear transition module concept
2. Demonstrator for Virtex-6 technologies
3. Demonstrator for optical backplane connection
4. Test bench for topological algorithms
4.1. Demonstrate upgraded L1Calo data processing in a single TP module in a single FPGA
4.2. Demonstrate upgraded L1Calo data processing in multiple TP modules (in single FPGAs)
5. Test bench for future LHC clock distribution schemes
6. Optical data sink for L1Calo modules
6.1. Demonstrate suitable parallel fibre modules and transceiver chipsets
6.2. Demonstrate data synchronization from multiple data sources
7. Optical data source for purpose of self-test and stand-alone link tests
7.1. Demonstrate electrical data replication in FPGA and optical distribution
7.2. Demonstrate passive optical data regrouping
8. Electrical data source/sink for tests with various electronics modules (limited bandwidth only)
9. Demonstrator for any other aspects of a topological processor not covered above
This section describes the external requirements on the GOLD, as derived from the use cases above. A list of detailed functional requirements, as well as rules for module design, can be found in the GOLD module specifications.
Since the GOLD is meant to be a technology demonstrator, there are no strict external requirements on link count, processor performance, and similar quantities. In such cases the requirements list below quotes numbers that are considered achievable with the current design. There is probably no point in deliberately lowering these figures in the course of the review and design process; however, if a use case can be made for additional resources, it would make sense to adjust the requirements list accordingly.
The requirements on the GOLD are as follows:
1. Compliance with ATCA specifications (also, see section 3)
1.1. Module form factor and mechanics (height, depth, zone-1 and zone-2 connectors)
1.2. Power supply concept: redundancy of -48V supplies
1.3. Signals wired to daughter socket for later implementation of ATCA specific functionality
1.3.1. IPMB-A only
2. L1Calo specific service signals within ATCA zone 2, wired to daughter socket for later use
2.1. Four signal pairs that might be used for module control connectivity (Ethernet)
2.2. One signal pair for pre-configuration access (currently USB)
2.3. One clock pair (TTC/GBT/…)
3. ATCA zone 2 connectors wired with multiple MGT links, so as to enable backplane tests up to 10 Gb/s
4. Fibre-optical input from the rear transition module
4.1. Five fibre ribbons, 12-72 fibres each
4.2. Optical blind-mate backplane connectors
5. Input signal o/e conversion and electrical replication on mezzanine module
5.1. Twelve 12-channel o/e receivers, suitable for 10 Gb/s
5.2. Signal duplication at CML level, suitable for 10 Gb/s
5.3. Any required AC-coupling or other signal conditioning
5.4. FMC connectors (SAMTEC SEAM) for differential RTDP signals, controls, power supply
6. A total of four FMC sockets (SAMTEC SEAF, identical pin-out each) on the main board for stack-up of the input mezzanine module(s)
6.1. Two sockets for connection to MGTs of the LXT sub-system
6.2. Two sockets for connection to MGTs of the HXT sub-system
7. For all FPGAs on the GOLD, allow for a choice of footprint-compatible devices. Numbers given further down in the documentation refer to the maximum-size device, though initially the cheapest device will be mounted
8. FPGA sub-system (LXT devices) limited to two devices in depth (input processor, main processor), for reasons of latency
9. For on-board RTDP links, low-latency parallel differential connectivity must be used, with a minimum of 200 Gb/s aggregate bandwidth from the input FPGAs into the main processor (a worked link-budget sketch follows this list)
10. FPGA sub-system (LXT devices) to provide an electrical MGT input capacity of 144 links from the input mezzanine
11. LXT sub-system to provide 12 optical outputs (from main processor MGTs to front panel)
12. LXT sub-system to provide 12 differential electrical outputs (low latency, from main processor to front panel, LVDS signal level)
13. HXT sub-system capable of 144 MGT electrical inputs from input mezzanine
14. HXT sub-system capable of 12 MGT optical outputs to front panel
15. HXT sub-system capable of 144 MGT electrical inputs from backplane zone 2
16. HXT sub-system capable of 40 MGT electrical outputs to backplane zone 2
17. One common MGT clock tree to all FPGAs, one additional tree to the main processor MGTs, and one additional tree to the 10 Gb/s sub-system
18. Two independent GCK (fabric) clock trees connecting to all FPGAs
19. Clock trees to be driven from clock mezzanine module, providing space and connectivity for
19.1. LHC clock signal reception (TTC/GBT)
19.2. LHC clock and data recovery
19.3. LHC clock jitter clean-up
19.4. Local crystal clocks
20. Build the clock distribution such that the system can be operated from 40.08 MHz LHC clock or 40.08 MHz crystal clock
21. Provide board-level control connectivity to all FPGAs
22. Configure FPGAs via SystemACE
23. External connectivity for control and other non-RTDP use
23.1. Provide one outgoing SFP optical link each for DAQ and ROI connectivity
23.2. Provide a separate, local 40.00 MHz crystal clock for DAQ and ROI use only
23.3. Provide one bidirectional SFP optical link for module control (serialised VME or 1000BASE-SX), directly into main processor FPGA.
23.4. Provide an optional control port (1000BASE-T) via the clock module circuitry and a zone-2 connector
23.5. Provide USB/JTAG port to the FPGAs for pre-configuration access and configuration download. Optional connection to either front panel or backplane
24. Standards of external connectivity on real-time data path
24.1. Run all optical RTDP links via standard MTP/MPO ribbon fibre connection (12-72 fibres each)
24.2. Wire all electrical MGT signals directly to the respective connectors. Links are unbuffered and DC-coupled on the GOLD, and may require external AC coupling and further signal conditioning
24.3. Run low-latency electrical RTDP links via a daughter module, to allow for a choice of suitable connector style at a later date
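The figures above can be cross-checked with a small worked example. The following sketch (Python, for illustration only) tallies the aggregate real-time bandwidth implied by requirements 9, 10/13 and 11/14, using the 6.5 Gb/s LXT MGT limit and the 10 Gb/s HXT target rate quoted in the introduction; it is not itself a requirement.

    # Link-budget sketch for the GOLD real-time data path.
    # Link counts are taken from the requirements list above; the per-link
    # rates (6.5 Gb/s LXT limit, 10 Gb/s HXT target) from the introduction.
    MGT_INPUTS      = 144    # reqs. 10/13: electrical MGT inputs from the mezzanine
    LXT_RATE_GBPS   = 6.5    # Virtex-6 LXT MGT maximum line rate
    HXT_RATE_GBPS   = 10.0   # target rate once HXT devices are mounted
    INTER_FPGA_GBPS = 200.0  # req. 9: input FPGAs -> main processor, aggregate
    MGT_OUTPUTS     = 12     # reqs. 11/14: optical outputs to the front panel

    print(f"LXT input capacity : {MGT_INPUTS * LXT_RATE_GBPS:6.1f} Gb/s")   # 936.0
    print(f"HXT input capacity : {MGT_INPUTS * HXT_RATE_GBPS:6.1f} Gb/s")   # 1440.0
    print(f"inter-FPGA path    : {INTER_FPGA_GBPS:6.1f} Gb/s, "
          f"i.e. ~{MGT_INPUTS * LXT_RATE_GBPS / INTER_FPGA_GBPS:.1f}:1 concentration")
    print(f"optical output     : {MGT_OUTPUTS * LXT_RATE_GBPS:6.1f} Gb/s at the LXT rate")

The roughly 5:1 concentration from the MGT inputs into the parallel inter-FPGA path is the reason requirement 9 insists on the full 200 Gb/s aggregate bandwidth.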
3 Note on ATCA compliance and other issues
The requirements specify some level of compliance with the AdvancedTCA specifications. It should be noted, however, that full compliance is neither required nor possible: neither the authors of this document nor the module designers have access to the full ATCA specifications. Any specific requirements regarding ATCA compliance would therefore have to be explicitly included in the GOLD module specifications.
The GOLD main board carries several mezzanine modules. There is no strict requirement on mezzanine module sizes and connector types, but it was decided to use industry-standard FMC connectors where possible. ATCA-typical AMC modules were not considered, due to insufficient aggregate bandwidth.
Note on ATCA compliance:
1. Module may be up to 3 slots wide, with a single row of backplane connectors
2. The main board thickness is currently assumed to be 2 mm, which might be non-compliant, though the specifications are not known.
3. The ATCA specifications are not entirely clear about the presence and use of the base interface. It is understood that an ATCA-compliant backplane does in fact require dual-star wiring for the base interface. However, for an ATCA module to claim compliance, it does not seem to be required to make use of the base interface. For the GOLD the conclusion has been that there is probably no point in trying to make the zone-2 connectivity comply with any rule. If anybody ever planned to use the GOLD on a standard dual-star wired backplane, the corresponding requirements would have to be stated here, and the respective design data would have to be supplied.
4. Some lines that will probably be required for module services and clocking have been allocated in zone 2. Further L1Calo-specific reserved lines could still be added, and their positions within zone 2 defined, if requested now. If no requirements regarding the placement of those lines are added here, any lines defined later will be located wherever they happen to fit!
Note on FMC pinout:
5. The FMC standard is defined in VITA 57. The VITA specifications are not available to the module designers, and use of GOLD sub-systems with VITA 57 compliant hardware is not anticipated. In particular, the allocation of power pins on the 400-pin SEAM/SEAF connector will preclude the use of VITA-compliant components. If VITA compliance were required, the specifications would need to be supplied.
Note on clock issues:
6. Due to limitations on MGT reference clock accuracy, no attempt will be made to make the real-time data path fit for operation at 40.00 MHz and multiples thereof. Use of 40.08 MHz reference clocks is required, derived either from the LHC bunch clock (TTC) or from local crystal clocks.
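To put a number on this: the LHC bunch clock sits near 40.079 MHz (3564 bunch slots at a revolution frequency of about 11.245 kHz), so a 40.00 MHz crystal is roughly 2000 ppm away from it, far outside typical MGT reference clock tolerances. A minimal sketch of the arithmetic (the +-100 ppm tolerance below is a typical vendor figure, quoted as an assumption, not a GOLD number):

    # Frequency offset between a 40.00 MHz crystal and the LHC bunch clock.
    LHC_BUNCH_CLOCK_MHZ = 3564 * 11.2455e-3  # 3564 bunch slots x ~11.2455 kHz revolution
    CRYSTAL_MHZ         = 40.00
    MGT_TOLERANCE_PPM   = 100                # typical vendor figure (assumption)

    offset_ppm = (LHC_BUNCH_CLOCK_MHZ - CRYSTAL_MHZ) / CRYSTAL_MHZ * 1e6
    print(f"offset: {offset_ppm:.0f} ppm vs. a +-{MGT_TOLERANCE_PPM} ppm tolerance")
    # -> ~1970 ppm, i.e. about 20x beyond the assumed tolerance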
Note on DAQ/ROI issues:
7. It is assumed that for a demonstrator module it is sufficient to provide one DAQ and one ROI output. The respective lines would have to be driven with G-Link compatible signals. If experts are confident that Virtex-6 MGTs are capable of generating these signals, the connection of these lines to MGTs could be put into the requirements.
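For orientation, a rough estimate of the line rate such a G-Link compatible output would need, assuming the standard HDMP-103x CIMT framing of 16 data bits plus 4 coding bits per word (this framing is an assumption of the sketch, not a statement from the requirements):

    # Line rate needed for a G-Link (CIMT) compatible DAQ/ROI output.
    FRAME_BITS    = 16 + 4  # 16 data + 4 CIMT coding bits (assumed HDMP-103x 16-bit mode)
    WORD_RATE_MHZ = 40.00   # local DAQ/ROI crystal clock, cf. requirement 23.2

    print(f"required line rate: {FRAME_BITS * WORD_RATE_MHZ:.1f} Mb/s")  # 800.0
    # Well within the Virtex-6 MGT range; the open question is whether the
    # MGTs can generate the CIMT framing itself.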
Note on low latency data paths:
8. Low-latency parallel I/O via differential data paths into the fabric of available FPGA devices is rather limited in aggregate bandwidth. With four input FPGAs driving a single main processor, plus control lines and the parallel differential output from the main processor, priorities have to be set. The current scheme foresees a maximum of just 12 low-latency pairs to the sink (e.g. the CTP), routed via the clock mezzanine so that the actual use can be sorted out on the rather cheap mezzanine module. This maximises the bandwidth from the four input processors into the main processor (see requirement 9).
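A rough pair-count estimate shows the pressure on the pin budget. The per-pair rate below is an assumption (Virtex-6 LVDS fabric I/O is commonly run at around 1 Gb/s per pair; the achievable figure depends on the I/O standard and serialisation factor):

    # Pin-budget estimate for the low-latency parallel paths (cf. requirement 9).
    AGGREGATE_GBPS = 200.0  # req. 9: input FPGAs -> main processor, aggregate
    INPUT_FPGAS    = 4
    PAIR_RATE_GBPS = 1.0    # assumed LVDS rate per differential pair

    pairs_total    = AGGREGATE_GBPS / PAIR_RATE_GBPS
    pairs_per_fpga = pairs_total / INPUT_FPGAS
    print(f"{pairs_total:.0f} pairs in total, {pairs_per_fpga:.0f} per input FPGA")
    # -> 200 pairs (400 pins) on the main processor for the inputs alone,
    #    which is why only 12 low-latency pairs remain for the CTP path.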
Note on pre-configuration access:
9. Some kind of backplane access to the modules, even when the FPGAs are unconfigured and therefore unable to respond to the standard board control protocol (currently Ethernet), is considered mandatory for any future L1Calo module. USB has been chosen for the GOLD, since no custom software is required. Additional USB access via the front panel is available as well.