L1Topo – User Requirements

 

Uli, Mainz U., 16 May 2012

 

 

Prologue

This write-up summarises the requirements on external interfaces and processing capacity, and some major design constraints, for the ATLAS Level-1 topology trigger module. While the document mainly describes what the first prototype should look like, the prototype design is expected to be very close to the final production modules; any differences between prototype and production modules are flagged.


Change log


- 2012-05-16   version 1.0

- 2012-04-24   initial version 0.9


Table of Contents

1       L1Topo – Concept and Use

2       Requirements

3       Note on ATCA compliance and other issues


1      L1Topo – Concept and Use

The “L1Topo” topology processor is a module that combines topological information supplied by the cluster and jet/energy processors, the muon trigger, and the Phase-1 feature extractors, runs topological algorithms on it, and forwards the results to the Central Trigger Processor. Initially, after installation and commissioning in 2013/14, the module will run topological algorithms on cluster and jet information only, fed into L1Topo via the newly built “CMX” merger modules installed in the current digital processors. This information will be complemented by a small data volume derived from the existing muon octant signals.

 

The topology processor is meant to consist of one or more identical AdvancedTCA modules installed in a standard ATCA shelf. The level of compliance with the ATCA specifications is still under discussion. The name L1Topo is used both for the individual topology processor module and for the processor as a whole, consisting of at least one L1Topo module in an ATCA crate.

 

Technologies to be used in the L1Topo design have been, and will continue to be, explored on the “GOLD”, a modular processor system with near-final L1Topo capabilities, though based on an earlier FPGA generation. Unlike the entirely mezzanine-based GOLD, L1Topo will be an almost monolithic module, optimised for short electrical trace lengths and maximum design density. It is built such that a maximum of data can be routed into a very confined space, so as to simplify and speed up algorithms that use data from the full solid angle of the detectors.

 

The L1Topo module will be dominated by Virtex-7 processors and multi-gigabit optical links on the real-time path. Ancillary components will deal with module control, environmental monitoring, FPGA configuration, and the DAQ and ROI links.

 

For L1Topo, processing latency on the real-time path is crucial. The module is therefore designed for minimum-latency data transmission and processing throughout.

 

 

2      Requirements

This section mainly describes the external requirements on L1Topo. A list of detailed functional requirements, as well as rules for module design, will be given in the L1Topo module specifications.

 

The full set of topological algorithms is not yet known: only a small number of algorithms have been simulated, and a VHDL description and implementation exists for parts of a single algorithm only. This makes it difficult to phrase requirements on bandwidth and processor performance, and there are currently no hard figures for the processing latency.

 

The following list represents an attempt to define initial requirements that might be amended in the course of the review procedure.

 

The list combines system-level external requirements with some requirements on implementation details that are considered vital to ensure that the system can be built, commissioned, and operated successfully.

 

The requirements on the L1Topo module are as follows:

 

1.       Compliance with ATCA specifications (see also section 3)

1.1.     Module form factor and mechanics (height, depth, zone-1 and zone-2 connectors)

1.2.     Power supply concept: redundancy of -48V supplies

1.3.     Signals required for ATCA-specific functionality

1.3.1.  IPMB in zone 1, along with power sequencing circuitry

1.3.2.  Base interface in zone 2 for IP access

2.       L1Calo-specific service signals within ATCA zone 2

2.1.     One signal pair for pre-configuration access (currently USB)

2.2.     One clock pair (TTC/GBT/…)

3.       Fibre-optical input from rear transition module for real-time data in ATCA zone 3

3.1.     Five fibre ribbons, 12-72 fibres each (baseline 48-way)

3.2.     Optical blind-mate backplane connectors

4.       Input signal o/e conversion

4.1.     Fourteen high-density 12-channel o/e receivers, suitable for 10 Gb/s operation (for a cross-check of the resulting channel counts, see the sketch after this list)

4.2.     Any required AC-coupling or other signal conditioning

5.       Processors optimised for maximum bandwidth, maximum processing power, minimum latency

5.1.     Dual-FPGA symmetric processing

5.2.     Maximum MGT input bandwidth available on the market

5.3.     Maximum logic resources

5.4.     Low-latency, maximum-bandwidth parallel data path linking the two FPGAs together

6.       For the processor FPGAs, allow for a choice of footprint-compatible devices. Numbers given further down in this document refer to the maximum-size device, though initially smaller devices will be mounted, depending on availability

7.       Real-time fibre-optical output from each FPGA to the CTP (12-channel, suitable for 10 Gb/s)

8.       Some limited amount of low-latency electrical fabric I/O to a mezzanine module (TBD)

9.       Two MGT clock trees to all FPGAs, running at a multiple of the base frequency (160.32 MHz)

10.    Two GCK (fabric) clock trees to all FPGAs, running at the base frequency (40.08 MHz)

11.    Clock trees to be driven from

11.1.    LHC bunch clock (or a multiple thereof) via jitter cleaner

11.1.1.      From TTCdec with optical or electrical input

11.1.2.      From optical input, for future use

11.2. Local crystal clock

12.    One control FPGA for non-real-time use

12.1. Supply with local crystal clock suitable for Ethernet

12.2. Supply with local crystal clock suitable for DAQ/ROI links

12.3. Connect to environmental monitoring circuitry

12.4. Connect to FPGA configuration circuitry

12.5. Connect to module controller via Ethernet (IP)

13.    Provide board-level control connectivity to all FPGAs

14.    Configure FPGAs via SystemACE (legacy)

15.    Configure control FPGA via standalone SPI flash memory

16.    Provide flexible and in-situ programmable configuration storage for processor FPGAs, based on control FPGA and local flash memory

17.    External connectivity for control and other non-RTDP use

17.1. Provide one outgoing optical link each for DAQ and ROI connectivity

17.2. Provide pre-configuration access. On the prototype, this will be a Xilinx-compatible USB/JTAG port.
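
The counts quoted above can be cross-checked against each other, and against Table 1 below, with simple arithmetic. The minimal Python sketch that follows does so using only figures quoted in this document; reading the surplus input fibres as uncommitted spares is an assumption made here for illustration, not a stated requirement.

    # Cross-check of the I/O counts in the requirements list above.
    # All input figures are quoted in this document; interpreting the
    # surplus fibres as uncommitted spares is an assumption.

    ribbons, fibres_per_ribbon = 5, 48   # req. 3.1 (baseline 48-way ribbons)
    receivers, ch_per_receiver = 14, 12  # req. 4.1 (12-channel o/e receivers)
    fpgas, ctp_ch_per_fpga = 2, 12       # reqs. 5.1 and 7

    fibres_in   = ribbons * fibres_per_ribbon   # 240 fibres at zone 3
    oe_channels = receivers * ch_per_receiver   # 168 o/e channels on-board
    ctp_outputs = fpgas * ctp_ch_per_fpga       # 24 links, as in Table 1

    print(f"input fibres       : {fibres_in}")
    print(f"o/e channels       : {oe_channels}")
    print(f"uncommitted fibres : {fibres_in - oe_channels}")
    print(f"CTP output links   : {ctp_outputs}")

Note that Table 1 quotes 160 real-time input links, slightly fewer than the 168 o/e channels mounted; the difference presumably provides headroom.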


I/O               | From/to   | Bandwidth           | Connector / location | Technology
------------------+-----------+---------------------+----------------------+----------------
Real-time input   | various   | 160 * up to 13 Gb/s | opto fibre / MTP 48  | miniPOD, 8b/10b
Real-time output  | CTP       | 24 * up to 13 Gb/s  | opto fibre / MTP     | miniPOD, 8b/10b
Spare electrical  | various   | 48 * up to 1 Gb/s   | TBD, via mezzanine   | LVDS
Control           |           | 1 * GbE             | zone 2               |
IPMB              |           |                     | zone 1               |
Pre-config access |           | USB, 480 Mb/s       | zone 2               |
LHC clock         | TTC       |                     | zone 2, electrical   |
LHC clock         | TTC etc.  |                     | front panel, optical |

Table 1: Interfaces
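
Taking the 8b/10b encoding quoted in Table 1 into account (8 payload bits per 10 line bits, i.e. 80% efficiency), the table's maximum figures translate into the aggregate payload bandwidths computed in the sketch below. This is for orientation only; the per-link rates are the table's "up to" maxima, not guaranteed operating points.

    # Aggregate real-time payload bandwidth implied by Table 1.
    # 8b/10b encoding carries 8 payload bits per 10 line bits (80%).

    EFF_8B10B = 8 / 10

    def payload_gbps(links, line_rate_gbps):
        """Aggregate payload bandwidth in Gb/s for a group of 8b/10b links."""
        return links * line_rate_gbps * EFF_8B10B

    print(f"real-time input : {payload_gbps(160, 13.0):6.1f} Gb/s payload")  # 1664.0
    print(f"real-time output: {payload_gbps(24, 13.0):6.1f} Gb/s payload")   #  249.6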


 

3      Note on ATCA compliance and other issues

The requirements specify some level of compliance with the AdvancedTCA specifications. It should be noted, however, that no decision has been taken regarding full compliance; the choice between a fully custom, non-standard backplane and a standard backplane is still open. The authors of this document do not have access to the full ATCA specifications, so any specific requirements regarding ATCA compliance would have to be included explicitly in the L1Topo module specifications.

 

Note on ATCA compliance:

1.       The PCB thickness is currently assumed to be 2 mm, which might be non-compliant, though the exact specification is not known.

2.       The ATCA specifications are not entirely clear about the presence and use of the base interface. It is understood that an ATCA-compliant backplane does in fact require dual-star wiring for the base interface. However, for an ATCA module to claim compliance, it does not seem to be required to make use of the base interface. Should it prove difficult to define base interface connectivity, front-panel electrical or optical connectivity might be chosen instead.

3.       There have been discussions within L1Calo about the use of a common set of control and service lines in ATCA zone 2 for future L1Calo modules. On the L1Topo processor such connectivity will be implemented if a common interface can be agreed on in time. The backplane signals might comprise TTC, Ethernet for module control, pre-configuration access and possibly environmental monitoring via CANbus.

 

Note on bandwidth and processing power:

4.       The maximum MGT input bandwidth expected to be available on Xilinx devices in time for the 2013/14 upgrade is 80 links per device, at a line rate of up to 13 Gb/s per link. For the prototype, 56-link, 10 Gb/s devices will be mounted. The range of footprint-compatible devices comprises the most powerful devices (in terms of DSP processor slices) expected to be available from Xilinx by the time of module production. The actual resource usage of future algorithms is not currently known, nor is it possible to estimate which type of logic resource (DSP/RAM/combinatorial) will be in highest demand.
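
For orientation, the sketch below compares the raw per-FPGA MGT input bandwidth of the prototype configuration with the footprint-compatible maximum quoted above; both are raw line rates, before the 8b/10b overhead.

    # Raw MGT input bandwidth per processor FPGA: prototype 56-link /
    # 10 Gb/s devices vs. the 80-link / 13 Gb/s footprint-compatible maximum.

    prototype_gbps  = 56 * 10.0   # ->  560 Gb/s raw
    production_gbps = 80 * 13.0   # -> 1040 Gb/s raw

    print(f"prototype : {prototype_gbps:.0f} Gb/s per FPGA")
    print(f"production: {production_gbps:.0f} Gb/s per FPGA")
    print(f"headroom  : x{production_gbps / prototype_gbps:.2f}")  # x1.86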

 

Note on clock issues:

5.       Due to limitations on MGT reference clock accuracy, no attempt will be made to make the real-time data path fit for operation at 40.00 MHz and its multiples. The use of 40.08 MHz reference clocks is required, derived either from the LHC bunch clock (TTC) or from local crystal clocks.
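
For reference, the quoted 40.08 MHz is the usual rounding of the LHC bunch clock; the sketch below relates it to the machine parameters. The revolution frequency (about 11.2455 kHz) and the 3564 bunch slots per turn are standard LHC design values, not taken from this document.

    # Relation of the quoted clock frequencies to the LHC bunch clock.
    # F_REV_HZ and BUNCH_SLOTS are public LHC design values, quoted here
    # for orientation only.

    F_REV_HZ    = 11_245.5   # LHC revolution frequency
    BUNCH_SLOTS = 3564       # 25 ns bunch slots per turn

    bunch_clock_mhz = F_REV_HZ * BUNCH_SLOTS / 1e6   # ~40.0790 MHz
    base_mhz        = 40.08                          # as quoted above
    mgt_ref_mhz     = 4 * base_mhz                   # 160.32 MHz (req. 9)

    print(f"LHC bunch clock  : {bunch_clock_mhz:.4f} MHz")
    print(f"quoted base clock: {base_mhz} MHz")
    print(f"MGT reference    : {mgt_ref_mhz} MHz (4 x base)")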

 

Note on DAQ/ROI issues:

6.       It is currently assumed that for L1Topo it is sufficient to provide one DAQ output and one ROI output. The respective lines would have to be driven with G-Link compatible signals. It remains to be decided whether those signals are driven from the fabric or from MGTs.

 

Note on low latency data paths:

7.       Low-latency parallel input/output might be required, since the latency situation is not yet fully understood. It was decided to route the low-latency paths via a mezzanine module, so as to allow for signal conditioning or conversion. Since the largest fraction of fabric connectivity is needed to link the two processors to one another, low-latency external connectivity will be very limited in aggregate bandwidth.

 

Note on pre-configuration access:

8.       Some kind of backplane access to the modules, available even when the FPGAs are unconfigured and therefore unable to respond to the standard board-control protocol (currently Ethernet), is considered mandatory for any future L1Calo module. This is the equivalent of the basic CPLD-based VME access found on previous L1Calo modules. USB has been chosen for the L1Topo prototype, since it requires no custom software. Additional USB front-panel access will be available as well.