
The collaboration between E4 and CERN openlab for the development of information technologies in High Energy Physics

Is it possible to adapt computing models and software to exploit the full potential of GPUs?
In this article we talk about E4’s work within the European Organization for Nuclear Research (CERN) [1], where E4 has been a supplier of high-performance computing and storage systems for many years.
Since 2018, E4 has been part of CERN openlab, a public-private partnership through which CERN collaborates with leading ICT companies and other research institutions on R&D.
As a “contributor”, E4 has worked over these years with CERN researchers on joint projects to advance high-performance computing technologies in the field of high-energy physics. In particular, E4 aims to meet the computing needs arising from the experiments at the Large Hadron Collider (LHC) [2] and its high-luminosity upgrade, HL-LHC, through the use of GPU-accelerated applications.
Researchers and engineers have been working on ten use cases, grouped in 2019 into seven main areas:
1. Simulating sparse data sets for detector geometries
2. “Patatrack” software R&D incubator
3. Benchmarking and optimisation of TMVA deep learning
4. Distributed Training
5. Integration of SixTrackLib and PyHEADTAIL
6. Particle reconstruction with machine-learning techniques
7. Allen: a high level trigger on GPUs for LHCb
In this article we will start with the last one, Allen, and explore the others in future posts on our blog.
“Allen” [3] is an initiative to develop a complete high-level trigger (the first step of the data-filtering process after particle collisions) based on GPUs for the LHCb experiment [4]. The project has benefited both from the support of CERN openlab and from the work of engineers and physicists at E4 and NVIDIA.
The new trigger system processes 40 Tb/s of data using approximately 340 latest-generation NVIDIA GPU cards. From a physics point of view, Allen reconstructs particles with the same performance obtained with traditional CPUs. Moreover, Allen is not limited to reconstruction: it also decides whether each event should be accepted or rejected.
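As a rough, back-of-the-envelope check based only on the figures quoted above (the exact deployment numbers may differ), spreading 40 Tb/s across roughly 340 cards gives the input bandwidth each GPU has to absorb:

```cpp
#include <cstdio>

int main() {
    // Figures quoted in the article; the actual deployment may differ slightly.
    constexpr double input_rate_tbit_s = 40.0;  // total input to the trigger, in Tbit/s
    constexpr int    num_gpus          = 340;   // approximate number of GPU cards

    // Share of the input bandwidth that each card has to sustain, in Gbit/s.
    constexpr double per_gpu_gbit_s = input_rate_tbit_s * 1000.0 / num_gpus;

    std::printf("per-GPU input bandwidth: ~%.0f Gbit/s\n", per_gpu_gbit_s);
    return 0;
}
```

That works out to roughly 120 Gbit/s per card, on the order of the bandwidth of a modern high-speed network interface, which fits naturally with the second point in the list below.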
Using GPUs for the LHCb experiment has the following advantages, illustrated in Figure 1:
- The current servers are already set up to host GPUs, so the cards can be added at no additional infrastructure cost.
- In the use cases considered, sending data to the GPU is functionally similar to sending data to the network card of the Event Builder (EB) server.
- The purpose of the first high-level trigger stage is to reduce the data rate by a factor between 30 and 60; if this whole stage runs on GPUs, the reduction already happens inside the EB servers. This considerably simplifies the network connecting the EB servers to the computing farm and, as a consequence, significantly lowers the cost of the overall system.
- High-level trigger algorithms are highly parallel and can take full advantage of the TFLOPS available on current GPUs; a minimal sketch of this pattern is shown below.
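To make the parallelism point concrete, here is a minimal, hypothetical CUDA sketch of the pattern such triggers exploit: one thread block per collision event, with the threads of the block working in parallel on that event's hits. The data layout, kernel and names below are illustrative assumptions, not code taken from Allen.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical flat layout: the hits of all events are stored contiguously,
// with per-event offsets (a common data layout for GPU trigger code).
__global__ void count_hits_above_threshold(const float* hit_energy,
                                           const int*   event_offsets,
                                           int*         n_selected,
                                           float        threshold)
{
    const int event = blockIdx.x;               // one thread block per collision event
    const int begin = event_offsets[event];
    const int end   = event_offsets[event + 1];

    int local = 0;
    // The threads of the block stride over the hits of this event.
    for (int i = begin + threadIdx.x; i < end; i += blockDim.x)
        if (hit_energy[i] > threshold) ++local;

    // Accumulate the per-thread counts into the per-event result.
    atomicAdd(&n_selected[event], local);
}

int main()
{
    // Toy input: 2 events with 4 hits each.
    const float h_energy[8]  = {0.2f, 1.5f, 0.7f, 2.1f, 0.1f, 0.3f, 3.0f, 0.9f};
    const int   h_offsets[3] = {0, 4, 8};

    float* d_energy;  int* d_offsets;  int* d_count;
    cudaMalloc(&d_energy, sizeof(h_energy));
    cudaMalloc(&d_offsets, sizeof(h_offsets));
    cudaMalloc(&d_count, 2 * sizeof(int));
    cudaMemcpy(d_energy, h_energy, sizeof(h_energy), cudaMemcpyHostToDevice);
    cudaMemcpy(d_offsets, h_offsets, sizeof(h_offsets), cudaMemcpyHostToDevice);
    cudaMemset(d_count, 0, 2 * sizeof(int));

    count_hits_above_threshold<<<2, 64>>>(d_energy, d_offsets, d_count, 1.0f);

    int h_count[2];
    cudaMemcpy(h_count, d_count, sizeof(h_count), cudaMemcpyDeviceToHost);
    std::printf("selected hits per event: %d, %d\n", h_count[0], h_count[1]);
    return 0;
}
```

Because every event can be processed independently, thousands of events can be in flight on a single card at once, which is what lets the GPU's arithmetic throughput be used effectively.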
A wide range of algorithms has been efficiently implemented within the Allen framework. The applications are written in C++ with CUDA extensions; when the same code is compiled for the x86 architecture, GPU and CPU results differ only at the permille level, which is perfectly acceptable for the needs of the experiment.
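The article does not describe how a single C++/CUDA source can produce both GPU and CPU results; the sketch below shows one common way to do it with a small macro-based compatibility layer, so that the same function compiles with nvcc for the GPU and with an ordinary x86 compiler for validation. This is only an illustration of the idea, with made-up names; Allen's actual portability layer is more elaborate.

```cpp
// Illustrative single-source pattern; not Allen's real compatibility layer.
#include <math.h>

#ifdef __CUDACC__
  // Compiled with nvcc: the function is usable both on the host and in kernels.
  #define PORTABLE __host__ __device__
#else
  // Compiled with a plain x86 compiler: it is ordinary C++.
  #define PORTABLE
#endif

// The same physics code is built for both targets, so GPU and CPU outputs
// can be compared directly on identical algorithms.
PORTABLE inline float transverse_momentum(float px, float py) {
    return sqrtf(px * px + py * py);
}
```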

Figure 1: LHCb high-level trigger architecture for LHC Run 3
All of this demonstrates the potential of GPUs as “accelerators” for high-rate data processing in high-energy physics. Allen is the first high-throughput trigger system to be implemented entirely on GPUs, with each GPU card reconstructing and making decisions on more than 100,000 LHC collisions per second. Because Allen has been designed from the outset as an experiment-independent processing framework, it could in the future be applied to completely different fields that require fast selection over large volumes of events.
Allen will be used by the LHCb experiment as the new trigger architecture for the next LHC data-taking period (Run 3), starting in 2021 [5].
Links:
[2] https://home.cern/science/accelerators/large-hadron-collider
[3] http://cds.cern.ch/record/2717938/files/LHCB-TDR-021.pdf