ICE4HPC: Interactive Computing Environment for HPC clusters
Today’s supercomputing challenges
The command line is how users typically interact with HPC systems.
Batch jobs offer no interaction during execution, so models and simulations can only be evaluated once a job has finished.
Prototyping a new model therefore means working without real-time feedback or visibility, which can waste significant time.
Emerging user requirements
Meeting users’ varied computational requirements means submitting multiple jobs to the queues, yet traditional HPC workload managers do not support elastic scaling.
Scientists and researchers rely on a wide range of tools, applications, and languages, and the resulting user experience on HPC systems is typically poor, cumbersome, and inefficient.
ICE4HPC provides web access to computing resources through notebook workspaces configured for the HPC environment, giving users a fully integrated GUI.
Using Jupyter technologies, users can create interactive notebooks to explore data, run simulations, and visualize results in real time.
ICE4HPC supports the Python, R, and Julia programming languages and cutting-edge frameworks for AI and Big Data Analytics: TensorFlow, PyTorch, MXNet, RAPIDS, and Dask.
ICE4HPC provisions interactive computing resources exclusively through the Slurm workload manager.
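For example, a typical notebook session might explore simulation output interactively; the snippet below is a generic illustration of this workflow, not ICE4HPC-specific code:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy "simulation": a damped oscillation sampled over time
t = np.linspace(0, 10, 500)
signal = np.exp(-0.3 * t) * np.sin(2 * np.pi * t)

# In a notebook the plot renders inline, so parameters can be
# tweaked and the cell re-run for immediate visual feedback
plt.plot(t, signal)
plt.xlabel("time [s]")
plt.ylabel("amplitude")
plt.title("Damped oscillation")
plt.show()
```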
Discover the advantages
ICE4HPC in a nutshell
- Interactive, web browser-based computing environment
- Reproducible document format (Code | Prose | Equations (LaTeX) | Visualizations …)
- Multi-user support
- Pure web access to HPC resources
- Manages authentication (including 2FA)
- Spawns single-user servers on demand (see the configuration sketch after this list)
- Each user gets a complete notebook server
- Support for compute nodes with low or high memory
- Support for compute nodes with GPUs
- Fully integrated with Slurm
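As a concrete illustration of the spawning model, here is a minimal sketch of how a JupyterHub front end is commonly wired to Slurm using the open-source batchspawner package; the partition name and resource values are illustrative assumptions, not ICE4HPC defaults:

```python
# jupyterhub_config.py -- minimal sketch using the batchspawner package;
# all values below are illustrative assumptions, not ICE4HPC defaults
c = get_config()  # noqa: F821  (injected by JupyterHub at load time)

# Spawn each user's notebook server as a Slurm job
c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"

# Target a dedicated partition for interactive sessions
c.SlurmSpawner.req_partition = "interactive"  # hypothetical partition name
c.SlurmSpawner.req_nprocs = "4"
c.SlurmSpawner.req_memory = "8G"
c.SlurmSpawner.req_runtime = "08:00:00"

# Give the scheduler extra time to place the job before giving up
c.SlurmSpawner.start_timeout = 300
```

Each notebook server then runs under the user's own account, with the resources Slurm granted it.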
ICE4HPC is built on a multi-tier architecture
The ICE4HPC front end is hosted on the HPC cluster login node or on a dedicated system (physical or virtual) with a network configuration similar to the login node's.
Interactive user sessions are hosted on back-end nodes, usually in a dedicated Slurm partition.
From the back-end nodes, ordinary jobs can be submitted to the other compute nodes.
The Slurm configuration and the compute nodes constitute the back end of ICE4HPC.
How it works
Integrating an interactive computing environment with an HPC cluster typically involves extensions that let users log in, select resources, and launch HPC workflows directly from their Jupyter notebooks.
These workflows are submitted to the Slurm scheduler in the background. This setup greatly simplifies initiating and managing HPC workflows, allowing users to concentrate on their data and simulations rather than the underlying infrastructure.
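For instance, a notebook cell might hand a workflow to Slurm along these lines (a minimal sketch; the batch script name is hypothetical):

```python
import subprocess

# Hypothetical batch script prepared by the user or the environment
script = "run_simulation.sbatch"

# Hand the workflow to Slurm; sbatch returns immediately with the job ID,
# so the notebook stays responsive while the job runs in the background
result = subprocess.run(
    ["sbatch", "--parsable", script],
    capture_output=True, text=True, check=True,
)
job_id = result.stdout.strip()
print(f"Submitted Slurm job {job_id}")

# Check on the job from the same notebook; squeue shows nothing once it ends
status = subprocess.run(
    ["squeue", "--job", job_id, "--noheader"],
    capture_output=True, text=True,
)
print(status.stdout.strip() or "job no longer in the queue")
```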
ICE4HPC supports the Python, R, and Julia programming languages and many cutting-edge frameworks for Artificial Intelligence and Big Data Analytics: TensorFlow, PyTorch, MXNet, RAPIDS, and Dask.
Combining these frameworks with interactive access to HPC resources makes research work more efficient and productive, helping to accelerate scientific discoveries.
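As one example of these frameworks in this setting, Dask can scale a computation out to Slurm-managed workers directly from a notebook. The sketch below uses the separate dask-jobqueue package; the partition name and resource sizes are assumptions, not ICE4HPC defaults:

```python
from dask.distributed import Client
from dask_jobqueue import SLURMCluster
import dask.array as da

# Each Dask worker runs as its own Slurm job (all values are illustrative)
cluster = SLURMCluster(
    queue="interactive",   # hypothetical partition name
    cores=4,
    memory="8GB",
    walltime="01:00:00",
)
cluster.scale(jobs=4)      # ask Slurm for 4 worker jobs
client = Client(cluster)

# A computation larger than one node's memory, spread across the workers
x = da.random.random((40_000, 40_000), chunks=(4_000, 4_000))
print(x.mean().compute())
```

Because each worker is an ordinary Slurm job, the pool can grow or shrink on demand, which is one way to work around the elastic-scaling gap noted earlier.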
ICE4HPC – FULL VERSION
- Provides the end user with an interface for selecting the computing resources to allocate
- Supports GPU computing
- Includes support for the leading frameworks for AI and Big Data Analytics
- Includes a graphical interface from which to launch standard Slurm jobs
ICE4HPC – LITE VERSION
Provides a selected range of features:
- The end user can select the resources to be allocated only from predefined profiles
- Does not include frameworks for AI and Big Data Analytics
- Does not support GPU computing
- Limited to an interactive environment designed for data analysis and scientific visualization
INTERACTIVE COMPUTING ENVIRONMENT FOR HPC