
Reduce costs by 75% with fractional GPUs for Deep Learning Inference

In this post, we'll look at how fractionalizing a GPU for deep learning inference workloads with lower computational needs can cut the cost of deep learning by 50-75%. This article is from our partner Run:AI's blog.
Our partner Run:AI makes it possible to run multiple Deep Learning inference tasks on the same GPU at the same time, in effect fractionalizing it. The result is better, more effective use of resources, higher throughput thanks to the simultaneous execution of different tasks by multiple data scientists, and significant cost savings.
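As a rough illustration of the idea (this is not Run:AI's actual mechanism, which enforces fractions at the cluster-scheduler level), the PyTorch sketch below caps the GPU memory available to a single inference process, so that several lightweight inference services can be packed onto one device. The function name and fraction value are hypothetical:

```python
import torch

def load_inference_model(weights_path: str, memory_fraction: float):
    """Load a model for inference while capping this process's GPU memory share.

    memory_fraction is the slice of device 0's memory this process may use,
    e.g. 0.25 to pack four lightweight inference services onto one GPU.
    (Illustrative only; Run:AI manages fractions outside the process.)
    """
    device = torch.device("cuda:0")
    # Limit this process's caching allocator to a fraction of the GPU memory.
    torch.cuda.set_per_process_memory_fraction(memory_fraction, device=device)
    model = torch.jit.load(weights_path, map_location=device)
    model.eval()  # inference mode: disables dropout and batch-norm updates
    return model

@torch.no_grad()  # inference needs no gradients, saving memory and compute
def predict(model, batch: torch.Tensor) -> torch.Tensor:
    return model(batch.to("cuda:0"))
```

With each process bounded this way, four such services could share a single GPU instead of each idling on a dedicated one, which is the source of the cost savings described above.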
How does Deep Learning inference differ from training?
Before we get into the value of fractionalizing GPUs, it's important to explain that at each stage of the deep learning process, data scientists complete different tasks that relate to how they interact with neural networks and GPUs. The process can be divided into four phases: data preparation, build, train and inference.
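To make the difference concrete, the generic PyTorch sketch below (with a hypothetical toy model) contrasts a training step, which keeps gradients and optimizer state on the GPU, with an inference step, which needs only the forward pass; this is why inference typically demands far less GPU memory and compute:

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 10).cuda()  # hypothetical toy model
inputs = torch.randn(32, 512, device="cuda")
labels = torch.randint(0, 10, (32,), device="cuda")

# Training step: forward + backward + optimizer state, all GPU-resident.
optimizer = torch.optim.Adam(model.parameters())
loss = nn.functional.cross_entropy(model(inputs), labels)
loss.backward()   # gradients roughly double the memory footprint
optimizer.step()  # Adam also keeps two extra tensors per parameter

# Inference step: forward pass only, no gradients or optimizer state.
model.eval()
with torch.no_grad():
    predictions = model(inputs).argmax(dim=1)
```

Because the inference step leaves most of the GPU idle, it is the natural candidate for sharing a single device among several tasks.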
Read more here or contact us