
Run:AI Compute Management Platform Now Available to E4 Customers
Move AI models to production 2x faster
Run:AI has built a Compute Management Platform for AI computing that plugs into Kubernetes to help researchers orchestrate GPU jobs more efficiently. Greater efficiency means faster modeling: one Run:AI customer recently executed 6,700 parallel hyperparameter-tuning jobs and completed modeling in record time.
How Does Run:AI Work?
Run:AI automates the orchestration of AI workloads and the management of hardware resources across teams and clusters. It pools compute and applies dynamic allocation mechanisms to increase the resources available at any given time. With pre-set scheduling and prioritization policies, researchers get access to as many GPUs as their jobs need and reach target model accuracy faster.
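To make the pooling idea concrete, here is a minimal sketch of how a training job might be handed to a pooled scheduler on Kubernetes, using the official Python kubernetes client. The scheduler name `runai-scheduler` and the `project` label are illustrative assumptions, not confirmed Run:AI identifiers; consult the Run:AI documentation for the actual interface.

```python
# Minimal sketch: submit a GPU training job to a Kubernetes cluster and let
# a pooled scheduler (assumed here to be named "runai-scheduler") decide
# placement. The scheduler name and "project" label are assumptions made
# for illustration, not guaranteed Run:AI API.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="train-resnet",
        labels={"project": "team-vision"},  # assumed label for per-team quota policies
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # delegate placement to the pooled scheduler (assumed name)
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:23.10-py3",
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "2"}  # request two whole GPUs from the shared pool
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="research", body=pod)
```

Delegating placement to a dedicated scheduler in this way is what allows idle GPUs from one team to be borrowed by another without manual reassignment.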
Run:AI’s Compute Management Platform for GPU-based infrastructure running AI/ML workloads provides:
- Fair scheduling that lets users easily and automatically share GPU clusters
- Simplified distributed training across GPU clusters
- Fractional GPU sharing to seamlessly run multiple workloads on a single GPU (see the sketch after this list)
- Visibility into workloads and resource utilization to improve user productivity
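To illustrate the fractional-GPU idea from the list above, here is a hedged sketch in the same style: each pod asks for a slice of one GPU via an annotation, so two inference services can share a single card. The annotation key `gpu-fraction` and the scheduler name are assumptions made for illustration and may differ from Run:AI's actual interface.

```python
# Sketch of fractional-GPU sharing: instead of a whole-GPU limit, each pod
# carries an annotation asking the scheduler for a slice of one GPU, letting
# multiple inference workloads share a single card. The annotation key
# "gpu-fraction" is an illustrative assumption, not a confirmed identifier.
from kubernetes import client, config

config.load_kube_config()

def fractional_gpu_pod(name: str, fraction: float) -> client.V1Pod:
    """Build a pod that asks the (assumed) fractional-GPU scheduler for
    the given fraction of a single GPU's memory and compute."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=name,
            annotations={"gpu-fraction": str(fraction)},  # assumed annotation key
        ),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # assumed scheduler name, as in the sketch above
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image="nvcr.io/nvidia/tritonserver:23.10-py3",
                )
            ],
        ),
    )

api = client.CoreV1Api()
# Two inference workloads sharing one physical GPU, half each.
for svc in ("bert-qa", "resnet-classify"):
    api.create_namespaced_pod(namespace="research", body=fractional_gpu_pod(svc, 0.5))
```

For inference workloads that rarely saturate a full GPU, this kind of sharing is where the largest utilization gains tend to come from.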
Do you want to know more about Run:AI?
Read this article about Run:AI:
Reduce cost by 75% with fractional GPU for Deep Learning Inference