How to Leverage AI Workflows with Enhanced Agility and Control
22 July 2024
We’re all aware of the boom in AI adoption across a wide range of industries, and the shift is only just beginning. Figures from Statista show that the AI market is expected to grow twentyfold by 2030, reaching an estimated $2 trillion.
AI workflows are transforming operations across industries, from finance and defence to manufacturing and media, and everything in between. The benefits in efficiency and accuracy are evident, but one consequence of this growth is a surge in compute demand.
Goldman Sachs calculates that AI will drive a 160% increase in data centre power demand by 2030, and Microsoft cites the availability of GPUs and other components as a risk factor as demand increases. One study shows that the computational power required to sustain AI’s rise is doubling roughly every 100 days, and experts predict that a tenfold improvement in AI model efficiency will demand a 10,000-fold increase in computational power. The figures are staggering.
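To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation (ours, not from the cited study) of what compounding every 100 days implies over a single year:

```python
# Back-of-the-envelope: if compute demand doubles every 100 days,
# how much does it grow in one year?
doubling_period_days = 100
days_per_year = 365

growth_per_year = 2 ** (days_per_year / doubling_period_days)
print(f"Annual growth factor: {growth_per_year:.1f}x")  # ~12.6x
```

In other words, demand compounding at that rate grows by more than an order of magnitude every year.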
The challenge now is how to leverage AI workflows through composable GPU acceleration without compromising agility, control or security. Compact and efficient system designs typically lack GPU density, but with greater GPU demand comes a greater focus on composable architecture. That’s where our CoreModule XL with Liqid SmartStack MX provides a solution, enabling dynamically configured servers and improved resource utilisation to address rapidly evolving workloads.
Delivering benefits to unlock AI workflow potential
Bringing the power of composable GPU resources to the Dell PowerEdge MX7000 modular chassis, CoreModule XL is ideally suited to a wide range of AI training and inference workloads, machine learning and engineering simulation.
Its power lies in its agility, allowing up to 30 x 350W GPUs to be connected to the chassis and assigned to any compute sled on demand. The large GPU memory capacity means it can run modern AI models, composing resources on demand for each individual task, so GPU resource is deployed where and when it is needed, maximising utilisation as workloads evolve.
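As an illustration of what "composing" GPUs to a sled looks like in practice, here is a minimal sketch of driving a fabric manager over REST. The endpoint, host name and payload fields are hypothetical placeholders for illustration only, not the actual Liqid SmartStack MX API:

```python
import requests

# Hypothetical fabric-manager endpoint -- illustrative only,
# not the real Liqid SmartStack MX API.
FABRIC_API = "https://fabric-manager.example.com/api"

def compose_gpus(sled_id: str, gpu_count: int) -> None:
    """Request that `gpu_count` pooled GPUs be attached to a compute sled."""
    resp = requests.post(
        f"{FABRIC_API}/compose",
        json={"target": sled_id, "resource": "gpu", "count": gpu_count},
        timeout=30,
    )
    resp.raise_for_status()

def release_gpus(sled_id: str) -> None:
    """Return the sled's GPUs to the shared pool when the job finishes."""
    resp = requests.post(
        f"{FABRIC_API}/release", json={"target": sled_id}, timeout=30
    )
    resp.raise_for_status()

# Example: borrow four GPUs for a training run, then hand them back.
compose_gpus("sled-03", gpu_count=4)
try:
    pass  # launch the training job here
finally:
    release_gpus("sled-03")
```

The point of the pattern is that GPUs follow the workload rather than being fixed to a server, which is what drives the utilisation gains described above.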
Critically, by dynamically connecting and scaling enterprise-grade GPUs, users can run workloads that were previously out of reach for modular systems. In fact, by combining multiple GPUs in a linked configuration, up to 192GB of unified memory is available for training and inference on large AI models, whether for higher accuracy or to get a project running with less time spent on model optimisation.
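Once GPUs are composed to a sled, they appear to software as ordinary local devices, so standard frameworks can pool their memory. A minimal sketch (assuming PyTorch with CUDA support is installed on the sled) that tallies the memory visible across the composed devices:

```python
import torch

# After composition, the attached GPUs show up as normal CUDA devices.
total_bytes = 0
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_bytes += props.total_memory
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

# With enough composed GPUs this pool can reach the 192GB figure above,
# which model-parallel or sharded loading can then treat as one budget.
print(f"Combined GPU memory: {total_bytes / 2**30:.0f} GiB")
```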
Other benefits include:
- More than eight accelerators per server with mixed deployments
- Decouple purchasing decisions – uses standard off-the-shelf GPUs and components from NVIDIA, Intel and AMD
- Scalable solution, so you can pay as you grow
- Deploy on-premises or at the edge
Want to know more?
Discover the potential of CoreModule XL with Liqid SmartStack MX to unlock the power of AI workflows with greater agility and control. DM us and one of our team will be in touch.
Alternatively, you can contact our global offices here.