Cerio Platform

Experience the Agility of Scale

Introducing the Cerio platform for composable infrastructure, designed for the AI and Cloud era.

Explore the Platform

The Cerio open systems platform provides the foundation for composable infrastructure systems designed for elastic scale and optimized for AI acceleration.

AI Optimization

The overlay services layer of the Cerio platform directs traffic through a hierarchy of virtual network paths. Administrators can define and manage traffic flows in software without being constrained by the underlying physical infrastructure. 

Fabric Management: Overlay services and device chassis are managed in software using a standards-based composability data model (see the sketch below). 

Transport Adaptation: Native PCIe transport is decoupled from the overlay services and the underlay fabric to deliver robust systems at scale.
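
To make the Fabric Management point concrete, here is a minimal sketch of what software-defined composition against a standards-based, Redfish-style composability data model could look like. The endpoint URL, resource paths, credentials and payload fields are assumptions for illustration only, not the documented Cerio API.

# Illustration only: bind a GPU resource block to a host through a
# hypothetical Redfish-style composition endpoint. URL, paths, credentials
# and fields are placeholders, not the documented Cerio API.
import requests

FABRIC_MANAGER = "https://fabric-manager.example.com"  # placeholder address

session = requests.Session()
session.auth = ("admin", "change-me")  # placeholder credentials

# Discover resource blocks (e.g. GPUs) advertised by the fabric manager.
blocks = session.get(
    f"{FABRIC_MANAGER}/redfish/v1/CompositionService/ResourceBlocks"
).json()
first_block = blocks["Members"][0]["@odata.id"]  # pick one free GPU block

# Attach the block to a target host entirely in software; no re-cabling
# of the underlying physical fabric is required.
payload = {"Links": {"ResourceBlocks": [{"@odata.id": first_block}]}}
resp = session.patch(f"{FABRIC_MANAGER}/redfish/v1/Systems/host-01", json=payload)
resp.raise_for_status()
print("Composition request accepted:", resp.status_code)

Because the data model is standards-based, the same software-defined pattern would apply to any PCIe device the fabric exposes, not only GPUs.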

Multipath Transport

Hardened in high-performance computing, the Cerio underlay fabric provides automatic discovery and connectivity for composed systems. The underlay fabric uses multipathing to balance data flows across a high-radix set of optimal paths that share no common links, minimizing contention and congestion.

Fabric Node: Self-discovering, self-configuring and self-healing, the Fabric Node is installed in each server and chassis, providing fully distributed operation and direct connections for highly efficient data transport.  

Zero Power, Elastic Scale

The Fabric SHFL is an optical direct interconnect that implements highly scalable network topologies in composed systems. By making simple, predefined connections between Fabric Nodes in the host and chassis, the SHFL delivers the power of high-performance computing right out of the box. 

Pre-wired Topologies: Using a single optical cable to connect each Fabric Node to an optical port, the SHFL implements pre-wired topologies without any manual configuration.  

Passive Cabling: Entirely passive, the SHFL requires zero power or cooling.  

Calibrate your capacity

Create dense GPU clusters by connecting up to 64 GPUs to a single host, then scale up, down or out to fit your GPU infrastructure. 

Save time and footprint

Build systems on the fly and compose in seconds, while reducing power, cooling, rack space and cabling.  

Achieve 24/7 uptime

Hot-swap failed hardware in real time using software, without time-consuming manual intervention or any impact on service or production availability. 

Accelerate application performance

Maximize per-application data flows to optimize the performance of your infrastructure for AI, machine learning and deep learning.  

Eliminate vendor lock-in

Use any hardware device from any vendor or generation with full hardware heterogeneity. 

ONLINE CALCULATOR

AI Capacity Planning

AI is driving demand for GPUs and specialized accelerators to handle the complex processing requirements of new applications and services. Modeling the available GPU capacity in your data center – and what you can actually access – is critical for planning the cost-effective scale of your AI infrastructure.

Assess your capacity by answering five questions:

1. How many servers do you have deployed today, and with how many GPUs each (0, 1, 2, 4 or 8)?
2. What percentage of time is each server type in use during a week (GPU usage)?
3. How many GPUs per server would be preferred for your use case?
4. Do you have any applications that could use more GPUs than you currently have deployed?
5. How often do you currently add new GPU servers?
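
As a back-of-the-envelope companion to these questions, the sketch below models installed versus effectively used GPU capacity from server counts and weekly utilization. All figures are illustrative placeholders, not benchmark data.

# Rough capacity model: installed GPUs vs. GPUs effectively in use,
# built from the kind of inputs the questions above ask for.
# All counts and utilization figures are illustrative placeholders.
fleet = {
    # GPUs per server: (number of servers, average weekly utilization)
    1: (20, 0.30),
    2: (10, 0.45),
    4: (8, 0.60),
    8: (4, 0.75),
}

installed = sum(gpus * count for gpus, (count, _) in fleet.items())
in_use = sum(gpus * count * util for gpus, (count, util) in fleet.items())

print(f"Installed GPUs:                 {installed}")
print(f"Effectively used (weekly avg):  {in_use:.1f}")
print(f"Stranded capacity (weekly avg): {installed - in_use:.1f}")

The gap between installed and effectively used GPUs is the stranded capacity that this kind of planning is meant to surface.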
White Paper

Optimizing Open Infrastructure for AI

Distributed by Design. Calibrated for Composable Infrastructure.

The PCIe scale barrier is placing limits on traditional and composable infrastructure, adding cost and complexity. PCIe decoupling technology in the Cerio open systems platform removes the single PCIe domain limit to enable elastic scale and agility for highly optimized and cost-effective AI infrastructure.

Ready to get started?

Book a 1:1 consultation with a Cerio composable systems specialist today.
Request a Demo