Exploring Visualization and Simulation in the Data Center

For quite some time, we’ve been using visualization to represent information and data in a more understandable format. In media production, visualization lets us look at how media was produced and then make changes to it. Even the first use of GPUs was to improve graphics and visualization for gaming. Long before AI became one of the primary use cases for GPUs, visualization of information was already prevalent in computer science.

Simulation allows us to look at various scenarios by recreating real-world events in a synthetic environment. For example, you can simulate the behavior of fluid dynamics, simulate what a medical or military procedure might look like, or even use simulation in credit card fraud detection to understand how to detect fraudulent transactions.

Artificial Intelligence

Visualization, where we represent information through visual capabilities, and simulation, where we mimic real-world scenarios synthetically, have been converging for some time. The GPU technology originally intended for gaming is well suited, at scale, to meet most of those challenges.

Not only that, but technologies like deep learning and anti-aliasing become more interesting as visualization, simulation, and AI intersect. One example is using simulation for manufacturing optimization: sampling manufacturing defects, simulating what the likely failure scenarios look like, and then adding AI capabilities to proactively detect future failures.

The ability to generate new media and new capabilities means that all these things are coming together. If we can build the right platforms at the right scale points with the right degree of economic value, we can bring the power of GPUs to these different use cases that are colliding.  

The challenge facing companies today 

To be successful, systems need to be built at the right scale, at the right time, and for the right problem – and there’s currently no single system model that fits all. Within the same environment, we want the ability to build different system models that focus on visualization, simulation, data generation, and data analysis.

Because workflows move through various stages, we want to use different models and different accelerators, and we want control over how many GPUs are used and how they’re deployed. So, how do we get the right building blocks and assemble them correctly for workflows that bring together different disciplines at the same time?

We also need to look at the type of GPU. There are commercial-grade GPUs used for gaming and graphics in small environments, and GPUs used in large data center deployments that are constructed very differently in terms of power, cooling, and the environment they’re designed to work within.

For example, you could have a small deployment for someone doing the color grading of a feature coming out of a high-end studio. What type of device do they need? If they want to bring in AI, do they have to move their entire fleet to a higher grade of GPU? Do the economics make sense for them to do that? Or can we provide the right commercial model so they can use the right GPUs for different parts of their workflow? 

The answer is yes – you don’t have to throw everything out to start doing this. You can incrementally start putting these things together without having to upgrade all your GPUs.  

Media rendering use case

If you look at visualization and simulation, they’re not always based in large data centers. They’re often in smaller environments like a studio. Depending on what you’re doing, you might want to use commercial-grade and higher-end GPUs. So, how do we do this at scale – both in a data center and in a much smaller environment?

Let’s look at one use case where the customer is doing media rendering. Imagine you’ve got 20 artists all doing color grading of different parts of the same feature or doing different features. They won’t all hit the render button at the same time – we know that.  

But if you knew the optimal set of resources, you could adjust based on what each artist is working on. The artist doing the titles or the teaser doesn’t need the same resources as someone working on a big portion of the main feature. You could give them one GPU for rendering, because it’s okay if that job takes all night – the big portion of the feature is what will hold up the overall workflow. Until now, you couldn’t make those choices, because everyone got the same number of GPUs.
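To make that concrete, here is a minimal sketch of the kind of allocation described above: a shared pool of GPUs divided among rendering jobs in proportion to how much of the feature each one covers, with small jobs such as the titles getting a single GPU. The job names, weights, and the allocation heuristic are illustrative assumptions, not a real render-farm scheduler.

```python
# Hypothetical sketch: divide a shared GPU pool among artists' render jobs
# based on how much of the feature each one is responsible for.

def allocate_gpus(jobs, pool_size):
    """Give each job a GPU count roughly proportional to its share of the
    work, while guaranteeing at least one GPU per job (naive heuristic)."""
    total_weight = sum(weight for _, weight in jobs)
    allocation = {}
    remaining = pool_size
    for name, weight in sorted(jobs, key=lambda j: j[1]):
        share = max(1, round(pool_size * weight / total_weight))
        # Never hand out so many that later jobs can't get at least one GPU.
        share = min(share, remaining - (len(jobs) - len(allocation) - 1))
        allocation[name] = share
        remaining -= share
    return allocation

# A few representative jobs out of the 20 artists (weights are made up).
jobs = [
    ("titles", 0.05),             # small job: fine if it renders overnight on 1 GPU
    ("teaser", 0.10),
    ("main_feature_reel", 0.85),  # this is what holds up the overall workflow
]
print(allocate_gpus(jobs, pool_size=16))
# -> {'titles': 1, 'teaser': 2, 'main_feature_reel': 13}
```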

With composability, you have the right scale point, and resources are available in real time – even when workflows nested inside other workflows need to push a lot of information to high-definition screens.

When re-encoding and batch rendering data for subsequent workflows or collaborating organizations, we’re left with a mix of real-time requirements and overnight batch rendering jobs – and the faster we can finish those batch jobs, the sooner that equipment is freed up for other work.
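A tiny sketch of how that split might be handled: hold back a few GPUs for real-time, interactive sessions and run the overnight batch jobs shortest-first so equipment is freed up sooner. The pool size, reserve, and job estimates below are made-up numbers for illustration, and the runtime math assumes naive, perfectly parallel scaling.

```python
# Hypothetical sketch: reserve GPUs for interactive work, then queue the
# overnight batch renders shortest-job-first to free equipment earlier.

POOL = 16
REALTIME_RESERVE = 4  # GPUs held back for interactive grading sessions

batch_jobs = [
    ("re_encode_for_partner", 2.0),  # estimated GPU-hours (illustrative)
    ("batch_render_reel_3", 8.0),
    ("proxy_renders", 1.0),
]

batch_gpus = POOL - REALTIME_RESERVE
for name, gpu_hours in sorted(batch_jobs, key=lambda j: j[1]):
    # Assumes the job scales perfectly across all batch GPUs (simplification).
    print(f"queue {name}: ~{gpu_hours / batch_gpus:.2f} h on {batch_gpus} GPUs")
```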

With Cerio, you can decide how many resources are needed to optimize your workflow at any scale. You get real-time assembly based on workloads, with software that exposes profiles of the different devices it discovers, so you can decide to use these GPUs for this part of the workflow and those GPUs for that part. Through composition, we make it possible to apply the right GPUs to the right workflow in real time.
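As a rough illustration of what profile-based composition could look like from a user’s point of view: discovered devices carry a profile, each workflow stage asks for a number of devices with a given profile, and a small matching step produces the assignment. The profile names, stages, and matching logic here are assumptions for illustration only – they are not Cerio’s actual software interface.

```python
# Hypothetical sketch of matching discovered, profiled devices to the stages
# of a workflow. All names and the matching logic are illustrative.

from collections import defaultdict

discovered = [
    {"id": "gpu-0", "profile": "commercial-grade"},
    {"id": "gpu-1", "profile": "commercial-grade"},
    {"id": "gpu-2", "profile": "datacenter-grade"},
    {"id": "gpu-3", "profile": "datacenter-grade"},
    {"id": "gpu-4", "profile": "datacenter-grade"},
]

workflow = [
    {"stage": "interactive_grading", "profile": "commercial-grade", "count": 2},
    {"stage": "overnight_batch_render", "profile": "datacenter-grade", "count": 3},
]

def compose(devices, stages):
    """Attach the requested number of matching devices to each stage."""
    by_profile = defaultdict(list)
    for dev in devices:
        by_profile[dev["profile"]].append(dev["id"])
    plan = {}
    for stage in stages:
        pool = by_profile[stage["profile"]]
        if len(pool) < stage["count"]:
            raise RuntimeError(f"not enough {stage['profile']} devices for {stage['stage']}")
        plan[stage["stage"]] = [pool.pop() for _ in range(stage["count"])]
    return plan

print(compose(discovered, workflow))
```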

Discover the future of data center transformation.

Learn more about the technology behind Cerio’s distributed fabric architecture.
Read the tech primer