

The Future of Engineering Computation

GPU acceleration continues to revolutionize desktop workstation capabilities. 

Image courtesy of Ansys and NVIDIA.


Engineering workstations continue to gain computing power, with the latest GPUs from NVIDIA – notably, the NVIDIA RTX™ 6000 Ada Generation graphics card – providing a tremendous boost for simulation workloads.

In recent presentations, MingYao Ding, VP of engineering and principal at Sunnyvale, CA-based Ozen Engineering, outlined how GPU acceleration is improving simulation workflows.

Ozen provides engineering simulation software and training, and serves as an Ansys channel partner. In October, Ding presented a session at the virtual Digital Engineering Design & Simulation Summit titled “The Future of Engineering Computing—From Workstations to the Cloud.”

He explained that converting CAD objects into meshes can result in models with millions of degrees of freedom – the equivalent of millions of equations that must be solved. “This type of simulation, engineering computing, can easily become computationally heavy. It’s important to solve these problems quickly.”

“One of the key challenges of engineering computing: we can never have enough computing resources to model the real world exactly as it really is in all of its intricate detail,” Ding said.

Speeding up simulation involves a set of trade-offs, according to Ding. “We have to start by understanding the level of accuracy and amount of detail we want in our simulation results. From there we can choose different types of simulation systems and different methodologies.”
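The scale Ding describes can be made concrete with a back-of-envelope calculation. The function below and the 3-DOF-per-node figure are illustrative assumptions (typical for a solid structural mesh), not numbers from the presentation:

```python
# Back-of-envelope size of a finite-element model: each mesh node
# typically carries 3 translational degrees of freedom (DOF) in a
# solid structural analysis, and each DOF is one equation to solve.
def estimate_dof(num_nodes: int, dof_per_node: int = 3) -> int:
    """Rough count of equations the solver must handle."""
    return num_nodes * dof_per_node

# A 5-million-node solid mesh yields 15 million equations:
print(f"{estimate_dof(5_000_000):,}")  # 15,000,000
```

Even this simplified count shows how quickly a detailed CAD model turns into a system of millions of equations.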

GPU Revolution

As designs and models have become larger and more complex, they take much longer to analyze. Traditionally, accelerating simulation involved distributing the problem across multiple CPU cores, Ding said. “You take one big problem and chunk it up into lots of smaller problems, then send each of these sections into a different computer. After you run the simulation, you combine it all back together — and have a final result,” he said. This approach can solve huge problems quickly and is common in the aerospace, automotive and electronics industries.
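The divide-and-conquer scheme Ding describes can be sketched in a few lines. This is a toy illustration only: Python threads and a stand-in kernel take the place of the separate machines and real solver partitions an HPC run would use.

```python
# Toy sketch of domain decomposition: split one big problem into
# partitions, hand each partition to a separate worker, then combine
# the partial results back into one answer.
from concurrent.futures import ThreadPoolExecutor

def solve_partition(part):
    # Stand-in for a real solver kernel operating on one partition.
    return sum(x * x for x in part)

def solve_distributed(data, n_workers=4):
    size = (len(data) + n_workers - 1) // n_workers
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(solve_partition, parts)
    return sum(partials)  # recombine the sections into the final result

data = list(range(100_000))
assert solve_distributed(data) == solve_partition(data)
```

The key property, as in the real thing, is that the recombined result matches what a single worker would have produced on the whole problem.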

In the last 10 years, GPU acceleration has entered the picture with the capacity to analyze large models on desktop computers. “This allows us to solve much, much faster—10x to 100x speedup with the same type of problems,” Ding said.

The key factor when selecting a GPU for engineering applications is the amount of memory available, Ding explained. “Memory is the key consideration for simulation size. All GPUs are extremely fast,” he noted. For example, the new NVIDIA RTX 6000 Ada Generation GPU offers 48GB of memory and is 1.5 to 2x faster than its predecessor, depending on the workload.

In industrial engineering applications, some simulations may involve 10 to 20 million cells, Ding said. The amount of GPU memory determines how large a problem you can solve using GPU acceleration. Fortunately, the latest generation of GPUs offers more than enough memory and processing power to tackle large engineering simulations, even on the desktop.
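A rough feasibility check along these lines is easy to script. The bytes-per-cell value below is a hypothetical planning figure, not an Ansys specification; actual memory requirements vary with the solver, physics models and precision used.

```python
# Rough check of whether a CFD model fits in GPU memory.
# bytes_per_cell is an assumed planning number for illustration.
def fits_in_gpu(num_cells: int, bytes_per_cell: int, gpu_mem_gb: float) -> bool:
    required_gb = num_cells * bytes_per_cell / 1e9
    return required_gb <= gpu_mem_gb

# 20 million cells at an assumed 1,500 bytes/cell is about 30GB,
# which fits within the RTX 6000 Ada's 48GB:
print(fits_in_gpu(20_000_000, 1_500, 48))  # True
```

At the same assumed footprint, a model roughly 60% larger would exceed 48GB and push the job off a single card.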

During the presentation, Ding provided an example of an aerodynamic drag simulation. A decade ago, this type of simulation required a high-performance computing cluster. “Now, using a desktop workstation with a GPU, I can get that entire drag prediction model done in less than a few minutes,” Ding said. And that is using one GPU compared to 32 CPU cores. The simulation in the example required 30GB of memory, which was easily handled by the NVIDIA RTX 6000 Ada GPU used for the demonstration.

“Desktop workstations available from partners like Dell and NVIDIA can do the work of a server,” Ding added. “Engineers much prefer to have a workstation that is GPU-enabled, because it cuts simulation times down from hours to just a few minutes.”

In another online demonstration, Ozen provided an overview of how GPU acceleration can improve the performance of the Ansys Lumerical photonics simulation solution. Using the NVIDIA RTX 6000 Ada Generation GPU, simulation times were reduced from approximately 45 minutes using the CPU to less than 8 minutes on the GPU – a roughly 6x speedup.
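The speedup quoted above is simply the ratio of the two run times; a one-liner makes the bookkeeping explicit:

```python
# Speedup is the ratio of baseline (CPU) time to accelerated (GPU) time.
def speedup(cpu_time_min: float, gpu_time_min: float) -> float:
    return cpu_time_min / gpu_time_min

# The Lumerical example: ~45 minutes on CPU vs ~8 minutes on GPU.
print(round(speedup(45, 8), 1))  # 5.6, i.e. the "roughly 6x" quoted
```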

Image courtesy of Ansys and NVIDIA.

 

GPU computing has evolved rapidly over the last 4-5 years. Initially, GPU computing involved offloading computational work from the CPU to the GPU. Now there are native GPU solvers (Ding cited Ansys 2023 R2 as an example), which deliver much greater speedups. A single GPU can be up to 13x faster than an 8-core CPU, according to Ding.

For very large problems, engineers can also tap into multiple GPUs in HPC clusters or via cloud computing resources. Ding said that a large problem in Ansys Fluent can be split across 8 GPUs in the cloud, providing 32x faster performance than an 80-core CPU. “You can solve enormous problems extremely quickly,” he said.

Image courtesy of Ansys and NVIDIA.

GPUs can also be used in a range of engineering simulations, including particle, optics, photonics, and electromagnetic analysis.

“GPU simulation really is on the front burner of most of the simulation development teams these days, because the memory available in these GPUs is now allowing us to do truly industrial-level problems all in a workstation GPU,” he said.

“Literally, it takes less time for our team to run this CFD model than it takes to get a cup of coffee. The productivity improvements available at the workstation level for all engineers who do simulation are very impressive and exciting for all of us,” he added.

The Design & Simulation Summit presentation is available on-demand at the event website. Ozen also presented a webinar on GPU acceleration in partnership with Dell and NVIDIA.

