At the GPU Technology Conference in Silicon Valley earlier this year, NVIDIA CEO Jensen Huang introduced PLASTER, an acronym for seven major challenges in delivering AI-based services: Programmability, Latency, Accuracy, Size, Throughput, Energy efficiency, and Rate of learning.
Meeting these challenges will require more than just sticking an ASIC or an FPGA in a data center, Huang said. “Hyperscale data centers are the most complicated computers ever made — how could it be simple?”
A new whitepaper published today explores each of these AI challenges in the context of NVIDIA’s deep learning solutions. PLASTER as a whole is greater than the sum of its parts: anyone developing and deploying AI-based services should factor in all of its elements to arrive at a complete view of deep learning performance. Addressing the challenges described in PLASTER matters for any deep learning solution, and it is especially useful for developing and delivering the inference engines that underpin AI-based services. Each section of the paper includes a brief description of how each framework component is measured, along with an example of a customer leveraging NVIDIA solutions to tackle critical problems with machine learning.
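To make two of PLASTER’s elements concrete, here is a minimal sketch of how latency and throughput might be measured for an inference service. The `infer` function is a hypothetical stand-in for a real model’s forward pass, and the percentile and batch-size choices are illustrative assumptions, not taken from the whitepaper.

```python
import time
import statistics

def infer(batch):
    # Hypothetical placeholder for a real model forward pass.
    return [x * 2 for x in batch]

def benchmark(batch_size=8, iterations=100):
    """Measure per-request latency and overall throughput of infer()."""
    batch = list(range(batch_size))
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer(batch)
        latencies.append(time.perf_counter() - start)
    # Tail latency (99th percentile) often matters more than the mean
    # for interactive AI services.
    p99 = statistics.quantiles(latencies, n=100)[98]
    # Throughput: total samples processed per second of compute time.
    throughput = batch_size * iterations / sum(latencies)
    return p99, throughput

p99, throughput = benchmark()
print(f"p99 latency: {p99 * 1e6:.1f} us, throughput: {throughput:.0f} samples/s")
```

In practice there is a tension between these two metrics: larger batches usually raise throughput but also raise per-request latency, which is one reason PLASTER treats them as separate challenges.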
Read the whitepaper >
PLASTER: Bringing Deep Learning Inferencing to Millions of Servers
May 07, 2018

Related resources
- GTC session: Connect with the Experts: Accelerating and Deploying Deep Learning Models to Production (Spring 2023)
- GTC session: Scaling Deep Learning Training: Fast Inter-GPU Communication with NCCL (Spring 2023)
- GTC session: Connect with the Experts: Deep Learning, Machine Learning, and Data Science (Spring 2023)
- SDK: Nsight Deep Learning Designer
- SDK: Magnum IO SDK
- Webinar: Easily Deploy Multi-Framework AI Models at Scale with Triton