PLASTER: Bringing Deep Learning Inferencing to Millions of Servers
May 07, 2018

At the GPU Technology Conference in Silicon Valley earlier this year, NVIDIA CEO Jensen Huang introduced a new acronym, PLASTER, to frame seven major challenges in delivering AI-based services: Programmability, Latency, Accuracy, Size, Throughput, Energy efficiency, and Rate of learning.
Meeting these challenges will require more than just sticking an ASIC or an FPGA in a data center, Huang said. “Hyperscale data centers are the most complicated computers ever made — how could it be simple?”
A new whitepaper published today explores each of these challenges in the context of NVIDIA’s deep learning solutions. PLASTER as a whole is greater than the sum of its parts: anyone developing and deploying AI-based services should weigh all seven elements together to arrive at a complete view of deep learning performance. Addressing the challenges PLASTER describes matters for any deep learning solution, and it is especially useful when building and delivering the inference engines that underpin AI-based services. Each section of the paper briefly describes how to measure that element of the framework and includes an example of a customer using NVIDIA solutions to tackle a critical machine learning problem.
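Two of those measurements, latency and throughput, are easy to get wrong in practice, so here is a minimal sketch of how one might benchmark an inference engine. The `measure_inference`, `infer`, and `batch` names are hypothetical stand-ins, not part of any NVIDIA API; the point is the pattern of warm-up runs, wall-clock timing, and tail-latency reporting.

```python
import time
import statistics

def measure_inference(infer, batch, n_warmup=10, n_runs=100):
    """Time an inference callable and report latency and throughput.

    `infer` and `batch` are hypothetical stand-ins: any function that runs
    a model's forward pass on a batch of inputs. For GPU backends, `infer`
    should block until results are ready (e.g. by copying outputs back to
    the host), or the timings will only reflect kernel-launch overhead.
    """
    for _ in range(n_warmup):
        infer(batch)  # warm-up runs: stabilize caches, JIT, and GPU clocks

    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer(batch)
        latencies.append(time.perf_counter() - start)

    mean = statistics.mean(latencies)
    p99 = statistics.quantiles(latencies, n=100)[98]  # tail latency matters for service SLAs
    return {
        "mean_latency_s": mean,
        "p99_latency_s": p99,
        "throughput_samples_per_s": len(batch) / mean,
    }
```

Note how the two metrics pull against each other: larger batches raise throughput but also raise per-request latency, which is exactly the kind of tension PLASTER asks service builders to balance.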
Read the whitepaper >
