Ashwin Nanjappa

Ashwin Nanjappa is an engineering manager on the TensorRT team at NVIDIA. He leads the MLPerf Inference initiative to demonstrate the performance and energy efficiency of NVIDIA accelerators. He is also involved in improving the performance of the TensorRT DL inference library. Before joining NVIDIA, he worked on training and deployment of DL models for CV, GPU-accelerated ML/CV algorithms for depth cameras, and multimedia libraries for cellphones and DVD players. He has a Ph.D. in computer science from the National University of Singapore (NUS), with a focus on GPU algorithms for 3D computational geometry.

Posts by Ashwin Nanjappa

Technical Walkthrough

Getting the Best Performance on MLPerf Inference 2.0

NVIDIA delivered leading results for MLPerf Inference 2.0, including 5x more performance for NVIDIA Jetson AGX Orin, an SoC platform built for edge devices and robotics.