Ashwin Nanjappa

Ashwin Nanjappa is an engineering manager on the TensorRT team at NVIDIA. He leads the MLPerf Inference initiative to demonstrate the performance and energy efficiency of NVIDIA accelerators, and is also involved in improving the DL inference performance of the TensorRT library. Before joining NVIDIA, he worked on training DL models for CV, developing GPU-accelerated ML algorithms for depth cameras, and building multimedia libraries for cellphones and DVD players. He has a Ph.D. in computer science from the National University of Singapore (NUS), with a focus on GPU algorithms for 3D computational geometry.

Posts by Ashwin Nanjappa

Technical Walkthrough

Full-Stack Innovation Fuels Highest MLPerf Inference 2.1 Results for NVIDIA

Today’s AI-powered applications are enabling richer experiences, fueled by larger and more complex AI models as well as the application of many models in... 14 MIN READ
Technical Walkthrough

Getting the Best Performance on MLPerf Inference 2.0

Models like Megatron 530B are expanding the range of problems AI can address. However, as models continue to grow in complexity, they pose a twofold challenge for... 11 MIN READ