Jinho Suh

Jinho is a senior deep learning architect on the DLSIM team at NVIDIA. He works on performance modeling and analysis of deep learning workloads on NVIDIA accelerators, and contributes to the NVIDIA MLPerf Inference implementation. Before joining NVIDIA, he worked on server CPU and SoC architectures and microarchitectures at Intel and Arm. He holds a Ph.D. in computer engineering from the University of Southern California, with a concentration in computer architecture.

Posts by Jinho Suh

Generative AI

NVIDIA H200 Tensor Core GPUs and NVIDIA TensorRT-LLM Set MLPerf LLM Inference Records

Generative AI is unlocking new computing applications that greatly augment human capability, enabled by continued model innovation. Generative AI... 11 MIN READ
Data Center / Cloud

Leading MLPerf Inference v3.1 Results with NVIDIA GH200 Grace Hopper Superchip Debut

AI is transforming computing, and inference is how the capabilities of AI are deployed in the world’s applications. Intelligent chatbots, image and video... 13 MIN READ
Networking

New MLPerf Inference Network Division Showcases NVIDIA InfiniBand and GPUDirect RDMA Capabilities

In MLPerf Inference v3.0, NVIDIA made its first submissions to the newly introduced Network division, which is now part of the MLPerf Inference Datacenter... 9 MIN READ
Data Center / Cloud

Setting New Records in MLPerf Inference v3.0 with Full-Stack Optimizations for AI

The most exciting computing applications currently rely on training and running inference on complex AI models, often in demanding, real-time deployment... 15 MIN READ
Simulation / Modeling / Design

Getting the Best Performance on MLPerf Inference 2.0

Models like Megatron 530B are expanding the range of problems AI can address. However, as models continue to grow in complexity, they pose a twofold challenge for... 11 MIN READ