Maggie Zhang

Maggie Zhang is a deep learning engineer at NVIDIA, working on deep learning frameworks and applications. She received her PhD in computer science and engineering from the University of New South Wales in Australia, where she worked on GPU/CPU heterogeneous computing and compiler optimizations.

Posts by Maggie Zhang

Conversational AI

Autoscaling NVIDIA Riva Deployment with Kubernetes for Speech AI in Production

Speech AI applications, from call centers to virtual assistants, rely heavily on automatic speech recognition (ASR) and text-to-speech (TTS). ASR can process... 13 MIN READ
Data Center / Cloud

Dividing NVIDIA A30 GPUs and Conquering Multiple Workloads

Multi-Instance GPU (MIG) is an important feature of NVIDIA H100, A100, and A30 Tensor Core GPUs, as it can partition a GPU into multiple instances. Each... 9 MIN READ
Data Center / Cloud

Accelerating AI Inference Workloads with NVIDIA A30 GPU

The NVIDIA A30 GPU is built on the latest NVIDIA Ampere Architecture to accelerate diverse workloads like AI inference at scale, enterprise training, and HPC... 6 MIN READ
Data Center / Cloud

Deploying NVIDIA Triton at Scale with MIG and Kubernetes

NVIDIA Triton Inference Server is open-source AI model serving software that simplifies the deployment of trained AI models at scale in production. Clients... 24 MIN READ
Data Science

Getting the Most Out of the NVIDIA A100 GPU with Multi-Instance GPU

With third-generation Tensor Core technology, NVIDIA recently unveiled the A100 Tensor Core GPU, which delivers unprecedented acceleration at every scale for AI,... 18 MIN READ
Data Science

Getting Kubernetes Ready for the NVIDIA A100 GPU with Multi-Instance GPU

Multi-Instance GPU (MIG) is a new feature of the latest generation of NVIDIA GPUs, such as the A100. It enables users to maximize the utilization of a single GPU by... 13 MIN READ