Maggie Zhang

Maggie Zhang is a deep learning engineer at NVIDIA, working on deep learning frameworks and applications. She received her PhD in computer science and engineering from the University of New South Wales in Australia, where she worked on GPU/CPU heterogeneous computing and compiler optimizations.

Posts by Maggie Zhang

Technical Walkthrough

Accelerating AI Inference Workloads with NVIDIA A30 GPU

Researchers, engineers, and data scientists can use A30 to deliver real-world results and deploy solutions into production at scale. 5 MIN READ
Technical Walkthrough

Deploying NVIDIA Triton at Scale with MIG and Kubernetes

NVIDIA Triton can manage any number and mix of models, support multiple deep-learning frameworks, and integrate easily with Kubernetes for large-scale deployment. 24 MIN READ
Technical Walkthrough

Getting the Most Out of the NVIDIA A100 GPU with Multi-Instance GPU

NVIDIA recently unveiled the A100 Tensor Core GPU, built on third-generation Tensor Core technology, which delivers unprecedented acceleration at every scale for AI… 18 MIN READ
Technical Walkthrough

Getting Kubernetes Ready for the NVIDIA A100 GPU with Multi-Instance GPU

Multi-Instance GPU (MIG) is a feature of the latest generation of NVIDIA GPUs, such as the A100. It enables users to maximize the utilization of a single GPU by… 13 MIN READ
Technical Walkthrough

Training Your Own Voice Font Using Flowtron

Recent conversational AI research has demonstrated the automatic generation of high-quality, human-like audio from text. For example, you can use Tacotron 2 and… 12 MIN READ
Technical Walkthrough

Generate Natural Sounding Speech from Text in Real-Time

This post, intended for developers with a professional-level understanding of deep learning, will help you produce a production-ready AI text-to-speech model. 12 MIN READ