NVIDIA Developer Blog 2018 Highlights

This proved to be a banner year for the NVIDIA Developer Blog, which published more than 80 new technical articles. At that pace, it’s easy to miss out on cool new insights and information, so let’s take a little time to review the year. First, let’s look at the most popular new articles.

The Five Most Read New Posts

NVIDIA launched the shiny new Turing architecture this year, which prompted a number of Turing-related articles, particularly about ray tracing. AI articles make up the other popular reads on the site. Here are the top five posts published this year.

  1. Introduction to NVIDIA RTX and DirectX Ray Tracing. Martin Stich wrote our first article on real-time ray tracing and DXR. It’s an excellent introduction and deserves all the clicks.
  2. TensorRT Integration Speeds Up TensorFlow Inference. Written by Sami Kama, Julie Bernauer, and Siddharth Sharma, this post shows how integrating NVIDIA’s TensorRT optimizer accelerates inference in Google’s popular TensorFlow framework.
  3. NVIDIA Turing Architecture In-Depth. Articles on architecture can sometimes seem dry and by-the-numbers. Authors Emmett Kilgariff, Henry Moreton, Nick Stam, and Brandon Bell keep the discussion on the most important new GPU architecture in NVIDIA’s history lively.
  4. Introduction to Turing Mesh Shaders. Some posts become “evergreen”, drawing readers steadily over time. Christoph Kubisch’s post on Turing Mesh Shaders is likely to be one of them, attracting developers who want to adopt this performance-enhancing new technology built into Turing.
  5. Volta Tensor Core GPU Achieves New AI Performance Milestones. We all love to poke and prod new architectures to see just how well they perform, then argue about the results. While this carries my byline, it’s really the work of a host of NVIDIA engineers, who pulled together the benchmarks and contributed heavily to the post.

Five Cool Partner Posts

We also open the Developer Blog to NVIDIA partners with interesting technical insights into how NVIDIA technology gets used in the real world. Let’s look at some particularly cool examples.

  • Real-Time Noise Suppression with Deep Learning. Davit Baghdasaryan, CEO of 2Hz.ai, dives into techniques for noise suppression using deep learning, applicable to both mobile communications and teleconferencing.
  • DeepSig: Deep Learning for Wireless Communications. Today’s wireless communications often means weak signals, spotty service, and frequent dropouts. The art of designing wireless circuits and antennas is arcane and requires deep expertise. Ben Hilburn’s team at DeepSig discusses using AI to optimize wireless communications circuits and teaches a bit about how communications technology works.
  • Hacking Ansel to Slash VR Rendering Times. Race Krehel of VR developer Warrior9 writes about using NVIDIA’s Ansel virtual camera system to cut rendering time for VR.
  • Using OpenACC to Port Solar Storm Modeling Code to GPUs. Ron Caplan’s technical team took a complex solar storm model written for CPUs and converted it for GPU acceleration using OpenACC. Come for the cool animated GIFs showing simulated solar eruptions, stay for the deep discussion of how to parallelize complex scientific applications using OpenACC. While the focus is on Fortran code, the concepts apply to any language.
  • Using MATLAB and TensorRT on NVIDIA GPUs. MathWorks’ MATLAB remains one of the most popular frameworks for scientific data analysis. Incorporating TensorRT enables MATLAB users to further accelerate image recognition on GPUs. Bill Chou of MathWorks has all the details, including code snippets and performance data.

Three Cool Posts You May Have Missed

With so many new posts, it’s easy to miss some excellent technical insights. Here are three of my favorites.

  • Fast and Fun: My First Ray Tracing Demo. Eric Haines takes the code from a popular DXR ray tracing tutorial and adds the sphereflake test scene, just for fun, and is pleased to find he can render 48 million spheres at interactive rates. It’s a nice tour of the strengths of ray tracing and an intro to basic concepts.
  • CUDA on Turing Opens New GPU Compute Possibilities. NVIDIA’s underlying framework for GPU compute, CUDA, remains a mainstay of our AI and accelerated computing efforts. Olivier Giroux discusses how new features built into Turing enable CUDA capabilities unavailable in earlier architectures, helping programmers tackle previously intractable GPU programming challenges.
  • Storage Performance Basics for Deep Learning. You can have the fastest GPU and memory subsystem on the planet and still hit performance bottlenecks if you don’t tune your applications and systems architecture to work well with existing storage technologies. James Mauro walks through storage optimization ideas, backs them up with benchmarks, and shows how to test and tune your own storage subsystem.

The Long Tail Continues to Yield Dividends

Some articles never die, nor even fade away. I want to highlight three timeless tutorials that remain go-to resources for new and experienced programmers using NVIDIA technologies.

  1. An Even Easier Introduction to CUDA. Mark Harris’s post is the most widely read on the Developer Blog, packed with code snippets, diagrams, and examples of writing CUDA code, and it closes with links to other useful articles. A minimal CUDA sketch in its spirit appears after this list.
  2. Programming Tensor Cores in CUDA 9. Jeremy Appleyard and Scott Yokim deliver an excellent guide to using Tensor Cores for fast, 16-bit matrix math, which arrived first with Volta and continues in Turing. A bare-bones sketch of the WMMA API they describe also appears after this list.
  3. The Deep Learning in a Nutshell series by Tim Dettmers. This tutorial series consists of four posts written by Tim in 2015 and 2016. If you’re new to deep learning, they represent a great way to learn more about this important AI technology. Begin with Deep Learning in a Nutshell: Core Concepts, continue with Deep Learning in a Nutshell: History and Training, then finish with Deep Learning in a Nutshell: Sequence Learning and Deep Learning in a Nutshell: Reinforcement Learning.
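
If you’ve never touched CUDA, here is a minimal sketch in the spirit of Mark’s tutorial: a grid-stride add kernel operating on Unified Memory. It’s my own condensed illustration rather than the post’s exact listing; compile it with nvcc and run it on any CUDA-capable GPU.

    #include <cmath>
    #include <cstdio>

    // Kernel: add the elements of two arrays, one grid-stride loop per thread.
    __global__ void add(int n, float *x, float *y)
    {
        int index = blockIdx.x * blockDim.x + threadIdx.x; // this thread's global index
        int stride = blockDim.x * gridDim.x;               // total threads in the grid
        for (int i = index; i < n; i += stride)
            y[i] = x[i] + y[i];
    }

    int main(void)
    {
        int N = 1 << 20; // 1M elements
        float *x, *y;

        // Unified Memory: one pointer usable from both the CPU and the GPU.
        cudaMallocManaged(&x, N * sizeof(float));
        cudaMallocManaged(&y, N * sizeof(float));

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all N elements.
        int blockSize = 256;
        int numBlocks = (N + blockSize - 1) / blockSize;
        add<<<numBlocks, blockSize>>>(N, x, y);

        cudaDeviceSynchronize(); // wait for the GPU before touching y on the CPU

        // Every element of y should now be 3.0f.
        float maxError = 0.0f;
        for (int i = 0; i < N; i++) maxError = fmaxf(maxError, fabsf(y[i] - 3.0f));
        printf("Max error: %f\n", maxError);

        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The grid-stride loop keeps the kernel correct for any array size and launch configuration, which is why it shows up so often in introductory CUDA material.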
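
And as a taste of the Tensor Core guide, here is a bare-bones sketch of the CUDA WMMA API it covers: one warp computing a single 16x16x16 half-precision matrix multiply-accumulate. The kernel name and layout choices are mine for illustration; it requires a Volta-or-later GPU (compile with -arch=sm_70 or higher).

    #include <cuda_fp16.h>
    #include <mma.h>

    using namespace nvcuda;

    // One warp computes a single 16x16x16 matrix multiply-accumulate,
    // D = A*B + C, with half-precision inputs accumulated in float.
    __global__ void wmma_16x16x16(const half *a, const half *b, float *c)
    {
        // Per-warp fragments that map onto the Tensor Core hardware.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::col_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);              // start the accumulator at zero
        wmma::load_matrix_sync(a_frag, a, 16);          // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // the Tensor Core operation
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_col_major);
    }

Launch it with a single warp, e.g. wmma_16x16x16<<<1, 32>>>(d_a, d_b, d_c); production code tiles many such fragments across warps and thread blocks to build full matrix multiplies, which is exactly the territory the post explores.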

Looking Forward

The coming year looks to be one of the most important for AI, computer graphics, and data science in NVIDIA’s history. I’m looking forward to all the new articles coming from both inside and outside the company. Drop by and visit often. Better yet, pitch me an article! You can reach me at lcase@nvidia.com.
