GPU Accelerated Computing with C and C++

Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python. Below you will find some resources to help you get started using CUDA.

1. SET UP CUDA

Install the free CUDA Toolkit on a Linux, Mac or Windows system with one or more CUDA-capable GPUs. Follow the instructions in the CUDA Quick Start Guide to get up and running quickly.

Or, watch the short video below and follow along.

If you do not have a GPU, you can access one of the thousands of GPUs available from cloud service providers including Amazon AWS, Microsoft Azure and IBM SoftLayer. The NVIDIA-maintained CUDA Amazon Machine Image (AMI) on AWS, for example, comes pre-installed with CUDA and is available for use today.

For more detailed installation instructions, refer to the CUDA installation guides. For help with troubleshooting, browse and participate in the CUDA Setup and Installation forum.

2. YOUR FIRST CUDA PROGRAM

You are now ready to write your first CUDA program. The article An Even Easier Introduction to CUDA introduces key concepts through simple examples that you can follow along with.

The video below walks through writing a simple program that adds two vectors.

The Programming Guide in the CUDA Documentation introduces key concepts covered in the video, including the CUDA programming model, important APIs, and performance guidelines.
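As a taste of what these materials cover, here is a minimal sketch of a vector-add program in CUDA C++. It is illustrative only, not the exact code from the article or video; it assumes a CUDA-capable GPU and uses unified memory (`cudaMallocManaged`) so the same pointers work on host and device.

```
#include <cstdio>

// Kernel: each thread computes one element of the sum.
__global__ void add(int n, const float *x, const float *y, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // 1M elements
    float *x, *y, *out;

    // Unified memory is accessible from both CPU and GPU.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));

    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    add<<<blocks, 256>>>(n, x, y, out);
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("out[0] = %f\n", out[0]);

    cudaFree(x); cudaFree(y); cudaFree(out);
    return 0;
}
```

Saved as `add.cu`, this compiles with the toolkit's compiler: `nvcc add.cu -o add`.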

NVIDIA also provides hands-on training through a collection of self-paced labs. The labs guide you step by step through editing and executing code, and even interacting with visual tools, all woven together into a simple, immersive experience.

3. PRACTICE CUDA

Practice the techniques you learned in the materials above through more hands-on labs created for intermediate and advanced users.

The CUDA C Best Practices Guide presents established parallelization and optimization techniques and explains programming approaches that can greatly simplify the development of GPU-accelerated applications.
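One widely taught pattern in this kind of material is the grid-stride loop, which decouples the kernel launch configuration from the problem size while keeping memory accesses coalesced. A hedged sketch (illustrative, not taken from the guide):

```
// Grid-stride loop: each thread handles multiple elements, striding by
// the total number of threads in the grid, so a fixed-size launch can
// process an array of any length.
__global__ void scale(int n, float a, float *x) {
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        x[i] = a * x[i];
}

// Launch with a fixed grid; correctness does not depend on n:
//   scale<<<256, 256>>>(n, 2.0f, x);
```

Because the loop covers any remainder, the same kernel works for arrays far larger than one launch's worth of threads, and block/grid sizes can be tuned for occupancy independently of the data size.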

For a more formal, instructor-led introduction to CUDA, explore the Introduction to Parallel Programming course on Udacity. The course covers a series of image processing algorithms like those you might find in Photoshop or Instagram. You'll be able to program and run your assignments on high-end GPUs, even if you don't have one yourself.

Additional Resources

Code Samples

Availability

The CUDA Toolkit is a free download from NVIDIA and is supported on Windows, Mac, and most standard Linux distributions.

So, now you’re ready to deploy your application?
Register today for free access to NVIDIA Tesla GPUs in the cloud.

Latest News

Sony Breaks ResNet-50 Training Record with NVIDIA V100 Tensor Core GPUs

Researchers from Sony today announced a new speed record for training ResNet-50 on ImageNet in only 224 seconds (three minutes and 44 seconds) with 75 percent accuracy, using 2,100 NVIDIA Tesla V100 Tensor Core GPUs.

AI Research Detects Glaucoma with 94 Percent Accuracy

Glaucoma affects more than 2.7 million people in the U.S. and is one of the leading causes of blindness in the world.

AI Study Predicts Alzheimer’s Six Years Before Diagnosis

A new study published in Radiology describes how deep learning can improve the ability of brain imaging to predict Alzheimer’s disease years before an actual diagnosis.

Visualizing Star Polymers in Record Time

In the last five minutes, you have probably come into contact with more polymers than you can count. In fact, they are everywhere: in grocery bags, water bottles, phones, computers, food packaging, auto parts, tires, airplanes, and toys.

Blogs: Parallel ForAll

NVIDIA Apex: Tools for Easy Mixed-Precision Training in PyTorch

Most deep learning frameworks, including PyTorch, train using 32-bit floating point (FP32) arithmetic by default.

New Optimizations To Accelerate Deep Learning Training on NVIDIA GPUs

The pace of AI adoption across diverse industries depends on maximizing data scientists’ productivity.

The Peak-Performance-Percentage Analysis Method for Optimizing Any GPU Workload

Figuring out how to reduce the GPU frame time of a rendering application on PC is challenging for even the most experienced PC game developers.

Parallel Shader Compilation for Ray Tracing Pipeline States

In ray tracing, a single pipeline state object (PSO) can contain any number of shaders. This number can grow large depending on scene content and the ray types handled by the PSO, and as it does, the construction cost of the state object can increase significantly.