GPU Accelerated Computing with C and C++

Using the CUDA Toolkit, you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries or develop custom applications in languages including C, C++, Fortran, and Python. Below you will find resources to help you get started with CUDA.
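To illustrate the drop-in-library approach, the sketch below uses cuBLAS to run a SAXPY (y = a*x + y) on the GPU. It is a minimal example, not production code: error checking is omitted, and the array size and values are arbitrary.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 1024;
    float alpha = 2.0f;
    float h_x[1024], h_y[1024];
    for (int i = 0; i < n; i++) { h_x[i] = 1.0f; h_y[i] = 3.0f; }

    // Allocate device memory and copy the input vectors to the GPU.
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, n * sizeof(float), cudaMemcpyHostToDevice);

    // cuBLAS performs y = alpha * x + y entirely on the GPU.
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

    // Copy the result back; each element should be 2*1 + 3 = 5.
    cudaMemcpy(h_y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);

    cublasDestroy(handle);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

Build with `nvcc saxpy.cu -lcublas` on a system with a CUDA-capable GPU.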


Install the free CUDA Toolkit on a Linux, Mac, or Windows system with one or more CUDA-capable GPUs. Follow the instructions in the CUDA Quick Start Guide to get up and running quickly.

Or, watch the short video below and follow along.

If you do not have a GPU, you can access one of the thousands of GPUs available from cloud service providers including Amazon AWS, Microsoft Azure and IBM SoftLayer. The NVIDIA-maintained CUDA Amazon Machine Image (AMI) on AWS, for example, comes pre-installed with CUDA and is available for use today.

For more detailed installation instructions, refer to the CUDA installation guides. For help with troubleshooting, browse and participate in the CUDA Setup and Installation forum.


You are now ready to write your first CUDA program. The article An Even Easier Introduction to CUDA introduces key concepts through simple examples that you can follow along with.

The video below walks through writing a program that adds two vectors.

The Programming Guide in the CUDA documentation introduces the key concepts covered in the video, including the CUDA programming model, important APIs, and performance guidelines.
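For reference, the vector-addition program from that tutorial has roughly the following shape: a `__global__` kernel launched over many threads, with data shared between CPU and GPU via Unified Memory (`cudaMallocManaged`). Variable names here are illustrative; the grid-stride loop lets any launch configuration cover the full array.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: each thread handles elements index, index+stride, index+2*stride, ...
__global__ void add(int n, float *x, float *y) {
    int index  = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
}

int main(void) {
    const int N = 1 << 20;  // 1M elements
    float *x, *y;

    // Unified Memory: accessible from both host and device.
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover N elements.
    int blockSize = 256;
    int numBlocks = (N + blockSize - 1) / blockSize;
    add<<<numBlocks, blockSize>>>(N, x, y);

    // Wait for the GPU to finish before reading results on the host.
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]);  // each element should now be 3.0

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```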


NVIDIA provides hands-on training in CUDA through a collection of self-paced and instructor-led courses. The self-paced online training, powered by GPU-accelerated workstations in the cloud, guides you step by step through editing and executing code and interacting with visual tools. All you need is a laptop and an internet connection to access the complete suite of free courses and certification options.

The CUDA C Best Practices Guide presents established parallelization and optimization techniques and explains coding approaches that can greatly simplify the development of GPU-accelerated applications.
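One practice covered in that guide is timing kernels with CUDA events before and after optimizing, so you measure the GPU work itself rather than host-side overhead. A minimal sketch, assuming a CUDA-capable GPU (the `scale` kernel here is just a placeholder workload):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Placeholder workload: multiply every element by a scalar.
__global__ void scale(int n, float *x, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void) {
    const int n = 1 << 20;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; i++) x[i] = 1.0f;

    // CUDA events record timestamps on the device itself.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    scale<<<(n + 255) / 256, 256>>>(n, x, 2.0f);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);  // wait until the stop event completes

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    return 0;
}
```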

Additional Resources

Code Samples


The CUDA Toolkit is a free download from NVIDIA and is supported on Windows, Mac, and most standard Linux distributions.

So, now you’re ready to deploy your application?
Register today for free access to NVIDIA Tesla GPUs in the cloud.

Latest News

Generating Character Animations from Speech with AI

Researchers from the Max Planck Institute for Intelligent Systems, a member of NVIDIA’s NVAIL program, developed an end-to-end deep learning algorithm that can take any speech signal as input - and realistically animate it in a wide range of adult…


From fluid dynamics and weather simulation, to computational chemistry and bioinformatics, HPC applications span across many domains.

NVIDIA and Red Hat: Simplifying NVIDIA GPU Driver Deployment on Red Hat Enterprise Linux

Based on feedback from our users, NVIDIA and Red Hat have worked closely to improve the user experience when installing and updating NVIDIA software on RHEL, including GPU drivers and CUDA

Developer Spotlight: Enabling the SKA Radio Telescope to Explore the Universe

The Square Kilometre Array (SKA) project is an effort to build the world’s largest radio telescope, with a collecting area of over one square kilometre.

Blogs: Parallel ForAll

ArchiGAN: a Generative Stack for Apartment Building Design

AI will soon massively empower architects in their day-to-day practice. This potential is around the corner and my work provides a proof of concept.

TensorFlow Performance Logging Plugin nvtx-plugins-tf Goes Public

The new nvtx-plugins-tf library enables users to add performance-logging nodes to TensorFlow graphs. (TensorFlow is an open-source library widely used for training deep neural network (DNN) models.)

Migrating to NVIDIA Nsight Tools from NVVP and Nvprof

If you use the NVIDIA Visual Profiler or the nvprof command line tool, it’s time to transition to something newer: NVIDIA Nsight Tools. Don’t worry! The new tools still offer the same profiling, optimization, and deployment workflow.

NVIDIA Boosts AI Performance in MLPerf v0.6

The relentless pace of innovation is most apparent in the AI domain.