GPU Accelerated Computing with Python

Python is one of the most popular programming languages today for science, engineering, data analytics, and deep learning applications. However, as an interpreted language, it has long been considered too slow for high-performance computing. That has changed with CUDA Python from Continuum Analytics.

With CUDA Python, using the Numba Python compiler, you get the best of both worlds: rapid iterative development with Python combined with the speed of a compiled language targeting both CPUs and NVIDIA GPUs.


To run CUDA Python, you will need the CUDA Toolkit installed on a system with CUDA capable GPUs.

If you do not have a CUDA-capable GPU, you can access one of the thousands of GPUs available from cloud service providers including Amazon AWS, Microsoft Azure and IBM SoftLayer. The NVIDIA-maintained CUDA Amazon Machine Image (AMI) on AWS, for example, comes pre-installed with CUDA and is available for use today.

Use this guide for easy steps to install CUDA. To set up CUDA Python, first install the Anaconda Python distribution, then install the latest version of the Numba package. You can find detailed installation instructions in the Numba documentation.
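As a minimal sketch of that setup, assuming you use conda (the package names below are taken as assumptions; consult the Numba documentation for your platform):

```shell
# Install Numba plus the CUDA toolkit libraries into the
# active conda environment (package names assumed).
conda install numba cudatoolkit
```

On a machine without conda, `pip install numba` also works, though the CUDA libraries must then come from the CUDA Toolkit installer.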

Or, watch the short video below and follow along.


You are now ready to write your first Python program for the GPU. The video below walks through a simple example that adds two vectors, so you can follow along.

If you are new to Python, explore the beginner section of the Python website for some excellent getting started resources. The blog, An Even Easier Introduction to CUDA, introduces key CUDA concepts through simple examples.

In the Numba documentation you will find information about how to vectorize functions to accelerate them automatically as well as how to write CUDA code in Python. Download and execute Jupyter Notebooks for the Mandelbrot and Monte Carlo Option Pricer examples on your local machine.


Check out Numba's GitHub repository for additional examples to practice.

NVIDIA also provides hands-on training through a collection of self-paced labs. The labs guide you step by step through editing and executing code, with interactive visual tools woven into a simple, immersive experience. Practice the techniques you learned in the materials above through these hands-on labs.

For a more formal, instructor-led introduction to CUDA, explore the Introduction to Parallel Programming course on Udacity. The course covers a series of image processing algorithms, such as you might find in Photoshop or Instagram. You'll be able to program and run your assignments on high-end GPUs, even if you don't own one yourself.


The Numba package is available as an open-source project sponsored by Continuum Analytics.

The CUDA Toolkit is a free download from NVIDIA and is supported on Windows, Mac, and most standard Linux distributions.

So, now you're ready to deploy your application?

Register today for free access to NVIDIA TESLA GPUs in the cloud.
