GPU Accelerated Computing with Python

Python is one of the most popular programming languages today for science, engineering, data analytics and deep learning applications. However, as an interpreted language, it has been considered too slow for high-performance computing. That has changed with CUDA Python from Continuum Analytics.

With CUDA Python, using the Numba Python compiler, you get the best of both worlds: rapid iterative development with Python combined with the speed of a compiled language targeting both CPUs and NVIDIA GPUs.

1. SETUP CUDA PYTHON

To run CUDA Python, you will need the CUDA Toolkit installed on a system with CUDA-capable GPUs.

If you do not have a CUDA-capable GPU, you can access one of the thousands of GPUs available from cloud service providers including Amazon AWS, Microsoft Azure and IBM SoftLayer. The NVIDIA-maintained CUDA Amazon Machine Image (AMI) on AWS, for example, comes pre-installed with CUDA and is available for use today.

Use this guide for easy steps to install CUDA. To set up CUDA Python, first install the Anaconda Python distribution, then install the latest version of the Numba package. You can find detailed installation instructions in the Numba documentation.
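Assuming a conda-based setup, the steps typically look like the sketch below. The environment name is illustrative, and package names and channels may differ on your system; the Numba documentation is the authoritative reference.

```shell
# Create a conda environment with Numba and the CUDA toolkit libraries
conda create -n cudapy python numba cudatoolkit
conda activate cudapy

# Quick check that Numba can see a CUDA-capable GPU
python -c "from numba import cuda; print(cuda.gpus)"
```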

Or, watch the short video below and follow along.

2. YOUR FIRST CUDA PYTHON PROGRAM

You are now ready for your first Python program on the GPU. The video below walks through a simple example, adding two vectors, that you can follow along with.

If you are new to Python, explore the beginner section of the Python website for some excellent getting started resources. The blog, An Even Easier Introduction to CUDA, introduces key CUDA concepts through simple examples.

In the Numba documentation you will find information about how to vectorize functions to accelerate them automatically as well as how to write CUDA code in Python. Download and execute Jupyter Notebooks for the Mandelbrot and Monte Carlo Option Pricer examples on your local machine.

3. PRACTICE

Check out Numba's GitHub repository for additional examples to practice with.

NVIDIA also provides hands-on training through a collection of self-paced labs. The labs guide you step by step through editing and executing code, with interaction with visual tools woven into a simple, immersive experience. Practice the techniques you learned in the materials above through these hands-on labs.

For a more formal, instructor-led introduction to CUDA, explore the Introduction to Parallel Programming course on Udacity. The course covers a series of image-processing algorithms such as you might find in Photoshop or Instagram. You'll be able to program and run your assignments on high-end GPUs, even if you don't have one yourself.

Availability

The Numba package is available as an open-source project sponsored by Continuum Analytics.

The CUDA Toolkit is a free download from NVIDIA and is supported on Windows, Mac, and most standard Linux distributions.

So, are you now ready to deploy your application?

Register today for free access to NVIDIA TESLA GPUs in the cloud.
