CUDA on Windows Subsystem for Linux (WSL)
Microsoft Windows is a ubiquitous platform for enterprise, business, and personal computing. However, industry AI tools, models, frameworks, and libraries are predominantly available on Linux. Now all users of AI - whether they are experienced professionals or students and beginners just getting started - can benefit from innovative GPU-accelerated infrastructure, software, and container support on Windows.
The NVIDIA CUDA on WSL driver brings NVIDIA CUDA and AI together with the ubiquitous Microsoft Windows platform to deliver machine learning capabilities across numerous industry segments and application domains.
Developers can now leverage the NVIDIA software stack in the Microsoft Windows WSL environment using the NVIDIA drivers available today.
The NVIDIA Windows GeForce and Quadro production (x86) drivers include CUDA and DirectML support for WSL and can be downloaded below.
We no longer host preview drivers for WSL2 on the developer zone. Developers in the Windows Insider Program can continue to receive bleeding-edge drivers with bug fixes and improvements through Windows Update.
Watch this space for more updates to CUDA on WSL2 support.
GPU support is the most requested feature among WSL users worldwide - including data scientists, ML engineers, and even novice developers.
Access Advanced AI
The most advanced and innovative AI frameworks and libraries are already integrated with NVIDIA CUDA support, including industry leading frameworks like PyTorch and TensorFlow.
The overhead and duplication of maintaining investments in multiple OS compute platforms can be prohibitive - AI users, developers, and data scientists need a quick way to run Linux software on the Windows platforms they already use.
Why Use NVIDIA GPUs On Windows for AI?
GPUs have a robust history of accelerating AI applications for both training and inference. If you are a Microsoft Windows user who wants to develop GPU-accelerated Linux AI applications, NVIDIA’s rich AI and Data Science software ecosystem is now accessible through WSL.
Join the NVIDIA Developer Program and come take advantage of our developer tools, training, platforms, and integrations.
Get Started Developing on GPUs Quickly
The CUDA Toolkit provides everything developers need to build GPU-accelerated applications - including compiler toolchains, optimized libraries, and a suite of developer tools. Use CUDA within WSL and CUDA containers to get started quickly. Features and capabilities will be added to the Preview version of the CUDA Toolkit in future releases.
CUDA TOOLKIT ›
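As a quick sanity check that the CUDA Toolkit is visible from inside WSL, you can look for `nvcc` on the PATH and read its reported release. The sketch below is illustrative, not an official tool; the helper names (`parse_nvcc_release`, `detect_cuda_toolkit`) are our own.

```python
import re
import shutil
import subprocess

def parse_nvcc_release(version_text):
    """Extract the CUDA release number (e.g. '12.4') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", version_text)
    return match.group(1) if match else None

def detect_cuda_toolkit():
    """Return the installed CUDA release inside WSL, or None if nvcc is not on PATH."""
    if shutil.which("nvcc") is None:
        return None
    output = subprocess.run(["nvcc", "--version"],
                            capture_output=True, text=True).stdout
    return parse_nvcc_release(output)

if __name__ == "__main__":
    release = detect_cuda_toolkit()
    print(f"CUDA toolkit release: {release}" if release else "nvcc not found on PATH")
```

If `nvcc` is missing, the toolkit is not installed in your WSL distribution (note that the driver itself is installed on the Windows side, not inside WSL).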
Simplifying Deep Learning
NVIDIA provides access to a number of deep learning frameworks and SDKs, including support for TensorFlow, PyTorch, MXNet, and more.
You can also run pre-built framework containers with Docker and the NVIDIA Container Toolkit in WSL. Frameworks, pre-trained models, and workflows are available from NGC.
DL FRAMEWORKS ›
NVIDIA NGC CONTAINERS ›
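The key piece when running an NGC framework container under WSL is passing `--gpus all` to `docker run` (available in Docker 19.03+ with the NVIDIA Container Toolkit installed). The sketch below simply assembles such a command; the image tag shown is illustrative - check NGC for current tags.

```python
def ngc_docker_command(image, workdir="/workspace"):
    """Build a `docker run` invocation that exposes all GPUs to an NGC container."""
    return [
        "docker", "run",
        "--gpus", "all",               # hand every visible GPU to the container
        "-it", "--rm",                 # interactive session, clean up on exit
        "-v", f"{workdir}:{workdir}",  # mount a host working directory
        image,
    ]

# Example: a PyTorch container from NGC (tag is an example, not a pinned recommendation)
cmd = ngc_docker_command("nvcr.io/nvidia/pytorch:24.05-py3")
print(" ".join(cmd))
```

Running the printed command inside WSL drops you into a shell where PyTorch already sees the GPU, with no framework installation on your part.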
Accelerate Analytics and Data Science
RAPIDS is an open source NVIDIA suite of software libraries to accelerate data science and analytics pipelines on GPUs.
Iterate faster, conveniently on your local Windows PC, reducing training time and improving model accuracy with proven, pre-built libraries.
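Because RAPIDS cuDF mirrors much of the pandas API, the same DataFrame code can run on the GPU or the CPU. The sketch below is a minimal illustration of that portability, assuming the pandas fallback when cuDF is not installed; `mean_by_group` is our own example name.

```python
try:
    import cudf as xdf      # GPU-accelerated DataFrame library from RAPIDS
except ImportError:
    import pandas as xdf    # cuDF mirrors much of the pandas API, so fall back to CPU

def mean_by_group(data):
    """Group records by 'label' and average 'value' - identical code on CPU or GPU."""
    df = xdf.DataFrame(data)
    return df.groupby("label")["value"].mean()

result = mean_by_group({"label": ["a", "a", "b"], "value": [1.0, 3.0, 5.0]})
print(result)
```

On a WSL setup with RAPIDS installed, the `cudf` branch executes the same groupby on the GPU without any change to the calling code.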
“The Microsoft - NVIDIA collaboration around WSL enables masses of expert and new users to learn, experiment with, and adopt premier GPU-accelerated AI platforms without leaving the familiarity of their everyday MS Windows environment.”
- Kam VedBrat, Partner Group Program Manager for Windows AI Platform, Microsoft Corp.
Registered members of the NVIDIA Developer Program can download the driver for CUDA and DirectML support on WSL for their NVIDIA GPU platform.
Microsoft's GPU support in WSL was developed jointly with NVIDIA to help accelerate ML applications.