
Frictionless Collaboration and Rapid Prototyping in Hybrid Environments with NVIDIA AI Workbench

NVIDIA AI Workbench is a free development environment manager that streamlines data science, AI, and machine learning (ML) projects on the systems of your choice. The goal is to provide a frictionless way to create, compute, and collaborate on and across PCs, workstations, data centers, and clouds. The basic user experience is straightforward:

  • Easy setup on single systems: Click-through installation in minutes on Windows, Ubuntu, and macOS, with a one-line install on remote systems.
  • Managed experience for decentralized deployment: A free, PaaS/SaaS-style UX in truly hybrid contexts, with no need for a centralized, service-based platform.
  • Seamless collaboration for experts and beginners: Friendly Git, container, and application management without limiting customization by power users.
  • Consistent across users and systems: Migrate workloads and applications across different systems while maintaining functionality and user experience. 
  • Simplified GPU handling: Handles system dependencies like NVIDIA drivers and the NVIDIA Container Toolkit, as well as GPU-enabled container runtime configuration.

This post explores highlights of the October release of NVIDIA AI Workbench, the most significant release since the product launch at GTC 2024 and a big step toward the full product vision.

Release highlights

This section details the major new capabilities and user-requested updates in the latest release.

Major new capabilities include:

  • Enhanced collaboration through expanded Git support, including branching, merging, diffs, and finer-grained control over commits and gitignore.
  • Complex applications and workflows, enabled by multicontainer environments with Docker Compose support.
  • Simple, fast, and secure rapid prototyping through application sharing with single-use URLs.

User-requested updates:

  • Dark mode for the Desktop App
  • Improved installation on localized versions of Windows

Expanded Git support

Previously, AI Workbench supported only single, monolithic commits on the main branch. Users had to manage branches and merges manually, which caused confusion, especially around resolving merge conflicts. Now, users can manage branches, merges, and conflicts directly in the Desktop App and the CLI. In addition, they can see and triage individual file diffs for commits. The UI is built to work seamlessly with manual Git operations and updates to reflect relevant changes.

A screenshot of the AI Workbench Desktop App tab for Git branching showing two different branches.
Figure 1. AI Workbench Desktop App tab for Git branching

These features are found in two new tabs on the Desktop App: Changes and Branches.

  • Changes: Gives a line-by-line view of the diffs between the working tree and previous commits. Users can now select and commit file changes individually or in bulk, based on visible file diffs and tracked changes (addition, modification, or deletion), and can individually reject changes or add a file to gitignore. The view also updates dynamically to reflect manual Git actions, for example, manually staging a file and then following up with a change to that file in the working tree.
  • Branches: Provides branch management, including creation, switching, and merging, as well as visibility for remote branches on a Git server. Merging branches with a conflict initiates a conflict resolution flow that users can complete within the UI or move to a terminal or file editor of their choice (see the sketch below).

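For readers who prefer the terminal, the following is a minimal sketch of the kind of manual Git operations the UI stays in sync with; the branch name, file path, and commit message are illustrative.

    # Create and switch to a feature branch; the Branches tab reflects this.
    git checkout -b feature-prompt-tuning

    # Stage and commit a single file; the Changes tab shows the same diff.
    git add notebooks/tune.ipynb
    git commit -m "Add prompt-tuning notebook"

    # Merge back into main; a conflict here starts the UI resolution flow.
    git checkout main
    git merge feature-prompt-tuning
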
Learn more about how these advanced Git features work.

Multicontainer support with Docker Compose stacks

AI Workbench now supports Docker Compose. Users can work with multicontainer applications and workflows with the same ease of configuration, reproducibility, and portability that AI Workbench provides for single-container environments. 

Screenshot of a graphical UI showing affordances for adding a Docker Compose file to an AI Workbench Project.
Figure 2. The Docker Compose feature in the AI Workbench Environment Management tab

The basic idea is to add a Docker Compose-based “stack” that is managed by AI Workbench and connects to the main development container. To add the stack, a user just needs to add the appropriate Docker Compose file to the project repository and do some configuration in the Desktop App or CLI.
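
As a rough illustration, a stack can be as simple as the hypothetical compose.yaml below, placed at the root of the project repository; the service names and images are placeholders, not a prescribed layout.

    # Write a minimal, illustrative Compose file into the project repository.
    cat > compose.yaml <<'EOF'
    services:
      api:
        image: python:3.11-slim
        command: python -m http.server 8000
        ports:
          - "8000:8000"
      cache:
        image: redis:7
    EOF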

We’re using Docker Compose for a few reasons. First, we didn’t want to develop in a vacuum, which is why we’ve been collaborating with the Docker team on features like a managed Docker Desktop install.

Second, we want users to be able to work with multicontainer applications outside of AI Workbench, and Docker Compose is the easiest way to do that. The vision for this feature is to enable streamlined, powerful development and compute for multicontainer applications within AI Workbench that can then be stood up outside of AI Workbench with a simple docker-compose up command.
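
For example, with the Compose file sketched above, someone without AI Workbench could stand the same stack up and tear it down directly:

    # Bring the stack up in the background, check it, and shut it down.
    docker compose -f compose.yaml up -d
    docker compose ps
    docker compose down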

This multicontainer feature is new and will continue to evolve. We would love to get feedback and help you sort out any issues through the NVIDIA AI Workbench Developer Forum.

Learn more about how Docker Compose works.

Web application sharing through secure URLs

AI Workbench enables users to easily spin up managed web applications that are built into a project. The process is fairly simple: create or clone a project with the web app installed, start the project, then start the app, and it appears in your browser. 

This approach is great for a developer UX, but it wasn’t good for rapid prototyping and collaboration. If you wanted another user to access and test your application, you either asked them to install AI Workbench and clone and run the project, or you had to fully extract the application to run it and make it available to them. The first is a speed bump for the user, and the second is a speed bump for the developer.

We eliminated these speed bumps with a simple feature that lets you configure a remote AI Workbench for external access and create single-use, secure URLs for running web applications in a project on that remote. You just need to make sure the user has access to port 10000 on the remote, and the application will be directly accessible. All they have to do is click the link and go to the app.
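
From the AI Workbench CLI, the flow in Figure 3 looks roughly like the following; the subcommand names are an approximation of that flow rather than documented syntax, and the ssh tunnel is only needed if port 10000 on the remote isn’t directly reachable.

    # Approximate CLI flow (command names are assumptions based on Figure 3).
    nvwb open my-project     # open the project on the remote
    nvwb start jupyterlab    # start the built-in JupyterLab app
    nvwb create share-url    # generate a single-use URL for the running app

    # If port 10000 isn't directly reachable, a standard ssh tunnel works:
    ssh -L 10000:localhost:10000 user@remote-host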

A command line interface with AI Workbench commands showing how to open a project, start JupyterLab and then generate a URL to share JupyterLab with another user.
Figure 3. Developers can now give end users direct access to applications running in an AI Workbench Project on a remote through secure, one-time-use URLs

Enabling this kind of access is useful for rapid prototyping and collaboration. That’s why various SaaS offerings provide this as a managed service. The difference with AI Workbench is that you can provide this access on your own resources and in your own network, for example on data center resources or a shared server. It doesn’t have to be in the cloud. 

AI Workbench keeps things secure by restricting this access to a single browser and to a single application that’s running in the project. This means a user can’t share the URL with someone else, and they are constrained to the web app that you shared with them.

Learn more about how application sharing works.

Dark mode and localized Windows installation

Many users requested a dark mode option because it’s easier on the eyes. It’s now available and can be selected through the Settings window, accessible directly from within the Desktop App. Learn more about how dark mode works.

Windows users are by far our main demographic for local installs, but not all of them use the English language pack, and this blocked AI Workbench installation due to how we handled some WSL commands. In particular, users working in Cyrillic or Chinese were blocked on Windows. We adjusted how we handle non-English language packs, and installation should now work well. If you were previously blocked by this, give it another try. If it still doesn’t work for you, let us know in the NVIDIA AI Workbench Developer Forum so we can continue to improve this capability.

New AI Workbench projects 

This release introduces new example projects designed to jumpstart your AI development journey; they are detailed below. An AI Workbench project is a structured Git repository that defines a containerized development environment in AI Workbench. AI Workbench projects provide:

  • Effortless setup and GPU configuration: Simply clone a project from GitHub or GitLab, and AI Workbench handles the rest with automatic GPU configuration (see the sketch after this list).
  • Development integrations: Seamless support for popular development environments such as Jupyter and VS Code, as well as support for user-configured web applications.
  • Containerized and customizable environments: Projects are containerized, isolated, and easily modifiable. Adapt example projects to suit your specific needs while ensuring consistency and reproducibility. 

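As a hedged sketch of that setup flow, cloning an example project from the CLI might look like the following; the nvwb subcommand and repository URL are illustrative assumptions, and the same flow is available in the Desktop App.

    # Illustrative only: clone a public example project into AI Workbench.
    nvwb clone project https://github.com/NVIDIA/workbench-example-hybrid-rag
    # AI Workbench then builds the container and applies GPU configuration.
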
Explore NVIDIA AI Workbench example projects.

Multimodal virtual assistant example project

This project enables users to build their own virtual assistant using a multimodal retrieval-augmented generation (RAG) pipeline with fallback to web search. Users can interact with two RAG-based applications to learn more about AI Workbench, converse with the user documentation, troubleshoot their own installation, or even focus the RAG pipeline on their own custom product.

  • Control-Panel: A customizable Gradio app for working with product documentation. Users can upload webpages, PDFs, images, and videos to a persistent vector store and query them. For inference, users can select cloud endpoints, like those on the NVIDIA API Catalog, or use self-hosted endpoints to run their own inference (see the sketch after this list).
  • Public-Chat: With product documents loaded, the Gradio app is a simplified, “read-only” chatbot that you can share with end users through the new AI Workbench App Sharing feature. 
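
To make the cloud endpoint option concrete, here is a minimal sketch of querying an OpenAI-compatible chat endpoint on the NVIDIA API Catalog; the model name is illustrative, and NVIDIA_API_KEY is assumed to hold a valid API key.

    # Hedged sketch: query a cloud inference endpoint on the NVIDIA API Catalog.
    curl https://integrate.api.nvidia.com/v1/chat/completions \
      -H "Authorization: Bearer $NVIDIA_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "meta/llama-3.1-8b-instruct",
            "messages": [{"role": "user", "content": "What is NVIDIA AI Workbench?"}]
          }'
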
A GIF demonstrating how a user can submit a query to the virtual assistant and see the generated response.
Figure 4. The Public-Chat web app, a read-only, pared-down chat application designed to be more consumable and shareable for end users

Competition-Kernel example project

This project provides an easy, local experience when working on Kaggle competitions. You can easily leverage your local machine or a cloud instance to work on competition datasets, write code, build out models, and submit results, all through AI Workbench (see the sketch after this list). The Competition-Kernel project offers:

  • A managed experience for developing and testing on your own GPUs, with setup and customization in minutes.
  • Easy version control and code tracking through GitHub or GitLab, plus straightforward collaboration.
  • The power of using a local, dedicated IDE: robust debugging, intelligent code completion, extensive customization options.
  • Easy plugin to existing data sources (external or your own).
  • No Internet? No problem. Develop while offline.
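
As a small usage sketch, the official Kaggle CLI pairs naturally with this project once your Kaggle API token is configured; the competition name below is just an example.

    # Download a competition dataset and submit results with the Kaggle CLI.
    pip install kaggle
    kaggle competitions download -c titanic -p data/
    kaggle competitions submit -c titanic -f submission.csv -m "First attempt"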

Get started  

This release of NVIDIA AI Workbench marks a significant step forward in providing a frictionless experience for AI development across GPU systems. New features in this release, including expanded Git support, multicontainer environments, and secure web app sharing, streamline developing and collaborating on AI workloads. Explore these features in the new example projects available with this release or create your own projects.

To get started with AI Workbench, install the application from the webpage. For more information about installing and updating, see the NVIDIA AI Workbench documentation.

Explore a range of NVIDIA AI Workbench example projects, from data science to RAG.

Visit the NVIDIA AI Workbench Developer Forum to report issues and learn more about how other developers are using AI Workbench.
