Jarvis 1.0 Beta: What’s New


Jarvis 1.0 Beta includes fully optimized pipelines for deploying real-time conversational AI applications such as transcription, virtual assistants, and chatbots.

Highlights:

  • ASR, NLU, and TTS models in NGC trained on thousands of hours of speech data
  • Integration with the Transfer Learning Toolkit to retrain ASR and NLU models on custom data
  • Fully accelerated deep learning pipelines optimized to run as scalable services
  • End-to-end workflow and tools to deploy services using one line of code

Download Now




Introductory Resources


Quick Start Guide

Step-by-step guide to deploy pretrained models as services on a local workstation and interact with them through a client (a minimal client sketch appears below).

Get Started
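
Once the services are running locally, requests can be sent from Python over gRPC. The sketch below is a minimal example, assuming the jarvis_api client wheel that ships with the Quick Start package, the default gRPC port 50051, and a 16 kHz mono WAV file named sample.wav; exact module, service, and field names may differ in your release.

    import grpc

    # Assumes the jarvis_api client wheel from the Quick Start package;
    # module and stub names may differ in your release.
    import jarvis_api.audio_pb2 as ja
    import jarvis_api.jarvis_asr_pb2 as jasr
    import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv

    # Connect to the locally deployed Jarvis server (default gRPC port 50051).
    channel = grpc.insecure_channel("localhost:50051")
    asr_client = jasr_srv.JarvisASRStub(channel)

    # Read a mono 16 kHz PCM WAV file to transcribe (placeholder path).
    with open("sample.wav", "rb") as f:
        audio_bytes = f.read()

    config = jasr.RecognitionConfig(
        encoding=ja.AudioEncoding.LINEAR_PCM,
        sample_rate_hertz=16000,
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )

    # Offline (batch) recognition; streaming recognition is also available.
    response = asr_client.Recognize(jasr.RecognizeRequest(config=config, audio=audio_bytes))
    for result in response.results:
        print(result.alternatives[0].transcript)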

Introductory Blog

Learn about the architecture, key features and components in Jarvis that help you build multimodal conversational AI services.

Read Blog

Introductory Webinar

Build a sample Jarvis application for transcription and named entity recognition.


Watch Webinar



Jarvis Samples


Virtual Assistant

Sample demonstrating a simple but complete real-time, domain-specific conversational AI app.

Try Sample

Virtual Assistant (with Rasa)

App showing how to integrate the Rasa Dialog Manager with Jarvis Speech Services.

Try Sample

Transcription

Sample showing transcription and named entity recognition fine-tuned on biomedical and clinical language.

Try Sample



Additional Jarvis Resources

Fine-Tuning with TLT

You can quickly get started with NVIDIA’s free pre-trained models and fine-tune them using the Transfer Learning Toolkit (TLT) in Jarvis. NGC hosts pre-trained models and example notebooks for speech recognition and natural language understanding.

Download TLT pre-trained models from NGC (a minimal download sketch follows below).

Get started with the Jupyter notebooks.
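
As an illustration, the snippet below pulls a pre-trained model with the NGC CLI from Python. It assumes the ngc CLI is installed and configured; the model path shown is a placeholder to be replaced with the exact TLT conversational AI model name and version from the NGC catalog.

    import subprocess

    # Placeholder NGC model path; replace with the exact TLT model name and
    # version listed in the NGC catalog.
    MODEL = "nvidia/tlt-jarvis/speechtotext_english_jasper:deployable_v1.0"

    # Requires the NGC CLI to be installed and configured (`ngc config set`).
    subprocess.run(
        ["ngc", "registry", "model", "download-version", MODEL,
         "--dest", "./pretrained_models"],
        check=True,
    )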

Documentation


Building Real-Time Apps with Jarvis Services



Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.