Generative AI


Generative AI uses neural networks to learn patterns from existing data and generate new, original text, image, audio, and video content.

Figure: A stack diagram of generative AI hardware and software solutions

How Generative AI Works

Generative AI models learn by recognizing patterns and structures within massive datasets of text, code, images, audio, video, and other data. These models use neural networks, often transformer networks, to process the information. Developers can then leverage the models to generate new content, improve existing content, or create entirely new applications. This process can be used for tasks like creating realistic images from text descriptions, generating musical compositions, or building chatbots that can engage in human-like conversations.
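To make this concrete, the sketch below generates text from a prompt with a small pretrained transformer model. It uses the open-source Hugging Face Transformers library and the "gpt2" checkpoint as illustrative choices, neither of which is named above; any transformer-based text generator would demonstrate the same idea.

```python
# Minimal sketch of transformer-based text generation.
# Library and model choice ("gpt2") are illustrative assumptions.
from transformers import pipeline

# Load a small pretrained transformer language model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by sampling tokens based on patterns
# it learned from its training data.
result = generator(
    "A chatbot greets a new user by saying",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```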

Explore Generative AI Tools and Technologies

NVIDIA NIM

NVIDIA NIM™ is a set of easy-to-use microservices designed to accelerate the deployment of generative AI models across any cloud or data center.
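As a rough sketch of what deployment looks like in practice, NIM microservices for LLMs expose an OpenAI-compatible API, so a standard OpenAI client can be pointed at a locally running endpoint. The base URL, API key, and model identifier below are placeholder assumptions for a local deployment and will vary with your setup.

```python
# Hedged sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible endpoint. URL, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-locally")

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example model ID; substitute your deployed model
    messages=[{"role": "user", "content": "Summarize what a microservice is in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```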

NVIDIA AI Blueprints

NVIDIA AI Blueprints are comprehensive reference workflows that accelerate AI application development and deployment. They feature NVIDIA acceleration libraries, SDKs, and microservices for AI agents, digital twins, and more.

NVIDIA Cosmos

NVIDIA Cosmos™ is a platform of state-of-the-art generative world foundation models and data processing pipelines that accelerate the development of highly performant physical AI systems such as robots and self-driving cars.

NVIDIA TensorRT

NVIDIA TensorRT™ is an ecosystem of APIs for high-performance deep learning inference. TensorRT includes an inference runtime and model optimizations that deliver low latency and high throughput for production applications.  
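A minimal sketch of that workflow, assuming the TensorRT 8.x-style Python bindings: parse a trained model exported to ONNX, enable a reduced-precision optimization, and serialize an engine for deployment. The file paths are placeholders.

```python
# Hedged sketch: building a TensorRT engine from an ONNX model.
# Paths are placeholders; API follows the TensorRT 8.x Python bindings.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse a trained model that was exported to ONNX.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse ONNX model")

# Apply optimizations such as FP16 precision, then serialize the engine.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```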

NVIDIA Triton Inference Server

NVIDIA Triton™ Inference Server, part of the NVIDIA AI platform and available with NVIDIA AI Enterprise, is open-source software that standardizes AI model deployment and execution across every workload.
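The sketch below shows what a client-side request to a running Triton server can look like, using the tritonclient HTTP API. The model name, tensor names, and input shape are placeholder assumptions that depend entirely on the deployed model's configuration.

```python
# Hedged sketch: sending an inference request to a running Triton server.
# Model name, tensor names, and shape are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request matching the model's declared input tensor.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input__0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)

outputs = [httpclient.InferRequestedOutput("output__0")]
result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
print(result.as_numpy("output__0").shape)
```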

NVIDIA Maxine

NVIDIA Maxine™ is a collection of NIM microservices and SDKs for deploying AI features that enhance audio and video for real-time communications platforms and post-production.

NVIDIA Riva

NVIDIA Riva is a GPU-accelerated multilingual speech and translation AI SDK for building and deploying fully customizable, real-time conversational AI pipelines.
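As a hedged sketch, the Riva Python client (the nvidia-riva-client package) can send audio to a running Riva server for offline transcription. The server URI and audio file below are placeholder assumptions, and the exact configuration fields depend on your Riva deployment.

```python
# Hedged sketch: offline speech recognition with the Riva Python client,
# assuming a Riva server is already running at the placeholder URI.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
asr_service = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    language_code="en-US",
    enable_automatic_punctuation=True,
)

with open("sample.wav", "rb") as f:
    audio_bytes = f.read()

# Run batch (offline) recognition on the whole file and print the transcript.
response = asr_service.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```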

Build, Customize, and Deploy Generative AI With NVIDIA NeMo

NVIDIA NeMo Curator

NVIDIA NeMo™ Curator improves generative AI model accuracy by processing text, image, and video data at scale for training and customization.

It also provides pre-built pipelines for generating synthetic data to customize and evaluate generative AI systems.

NVIDIA NeMo Customizer

NVIDIA NeMo Customizer is a high-performance, scalable microservice that simplifies fine-tuning and alignment of AI models for domain-specific use cases, making it easier to adopt generative AI across industries.

NVIDIA NeMo Evaluator

NVIDIA NeMo Evaluator provides a microservice for assessing generative AI models and pipelines across academic and custom benchmarks on any platform.

NVIDIA NeMo Retriever

NVIDIA NeMo Retriever is a collection of generative AI microservices that enable organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses.

NVIDIA NeMo Guardrails

NVIDIA NeMo Guardrails orchestrates dialog management, helping ensure accuracy, appropriateness, and security in LLM-based applications. It helps organizations safeguard the generative AI systems they operate.
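A minimal sketch of how the open-source nemoguardrails package wraps an LLM with rails: the "./guardrails_config" directory is a placeholder for a rails configuration (model settings plus dialog and safety rails) that you would author separately.

```python
# Hedged sketch: wrapping an LLM with NeMo Guardrails.
# "./guardrails_config" is a placeholder config directory.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Messages flow through the configured rails, which can block or rewrite
# unsafe requests and responses around the underlying LLM call.
response = rails.generate(
    messages=[{"role": "user", "content": "Can you help me reset my account password?"}]
)
print(response["content"])
```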

NVIDIA NeMo Framework

The NVIDIA NeMo framework gives developers an enterprise-grade toolkit, with extensive configurability and optimization techniques, for training custom generative AI and speech AI models.

It includes tools for pre-training, customization, retrieval-augmented generation, and guardrailing, offering enterprises an easy, cost-effective, and fast way to adopt generative AI.
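As a hedged starting point, the framework's Python collections let you pull a pretrained checkpoint and run it locally before customizing it further. The model name and audio path below are placeholder assumptions; the same pattern applies to other NeMo model classes.

```python
# Hedged sketch: loading a pretrained NeMo speech model as a starting point
# for customization. Model name and audio file are placeholders.
import nemo.collections.asr as nemo_asr

# Pull a pretrained ASR checkpoint; fine-tuning would start from this model.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_conformer_ctc_small"
)

# Run inference on a local audio file.
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```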

Generative AI Learning Resources