Get Started With the NVIDIA NeMo Framework
NVIDIA NeMo™ is an end-to-end, cloud-native enterprise framework for developers to build, customize, and deploy generative AI models with billions of parameters.
The NeMo framework provides an accelerated training workflow with 3D parallelism techniques. It offers a choice of several customization techniques and is optimized for at-scale inference of language and image models in multi-GPU and multi-node configurations. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises.
Get Started Resources
Collection of Tasks
- Language Models Training (Documentation | Example)
- Prompt Learning (Documentation | Tutorial | Blog)
- Question Answering (Documentation | Tutorial)
- Token Classification (Documentation | Tutorial)
- Punctuation and Capitalization (Documentation | Tutorial)
- Joint Intent and Slot Classification (Documentation | Tutorial)
- Machine Translation (Documentation)
Build Trustworthy, Safe, and Secure LLM Applications
Programmable Guardrails for LLM-Based Applications
NeMo Guardrails is a toolkit for developing trustworthy LLM-based conversational systems. It lets developers add easily programmable rails that define desired user interactions within an application. It natively supports LangChain, adding a layer of safety, security, and topical guardrails to existing LLM-based conversational applications.
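As a sketch of what a programmable rail looks like, the following is a minimal example in Colang, the configuration language NeMo Guardrails uses to define user intents, bot responses, and conversation flows. The specific intent and flow names here are illustrative, not taken from any shipped configuration:

```colang
# Recognize a user greeting from sample utterances
define user express greeting
  "hello"
  "hi there"

# Canned bot response for that intent
define bot express greeting
  "Hello! How can I help you today?"

# Flow: when the user greets, the bot greets back
define flow greeting
  user express greeting
  bot express greeting
```

Rails like this are typically placed in a configuration directory and loaded at runtime via the toolkit's `RailsConfig` and `LLMRails` classes, which wrap the underlying LLM so that matching conversations follow the defined flows.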
Read Technical Blog | Try NeMo Guardrails Now
Production-Ready Solution for Generative AI
For a secure, optimized, full-stack solution with support, security, and API stability, NeMo is available as part of NVIDIA AI Enterprise, which offers enterprises a path to the leading edge of AI without the potential risks of open-source software.
The NVIDIA NeMo framework is available to download from its GitHub repo.