Top Stories

Most Popular NVIDIA Technical Blog Posts of 2023: Generative AI, LLMs, Robotics, and Virtual Worlds Breakthroughs

As we approach the end of another exciting year at NVIDIA, it’s time to look back at the most popular stories from the NVIDIA Technical Blog in 2023.

Groundbreaking research and developments in fields such as generative AI, large language models (LLMs), high-performance computing (HPC), and robotics are leading the way in transformative AI solutions and capturing the interest of our readers. Other top posts explore advances in video technology and video conferencing that enhance the user experience, along with breakthroughs in AI security.

The following are some of the highlights from 2023.

Rapidly Generate 3D Assets for Virtual Worlds with Generative AI

New generative AI technologies in NVIDIA Omniverse enhance 3D asset creation in virtual environments. These advancements aim to make building virtual worlds in the metaverse faster and easier.

Improve Human Connection in Video Conferences with NVIDIA Maxine Eye Contact

NVIDIA Maxine Eye Contact revolutionizes video conferencing by using AI to adjust your gaze toward the camera in real time. It also maintains natural eye color and adapts to different head positions and gaze directions, creating a more authentic and connected virtual interaction.

NVIDIA TensorRT-LLM Supercharges Large Language Model Inference on NVIDIA H100 GPUs

NVIDIA TensorRT-LLM, a component of the NVIDIA NeMo framework, is tailored to boost LLM inference on NVIDIA H100 GPUs. This open-source library offers optimized processing and supports multi-GPU and multi-node setups, enabling efficient and scalable deployment of LLMs in generative AI applications.
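One reason multi-GPU support matters for LLM inference is that large models simply do not fit in a single GPU's memory. The back-of-the-envelope sketch below (plain Python, not the TensorRT-LLM API; the function names and the 70B example are illustrative) shows how sharding weights with tensor parallelism reduces per-GPU memory:

```python
# Hypothetical sizing helpers -- not part of TensorRT-LLM.

def weight_memory_gb(n_params_b: float, bytes_per_param: int = 2) -> float:
    """Memory for model weights in GB (FP16 = 2 bytes per parameter)."""
    return n_params_b * 1e9 * bytes_per_param / 1e9

def per_gpu_memory_gb(n_params_b: float, tp_degree: int,
                      bytes_per_param: int = 2) -> float:
    """Weight memory per GPU when sharded across tp_degree GPUs."""
    return weight_memory_gb(n_params_b, bytes_per_param) / tp_degree

# A 70B-parameter model in FP16 needs ~140 GB of weights -- more than
# one 80 GB H100 holds, but only ~35 GB per GPU when split across 4.
print(weight_memory_gb(70))      # 140.0
print(per_gpu_memory_gb(70, 4))  # 35.0
```

Activation memory and the KV cache add further per-GPU overhead on top of the weights, which is why real deployments size with headroom beyond this estimate.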

Develop AI-Powered Robots, Smart Vision Systems, and More with NVIDIA Jetson Orin Nano Developer Kit

The latest NVIDIA Jetson Orin Nano Developer Kit is a powerful tool for developing AI-powered robots and smart vision systems. Offering a huge boost in AI performance over the prior generation, it is compatible with all NVIDIA Jetson Orin Nano and NX modules for prototyping edge AI products.

NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems

A toolkit for developing safe and trustworthy LLM conversational systems, NeMo Guardrails enables developers to implement rules that maintain safe and relevant conversations. It integrates with LLMs like ChatGPT, is built on the NVIDIA Colang language, and is available through NVIDIA AI Foundations.
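The core idea of a guardrail, stripped to its essentials, is a programmable check that sits between the user and the model. The sketch below is plain Python, not the NeMo Guardrails or Colang API; the blocked-topic list and function names are hypothetical stand-ins for real rules:

```python
# Minimal guardrail sketch (illustrative only -- not NeMo Guardrails).

BLOCKED_TOPICS = ("password", "credit card")  # hypothetical rule set

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"LLM response to: {prompt}"

def apply_guardrail(user_message: str) -> str:
    """Check user input against rules before it reaches the model."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    return call_llm(user_message)  # forward safe input to the model

print(apply_guardrail("What's the weather like?"))
print(apply_guardrail("Tell me my credit card number"))
```

Real guardrails go well beyond keyword matching (Colang defines conversational flows, and checks can run on model output as well as input), but the pass-through-or-refuse structure is the same.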

An Introduction to Large Language Models: Prompt Engineering and P-Tuning

This introduction to LLMs covers key techniques like prompt engineering and tuning. It discusses how LLMs function, their role in AI applications like text generation, and the significance of creating effective prompts and optimizing performance in various scenarios.
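Prompt engineering often means assembling an instruction, a few demonstrations, and the query into one string so the model infers the task from examples. A minimal sketch, with an illustrative format of my own choosing rather than the one from the original post:

```python
# Few-shot prompt construction sketch; format and names are illustrative.

def build_few_shot_prompt(instruction, examples, query):
    """Combine an instruction, (input, output) examples, and a query."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model completes from here
    return "\n".join(lines)

examples = [("great movie!", "positive"), ("waste of time", "negative")]
prompt = build_few_shot_prompt("Classify the sentiment of each review.",
                               examples, "surprisingly fun")
print(prompt)
```

P-tuning takes the complementary approach: instead of hand-writing the prompt text, it learns continuous prompt embeddings while keeping the model weights frozen.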

NVIDIA AI Red Team: An Introduction

The NVIDIA AI Red Team details its approach to assessing and mitigating risks in AI and machine learning systems from an information security standpoint. A group of security professionals and data scientists, the team aims to identify and address risks related to technical vulnerabilities, harm and abuse scenarios, and other security challenges in ML systems.

NVIDIA Grace CPU Superchip Architecture In-Depth

Take an in-depth look at the architecture and features of the NVIDIA Grace CPU Superchip. Offering major advancements in compute density and power efficiency, the Grace CPU excels in memory bandwidth and data movement efficiency, making it a powerhouse for HPC and AI workloads.

Improving Video Quality and Performance with AV1 and NVIDIA Ada Lovelace Architecture

Improve video quality and performance using the AV1 codec and the NVIDIA Ada Lovelace architecture. This integration enhances video encoding and decoding, improving compression efficiency, quality, and throughput, making it ideal for a wide range of video applications.

TensorRT-LLM consists of the TensorRT deep learning compiler and includes optimized kernels, pre- and post-processing steps, and multi-GPU/multi-node communication primitives for groundbreaking performance on NVIDIA GPUs. It also improves ease of use and extensibility through an open-source, modular Python API for defining, optimizing, and executing new architectures and enhancements as LLMs evolve.

Subscribe to the Developer Newsletter and stay in the loop on 2024 content tailored to your interests. Follow us on Instagram, Twitter, YouTube, and Discord for the latest developer news.
