Data Center / Cloud

Top Posts of 2024 Highlight NVIDIA NIM, LLM Breakthroughs, and Data Science Optimization

2024 was another landmark year for developers, researchers, and innovators working with NVIDIA technologies. From groundbreaking developments in AI inference to empowering open-source contributions, these blog posts highlight the breakthroughs that resonated most with our readers.

NVIDIA NIM Offers Optimized Inference Microservices for Deploying AI Models at Scale

Introduced in 2024, NVIDIA NIM is a set of easy-to-use inference microservices for accelerating the deployment of foundation models. Developers can optimize inference workflows with minimal configuration changes, making scaling seamless and efficient.
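Each NIM microservice exposes an OpenAI-compatible HTTP API, so existing client code needs little more than a new base URL. A minimal stdlib-only sketch, assuming a NIM container serving at http://localhost:8000/v1 with the illustrative model name meta/llama3-8b-instruct (both are placeholders; match them to your deployment):

```python
import json
from urllib import request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # local NIM endpoint (adjust as needed)
MODEL = "meta/llama3-8b-instruct"  # illustrative model name

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """POST the payload to the NIM endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(NIM_URL, data=payload,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires a running NIM container):
# reply = chat("Summarize NVIDIA NIM in one sentence.")
```

Because the request follows the OpenAI chat schema, the same payload shape works against hosted NIM endpoints once an authorization header is added.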

Access to NVIDIA NIM Now Available Free to Developer Program Members

To democratize AI deployment, NVIDIA offers free access to NIM for its Developer Program members, enabling a broader range of developers to experiment with and implement AI solutions.

NVIDIA GB200 NVL72 Delivers Trillion-Parameter LLM Training and Real-Time Inference

The NVIDIA GB200 NVL72 system set new standards by supporting the training of trillion-parameter large language models (LLMs) and delivering real-time inference, pushing the boundaries of AI capabilities.

NVIDIA Transitions Fully Towards Open-Source GPU Kernel Modules

NVIDIA fully transitioned its GPU kernel modules to open-source, empowering developers with greater control, transparency, and adaptability in customizing GPU-related workflows.

An Easy Introduction to Multimodal Retrieval-Augmented Generation

Simplifying the complex world of RAG, the guide demonstrates how combining text and image retrieval enhances AI applications. From chatbots to search systems, multimodal AI is now more accessible than ever.
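The core idea reduces to a few lines: embed text chunks and images into a shared vector space, then rank both kinds of items by similarity to the query embedding. A toy sketch where hand-made vectors stand in for a real multimodal encoder (a CLIP-style model in practice; all names, vectors, and file references here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy shared-embedding index: in practice these vectors come from a
# multimodal encoder; both text chunks and images live in the same space.
INDEX = [
    {"kind": "text",  "ref": "spec.md#cooling",  "vec": [0.9, 0.1, 0.0]},
    {"kind": "image", "ref": "rack_diagram.png", "vec": [0.8, 0.3, 0.1]},
    {"kind": "text",  "ref": "faq.md#pricing",   "vec": [0.0, 0.2, 0.9]},
]

def retrieve(query_vec, k=2):
    """Return the top-k items, text or image, by cosine similarity."""
    ranked = sorted(INDEX, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return ranked[:k]

hits = retrieve([1.0, 0.2, 0.0])
# The retrieved text chunks and image references are then packed into the
# generator's prompt alongside the user's question.
```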

Build an LLM-Powered Data Agent for Data Analysis

This step-by-step tutorial showcases how to build LLM-powered agents, enabling developers to improve and automate data analysis using natural language interfaces.
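The pattern behind such agents: an LLM plans which tool to call for a natural-language question, and the runtime executes that tool against the data. A self-contained sketch in which the planning step is stubbed with a keyword router to keep it runnable (a real agent would prompt an LLM at that step; the data and tool names are invented):

```python
from statistics import mean

# Toy dataset standing in for a real table.
SALES = [
    {"region": "EMEA", "revenue": 120.0},
    {"region": "APAC", "revenue": 95.0},
    {"region": "AMER", "revenue": 180.0},
]

# Tool registry: each tool is a plain function the agent may invoke.
TOOLS = {
    "average_revenue": lambda rows: mean(r["revenue"] for r in rows),
    "top_region": lambda rows: max(rows, key=lambda r: r["revenue"])["region"],
}

def plan(question: str) -> str:
    """Stand-in for the LLM planning step: map a natural-language question
    to a tool name. A real agent would ask an LLM to choose the tool."""
    q = question.lower()
    if "average" in q or "mean" in q:
        return "average_revenue"
    return "top_region"

def answer(question: str):
    """Plan, then execute the chosen tool against the data."""
    return TOOLS[plan(question)](SALES)

answer("Which region had the highest revenue?")  # → "AMER"
```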

Unlock Your LLM Coding Potential with StarCoder2

StarCoder2, a family of open code-generation LLMs, aims to boost developers’ productivity by providing high-quality code suggestions and reducing repetitive coding tasks.

How to Prune and Distill Llama 3.1 8B to an NVIDIA Minitron 4B Model

Take a deep dive into the methods for pruning and distilling the Llama 3.1 8B model into the more efficient Minitron 4B, optimizing performance without compromising accuracy.
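Distillation in this setting trains the pruned student to match the teacher's output distribution. A minimal pure-Python illustration of the temperature-scaled KL divergence commonly used as the distillation loss term (this shows the objective only, not the full pruning-and-distillation pipeline; the logits are made up):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Temperature-scaled KL(teacher || student), the core distillation term."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return (temperature ** 2) * kl

# Identical logits give zero loss; diverging logits are penalized.
kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])  # → 0.0
```

A higher temperature softens both distributions, letting the student learn from the teacher's relative rankings of unlikely tokens rather than only its top prediction.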

How to Take a RAG Application from Pilot to Production in Four Steps

This tutorial outlines a straightforward path to scale Retrieval-Augmented Generation (RAG) applications, emphasizing best practices for production readiness.

RAPIDS cuDF Accelerates pandas Nearly 150x with Zero Code Changes

RAPIDS cuDF delivers a nearly 150x speedup to pandas workflows without requiring any code changes, transforming data science pipelines and boosting productivity for Python users.
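The zero-code-change claim works because cudf.pandas intercepts the pandas API: the same script runs on CPU with plain pandas, or on GPU when launched through the accelerator. For example, given an ordinary pandas script (the data here is a toy stand-in):

```python
# script.py — ordinary pandas code; no cuDF-specific API calls.
import pandas as pd

df = pd.DataFrame({
    "key": ["a", "b", "a", "b", "a"],
    "val": [1, 2, 3, 4, 5],
})

# Group-by aggregation, one of the operations cuDF accelerates on GPU.
totals = df.groupby("key")["val"].sum()
print(totals.to_dict())  # {'a': 9, 'b': 6}
```

Run it as `python script.py` for CPU pandas, or `python -m cudf.pandas script.py` (or `%load_ext cudf.pandas` in Jupyter) to execute the same unchanged code GPU-accelerated, with automatic fallback to pandas for any unsupported operations.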

Looking ahead

As we head into 2025, stay tuned for more transformative innovations.

Subscribe to the Developer Newsletter and stay in the loop on 2025 content tailored to your interests. Follow us on Instagram, Twitter, YouTube, and Discord for the latest developer news.
