Mixture of Experts (MoE)
Feb 02, 2026
Optimizing Communication for Mixture-of-Experts Training with Hybrid Expert Parallel
In LLM training, Expert Parallel (EP) communication for hyperscale mixture-of-experts (MoE) models is challenging. EP communication is essentially all-to-all,...
11 MIN READ
Jan 08, 2026
Delivering Massive Performance Leaps for Mixture of Experts Inference on NVIDIA Blackwell
As AI models continue to get smarter, people can rely on them for an expanding set of tasks. This leads users—from consumers to enterprises—to interact with...
6 MIN READ
Nov 06, 2025
Democratizing Large-Scale Mixture-of-Experts Training with NVIDIA PyTorch Parallelism
Training massive mixture-of-experts (MoE) models has long been the domain of a few advanced users with deep infrastructure and distributed-systems expertise....
7 MIN READ
Apr 30, 2024
Leverage Mixture of Experts-Based DBRX for Superior LLM Performance on Diverse Tasks
This week’s model release features DBRX, a state-of-the-art large language model (LLM) developed by Databricks. With demonstrated strength in programming and...
3 MIN READ
Mar 14, 2024
Applying Mixture of Experts in LLM Architectures
Mixture of experts (MoE) large language model (LLM) architectures have recently emerged, both in proprietary LLMs such as GPT-4 and in community models...
12 MIN READ
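The posts above revolve around MoE routing and, at scale, the expert-parallel (EP) all-to-all dispatch it induces. For orientation only, here is a minimal sketch of the basic top-k routing idea in plain PyTorch; the class name and the sizes (d_model, n_experts, top_k) are made up for illustration, it is not code from any of the posts, and it deliberately omits the distributed all-to-all dispatch and load-balancing losses that production MoE training needs.

```python
# Minimal, illustrative top-k MoE layer (assumed names and sizes, not from the posts above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Route each token to its top-k experts and combine expert outputs,
    weighted by the renormalized router probabilities."""
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)          # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                     # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)             # (tokens, n_experts)
        topk_p, topk_idx = probs.topk(self.top_k, dim=-1)     # per-token expert choices
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = topk_idx[:, slot] == e                 # tokens sent to expert e via this slot
                if mask.any():
                    out[mask] += topk_p[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(TopKMoE()(tokens).shape)   # torch.Size([16, 512])
```

In an expert-parallel setup the experts live on different GPUs, so the per-expert masking above becomes a permutation of tokens followed by an all-to-all exchange (and a second all-to-all to return expert outputs), which is the communication pattern the first post in this list is concerned with.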