Megatron

Mar 09, 2026
Implementing Falcon-H1 Hybrid Architecture in NVIDIA Megatron Core
In the rapidly evolving landscape of large language model (LLM) development, NVIDIA Megatron Core has emerged as the foundational framework for training massive...
9 MIN READ

Jan 28, 2026
Speeding Up Variable-Length Training with Dynamic Context Parallelism and NVIDIA Megatron Core
This post introduces Dynamic Context Parallelism (Dynamic-CP), a scheduling approach in NVIDIA Megatron Core used for LLM post-training or DiT pre-training. It...
12 MIN READ

Aug 20, 2025
Reinforcement Learning with NVIDIA NeMo-RL: Megatron-Core Support for Optimized Training Throughput
The initial release of NVIDIA NeMo-RL included training support through PyTorch DTensor (otherwise known as FSDP2). This backend enables native integration with...
7 MIN READ

Jul 12, 2024
Train Generative AI Models More Efficiently with New NVIDIA Megatron-Core Functionalities
First introduced in 2019, NVIDIA Megatron-LM sparked a wave of innovation in the AI community, enabling researchers and developers to use the underpinnings of...
11 MIN READ