Markel Sanz Ausin is a deep learning algorithm engineer at NVIDIA. In his current role, he works on building and deploying large language models as part of the NeMo-Megatron framework. Markel has developed solutions for data preparation, model training, evaluation, and model checkpoint conversion, targeted at NVIDIA DGX SuperPOD clusters. He is the main developer of a hyperparameter search tool that finds the optimal configuration for a given model and decides how to split the model across multiple GPUs. Before joining NVIDIA, Markel completed his PhD at North Carolina State University, where he researched improving educational systems with deep reinforcement learning.