Deepak Narayanan

Deepak Narayanan is a final-year PhD student in the Department of Computer Science at Stanford University. He is interested in designing and building software to improve the runtime performance and efficiency of machine learning applications on modern hardware. His work has focused on improving the scalability of distributed model training and on allocating heterogeneous resources to training jobs while optimizing various end-to-end objectives.

Posts by Deepak Narayanan

Technical Walkthrough

Scaling Language Model Training to a Trillion Parameters Using Megatron

Natural Language Processing (NLP) has seen rapid progress in recent years as computation at scale has become more available and datasets have become larger.

17 MIN READ