Instructor Qualifications for All Current DLI Courses

Review the sections below to ensure you have the qualifications necessary for the workshop you are interested in.


Generative AI with Diffusion Models

Instructor qualifications:
Candidates must demonstrate thorough, up-to-date experience with deep learning, computer vision, and diffusion models. Ideal candidates should have background knowledge of the surrounding material as well as active roles that expose them to the latest trends, innovations, and emerging intuitions. Qualifying experiences include:

  • A professional role (Ex: Machine Learning Engineer, Data Scientist) architecting deep learning projects that generate images
  • Active open-source contribution or coordination efforts in the area
  • Academic coursework in using AI to generate images.
Candidates should have the following:
  • Proficiency in Python and PyTorch
  • Active intuitive understanding of CLIP and multimodal AEs/VAEs/GANs/Stable Diffusion
  • Intuitive understanding of audio/video/image classification/captioning/transcription
  • Foundation in statistics, including the normal distribution and random sampling (see the sketch after this list)
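
For illustration, a minimal sketch of the kind of PyTorch and sampling fluency expected: a toy DDPM-style forward-noising step. The schedule, shapes, and stand-in data below are placeholder assumptions, not course material.

    import torch

    # Toy linear beta schedule and cumulative alphas for T diffusion steps
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    x0 = torch.rand(8, 3, 64, 64)                   # stand-in batch of "images"
    t = torch.randint(0, T, (8,))                   # random timestep per image
    noise = torch.randn_like(x0)                    # samples from a standard normal

    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # noised inputs a denoiser would learn to invert
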
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience

Rapid Application Development using Large Language Models

Instructor qualifications:
Candidates must demonstrate thorough, up-to-date experience with deep learning, large language models, and agent systems. Ideal candidates should have background knowledge of the surrounding material as well as active roles that expose them to the latest trends, innovations, and emerging intuitions. Qualifying experiences include:

  • Chat model/multimodal model architecture design experience
  • Experience with the training loop and pipeline assumptions/intuitions
  • Active open-source contribution or coordination efforts in the area
  • Experience orchestrating dialog management and information retrieval systems
Candidates should have the following:
  • Advanced proficiency with Python, sufficient for reading HuggingFace source code
  • Comfort with the HuggingFace ecosystem, including serialization, model release, HF Transformers, etc. (a minimal sketch follows this list)
  • Experience designing systems with LLM constituent components
  • Familiarity with PyTorch, deep learning, generative AI, multimodal models, etc.
  • Understanding of experimentation/deployment with LLM systems, including hardware requirements, safety considerations, evaluation techniques, etc.
  • Intuitive understanding of audio/video/image classification/captioning/transcription
  • Active intuitive understanding of CLIP and multimodal AEs/VAEs/GANs/Stable Diffusion
  • LangChain experience, including intuitions and details of current developments
  • Familiarity with RAG, including LlamaIndex, VDB services, retriever models, etc.
  • Comfort with NVIDIA value propositions surrounding LLMs, RAG, NeMo, etc.
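
For illustration, a minimal sketch of the expected HuggingFace fluency (assumes the transformers package is installed; the small public gpt2 checkpoint is used purely as a placeholder):

    from transformers import pipeline

    # Load a small public checkpoint and generate a short continuation
    generator = pipeline("text-generation", model="gpt2")
    out = generator("Retrieval-augmented generation lets an LLM", max_new_tokens=20)
    print(out[0]["generated_text"])
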
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience

Efficient Large Language Model (LLM) Customization

Instructor qualifications:
Candidates must possess a comprehensive understanding of model customization techniques, especially in the context of large language models (LLMs) such as LLaMA-2 and GPT models. Ideal candidates should have hands-on experience with advanced prompt engineering and fine-tuning methodologies. They must be up-to-date with the latest trends and innovations in this rapidly evolving field. Key qualifications include:

  • Expertise in Prompt Engineering:
    Proficient in designing effective prompts, including multi-step prompts and system context integration.
    Demonstrated ability in leveraging inference parameters to optimize model responses.
    Experience with few-shot learning techniques, both in theoretical understanding and practical applications.
    Ability to use few-shot learning as a method for maintaining chatbot history and context.
  • Model Fine-Tuning Knowledge:
    Practical experience with p-tuning and LoRA techniques for fine-tuning LLMs (a minimal LoRA sketch follows this list).
    Familiarity with the customization of various LLMs, including LLaMA-2 and GPT models.
    Understanding of the nuances and challenges in model customization for specific applications.
  • Python Programming Skills:
    Advanced proficiency in Python, with the ability to use classes, imports, loops, and higher-order functions effectively.
    Experience in applying Python skills in the context of LLM customization and prompt engineering.
  • Educational Experience:
    Proven teaching experience, preferably in a classroom setting or through significant online educational content creation.
    Ability to convey complex technical concepts in an understandable and engaging manner.
    Experience in designing and delivering course content related to AI, machine learning, or similar technical fields.
  • Current Involvement in the Field:
    Active engagement with the latest developments and trends in LLMs and AI.
    Participation in open-source projects or professional forums related to LLM customization and prompt engineering.
  • Practical Application Experience:
    Demonstrated ability to apply theoretical knowledge in real-world scenarios.
    Experience in developing and deploying customized LLM solutions in various contexts.
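
For illustration, a minimal LoRA fine-tuning setup. This is a sketch only, assuming the transformers and peft packages; the base checkpoint and hyperparameters are placeholder choices.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model

    # Wrap the base model with low-rank adapters on the attention projections
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                        target_modules=["c_attn"], task_type="CAUSAL_LM")
    model = get_peft_model(base, config)
    model.print_trainable_parameters()   # only the adapter weights are trainable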

Building RAG Agents with LLMs

Candidates must demonstrate significant experience in data science, machine learning, deep learning, and LLM-based applications, having worked on at least one significant AI application, either in a commercial or academic capacity, and be able to explain their work. Qualifying experience includes:
  • Active open-source contribution or coordination efforts in the area
  • Experience orchestrating dialog management and information retrieval systems
  • Strong applied software engineering expertise, especially surrounding microservices and inference server solutions
Candidates should have the following:
  • Strong proficiency in Python, including functional programming and server deployment
  • Expertise in large language models as inference endpoints, including industry use-cases.
  • Strong experience with modern LangChain (including LCEL) and LangServe is required; understanding of LangGraph, LlamaIndex, LangSmith, and NeMo Guardrails is useful (a minimal LCEL sketch follows this list).
  • Experience with microservice/server orchestration, including Docker and FastAPI.
  • Experience with modern RAG, including some derivative formulations and pros/cons.
  • Understanding of agentic behavior, tooling, and modular agent components.
  • Intuition for evaluation metrics and performance expectations.
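
For illustration, a minimal LCEL pipeline. This is a sketch assuming the langchain-core package; the RunnableLambda below is a stand-in for a real chat-model endpoint, not part of the course material.

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnableLambda
    from langchain_core.messages import AIMessage

    prompt = ChatPromptTemplate.from_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}"
    )

    # Stand-in "model": echoes part of the rendered prompt back as an AIMessage
    fake_llm = RunnableLambda(lambda pv: AIMessage(content="[stub] " + pv.to_string()[:80]))

    chain = prompt | fake_llm | StrOutputParser()
    print(chain.invoke({"context": "DLI workshops are instructor-led.",
                        "question": "What are DLI workshops?"}))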

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience
  • Significant presentation experience

Applications of AI for Anomaly Detection

Candidates must demonstrate significant experience in data science, machine learning, deep learning, and the telecommunications industry, having worked on at least one significant AI application, either in a commercial or academic capacity, and be able to explain their work. Qualifying experience includes:
  • A role as a major contributor to a project that used Deep Learning
  • A role as a major contributor to a project that used other Machine Learning techniques
  • A role as a major contributor to a project that required data science
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience

Candidates should also have the following:

  • Professional Data Science Experience using Python
  • A working understanding of NVIDIA RAPIDS
  • Significant experience in machine and deep learning, specifically the use of XGBoost, autoencoder, and GAN models (see the XGBoost sketch after this list)
  • Exposure to the telecommunications industry and cybersecurity, specifically networking and the threat of network intrusion.
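
For illustration, a minimal XGBoost sketch on stand-in network-flow data with a rare positive class. It assumes the xgboost and scikit-learn packages; all data and parameters are placeholders.

    import numpy as np
    import xgboost as xgb
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Stand-in features and a rare "intrusion" label (~5% positive)
    X = np.random.rand(5000, 20)
    y = (np.random.rand(5000) < 0.05).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Weight the minority class to offset the imbalance
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=6,
                            scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum())
    clf.fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
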

Applications of AI for Predictive Maintenance

Candidates must demonstrate experience working on at least one Deep Learning application, either in a commercial or academic capacity, and explain their work. Qualifying experience includes:
  • Deep Learning for time-series data, including work or research experience with variants of autoencoder models, recurrent models (LSTMs), and GANs
  • Measures of model accuracy, preferably in the context of industrial applications
  • Familiarity with machine learning techniques; a thorough understanding of the XGBoost algorithm is crucial to successful course delivery
  • Experience with at least one deep learning library (Keras and TensorFlow are preferred)
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience

Candidates should also have the following:

  • Familiarity with deep learning concepts (at a minimum, knowledge of artificial neural networks)
  • Proficiency in Python and common Python libraries used in DL (e.g., NumPy, pandas, scikit-learn)
  • Working knowledge of TensorFlow and Keras (see the sketch after this list)
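
For illustration, a minimal Keras LSTM autoencoder for time-series reconstruction. This is a sketch only; the window shapes and stand-in sensor data are placeholders.

    import numpy as np
    from tensorflow import keras

    timesteps, features = 50, 4
    model = keras.Sequential([
        keras.layers.Input(shape=(timesteps, features)),
        keras.layers.LSTM(32),                                    # encoder
        keras.layers.RepeatVector(timesteps),
        keras.layers.LSTM(32, return_sequences=True),             # decoder
        keras.layers.TimeDistributed(keras.layers.Dense(features)),
    ])
    model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(256, timesteps, features).astype("float32")    # stand-in sensor windows
    model.fit(x, x, epochs=2, batch_size=32, verbose=0)
    recon_error = np.mean((model.predict(x) - x) ** 2, axis=(1, 2))   # high error flags anomaly candidates
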

Building AI Based Cybersecurity Pipelines

Instructor qualifications:
Candidates must have professional experience in the domain of defensive cybersecurity and data analysis. Candidates should be able to discuss their work as it relates to topics such as:

  • Methods and tooling used in service of defensive cybersecurity for data collection, preparation, analysis, storage, etc.
  • Approaches to defending against and resolving common cybersecurity attacks such as DoS, phishing, hijacked accounts, etc.
  • Effective data analysis through the use of machine and deep learning models such as XGBoost, autoencoders, transformers, etc.
  • The use of GPU-accelerated libraries for data analysis, particularly those found in NVIDIA RAPIDS, including CLX.

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience - in person or remote
  • Significant presentation experience

Building Conversational AI Applications

Instructor qualifications:
Candidates must demonstrate experience working on at least one conversational AI application using automatic speech recognition (ASR) and natural language understanding (NLU), either in a commercial or academic capacity, and explain their work. Qualifying experience includes:

  • A professional role (Ex: Engineer, Data Scientist) on a conversational AI project that used an ASR model to transcribe spoken language and process it
  • A completed conversational AI project for a virtual assistant application
  • Academic coursework in conversational AI using neural networks
Candidates should have the following:
  • Basic Python competency including familiarity with variable types, loops, conditional statements, functions, array manipulations, and class objects/methods
  • Experience using TAO Toolkit and Riva
  • Basic Linux command line experience
  • Experience using Docker
  • Experience using Helm Charts and Kubernetes

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience - in person or remote
  • Significant presentation experience

Building Transformer Based Natural Language Processing

Instructor qualifications:
Candidates must demonstrate experience working on at least one Natural Language Processing application using a Transformer-based architecture (such as BERT), either in a commercial or academic capacity, and explain their work. Qualifying experience includes:

  • A professional role (Ex: Engineer, Data Scientist) on an NLP project that used a Transformer-based architecture
  • A completed NLP project that used a Transformer-based architecture
  • Academic coursework in Transformer-based NLP networks (a minimal BERT sketch follows this list)
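
For illustration, a minimal masked-language-model sketch of the kind of Transformer familiarity expected (assumes the transformers package; the bert-base-uncased checkpoint serves purely as an example):

    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")
    for pred in fill("Transformers rely on [MASK] attention over the input sequence."):
        print(round(pred["score"], 3), pred["token_str"])
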
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience - in person or remote
  • Significant presentation experience

Candidates should also have the following:

  • Basic Python competency including familiarity with variable types, loops, conditional statements, functions, array manipulations, and class objects/methods
  • Basic pandas and NeMo competency
  • Experience using NVIDIA Triton Inference Server

Computer Vision for Industrial Inspection

Instructor qualifications:
Candidates must demonstrate experience working on at least one Deep Learning application, either in a commercial or academic capacity, and explain their work. Qualifying experience includes:

  • Using Deep Learning techniques to tackle classification problems, preferably in the context of industrial applications.
  • A professional role on a computer vision project that used Deep Learning techniques.
  • Significant coursework in Deep Learning for computer vision that covers the various stages of the development workflow.

Candidates should have the following:

  • Proficiency in Python and common Python libraries used in DL (e.g., NumPy and pandas)
  • Familiarity with end-to-end machine learning workflow
  • Familiarity with manipulating data using pandas DataFrame
  • Familiarity with deep learning concepts including knowledge of convolutional neural networks
  • Familiarity with at least one deep learning framework (Keras and TensorFlow are preferred)
  • Familiarity with metrics such as accuracy and inference performance
  • Familiarity with the command-line interface and basic Linux commands
  • Familiarity with transfer learning and fine-tuning models (see the sketch after this list)
  • Knowledge of NVIDIA’s DALI, TAO Toolkit, TensorRT, and Triton Inference Server
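
For illustration, a minimal transfer-learning sketch in Keras. This is a sketch only; the backbone, input size, and two inspection classes are placeholder choices.

    from tensorflow import keras

    # Pretrained backbone with frozen weights; only the new classification head trains
    base = keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
    base.trainable = False

    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(2, activation="softmax"),   # e.g. pass / defect
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()
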

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience
  • Significant presentation experience

Data Parallelism: How to Train Deep Learning Models on Multiple GPUs

Instructor qualifications:
Candidates must demonstrate experience working on at least one Deep Learning application, either in a commercial or academic capacity, and explain their work. Qualifying experience includes:

  • Deploying deep learning training workloads to multiple GPUs and preferably multi-node clusters
  • Data Parallel approaches to distributed Deep Learning
  • Profiling and optimizing deep learning code
  • Using NGC containers
  • Experience building neural networks with PyTorch
  • Using PyTorch DDP to deploy distributed training (see the sketch after this list)
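
For illustration, a minimal PyTorch DDP training step. This is a sketch only, intended to be launched with torchrun on a multi-GPU node; the model and data are placeholders.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")            # torchrun sets RANK/WORLD_SIZE
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        device = f"cuda:{local_rank}"

        model = torch.nn.Linear(10, 1).to(device)
        ddp_model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

        x = torch.randn(32, 10, device=device)             # placeholder batch per rank
        y = torch.randn(32, 1, device=device)
        loss = torch.nn.functional.mse_loss(ddp_model(x), y)
        loss.backward()                                     # gradients are all-reduced across ranks
        opt.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()   # e.g. torchrun --nproc_per_node=<num_gpus> this_script.py
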

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience
  • Significant presentation experience

Candidates should have the following:

  • A good understanding of the literature on the implications of training deep neural networks with large batches, in particular the LARS/LARC algorithms
  • An understanding of the process used to train deep neural networks, in particular the stochastic gradient descent and backpropagation algorithms

Fundamentals of Deep Learning

Instructor qualifications:
Candidates must demonstrate experience working on a computer vision task - image classification, object detection, etc. - using deep learning in either a professional or academic setting. Foundational knowledge of natural language processing (NLP), reinforcement learning (RL) and other neural network architectures such as RNNs / LSTMs and GANs is required. Qualifying experience includes:

  • A professional role (Ex: Data Engineer, Data Scientist) architecting computer vision projects that use deep learning
  • Academic coursework in computer vision, NLP, RL and neural network architectures

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience - either via in-person or distance setting
  • Significant presentation experience

Candidates should also have the following:

  • Familiarity with basic programming fundamentals such as functions and variables
  • Basic Python competency

Model Parallelism: Building and Deploying Large Neural Networks

Instructor qualifications:
Candidates must demonstrate experience working on a model-parallelism-related task using deep learning in either a professional or academic setting. Foundational knowledge of optimization techniques such as activation checkpointing, mixed precision training, and gradient accumulation is required (a minimal sketch of mixed precision with gradient accumulation follows the list below). Qualifying experience includes:

  • A professional role (Ex: Data Engineer, Data Scientist) architecting deep learning projects that use distributed systems such as the cloud or multi-GPU machines.
  • Academic coursework in large neural network architectures such as GPT-3.
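
For illustration, a minimal mixed-precision plus gradient-accumulation loop in PyTorch. This is a sketch only; the model, batch sizes, and accumulation factor are placeholders.

    import torch

    model = torch.nn.Linear(512, 10).cuda()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()
    accum_steps = 4                                     # effective batch = 4 micro-batches

    for step in range(8):
        x = torch.randn(16, 512, device="cuda")         # placeholder micro-batch
        y = torch.randint(0, 10, (16,), device="cuda")
        with torch.cuda.amp.autocast():
            loss = torch.nn.functional.cross_entropy(model(x), y) / accum_steps
        scaler.scale(loss).backward()                    # accumulate scaled gradients
        if (step + 1) % accum_steps == 0:
            scaler.step(opt)
            scaler.update()
            opt.zero_grad(set_to_none=True)
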

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience - either via in-person or distance setting
  • Significant presentation experience

Candidates should also have the following:

  • An understanding of the Slurm, NVIDIA Triton and DeepSpeed technologies
  • An understanding of the differences between Model and Data Parallelism

Accelerating CUDA C++ Applications with Multiple GPUs

Instructor qualifications:
Candidates must demonstrate significant experience with multiple CUDA-accelerated applications, either in a professional or meaningful academic scenario, and be able to explain their work with these applications. These applications should involve the use of multiple GPUs and concurrent streams. Candidates should be able to discuss:

  • How their applications provide meaningful acceleration on a problem that could not be addressed as successfully in a CPU-only environment
  • The specifics of the optimization strategies the applications use
  • Specific CUDA-related technical challenges that arose while developing the applications
Candidates should have the following:
  • Advanced CUDA C++ experience.
  • Mastery of multiple techniques for performing copy/compute overlap in single and multiple GPU applications, and the ability to discuss them clearly in detail and at length.
  • Experience with Nsight Systems.

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience
  • Significant presentation experience

Fundamentals of Accelerated Computing with CUDA C/C++

Instructor qualifications:
Please provide some evidence of having worked significantly with a CUDA-accelerated application in the past, either in a professional or meaningful academic scenario, and be prepared to talk about your work with others. You should be able to discuss:

  • How your applications provide meaningful acceleration on a problem that could not be addressed as successfully in a CPU-only environment
  • The specifics of optimization strategies that the applications use
  • Specific CUDA-related technical challenges that arose while developing the applications

Candidates should also have the following:

Basic C/C++ competency including familiarity with variable types, loops, conditional statements, functions, and array manipulation.

Fundamentals of Accelerated Computing with CUDA Python

Instructor qualifications:
Please provide some evidence of having worked significantly with a CUDA-accelerated application in the past, either in a professional or meaningful academic scenario, and be prepared to talk about your work with others. You should be able to discuss:

  • How your applications provide meaningful acceleration on a problem that could not be addressed as successfully in a CPU-only environment
  • The specifics of optimization strategies that the applications use
  • Specific CUDA-related technical challenges that arose while developing the applications

Candidates should also have the following:

Basic Python competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations. Basic NumPy competency including familiarity with ndarrays and ufuncs.
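
For illustration, a minimal CUDA Python kernel written with Numba, one common CUDA Python toolchain. This is a sketch only; it assumes the numba package and an NVIDIA GPU, and the array sizes are placeholders.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)                 # global thread index
        if i < x.size:
            out[i] = x[i] + y[i]

    n = 1_000_000
    x = np.ones(n, dtype=np.float32)
    y = 2 * np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads = 128
    blocks = (n + threads - 1) // threads
    add_kernel[blocks, threads](x, y, out)   # Numba copies host arrays to/from the device
    print(out[:3])                           # [3. 3. 3.]
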

Scaling CUDA C++ Applications to Multiple Nodes

Instructor qualifications:
Candidates must have professional or academic experience developing CUDA C++ applications in the SPMD paradigm (MPI or SHMEM derivatives) on compute clusters, and should be able to discuss their work on these applications in detail. In particular, candidates should be able to discuss:

  • The technologies used to scale their application to multiple nodes
  • Details about the compute cluster used to deploy their applications, including intra- and inter-node networking
  • Inter-GPU and inter-node communication patterns required to successfully run the application
  • The reasoning behind communication patterns employed in the application
Candidates should have the following:
  • Experience with, or at least the ability to discuss and describe, prototypical scientific applications such as a Jacobi solver or a wave simulation

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience - in person or remote
  • Significant presentation experience

Accelerating Data Engineering Pipelines

Instructor qualifications:
Candidates must demonstrate experience working on at least one ETL data pipeline, either in a professional or meaningful academic scenario, and explain their work. Qualifying experience includes:

  • A professional role (Ex: Data Engineer, Data Scientist, Business Analyst) on an ETL pipeline
  • A completed data dashboard project from data source to front end visualization
  • Academic coursework in Computer Based Information Systems or Database Systems
Candidates should have the following:
  • Experience using cuDF and CuPy (see the sketch after this list)
  • Experience with a dashboarding tool (Plotly, Matplotlib, Tableau)
  • Mastery of MapReduce and DAG frameworks and the ability to explain them clearly
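
For illustration, a minimal cuDF sketch mirroring the pandas API. It assumes a RAPIDS install and an NVIDIA GPU; the data is a placeholder.

    import cudf

    gdf = cudf.DataFrame({
        "region": ["east", "west", "east", "west"],
        "latency_ms": [12.0, 20.5, 11.2, 25.1],
    })
    print(gdf.groupby("region").latency_ms.mean())   # GPU-accelerated group-by aggregation
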

Candidates must also demonstrate teaching experience, such as:

  • Classroom teaching experience
  • Significant presentation experience

Fundamentals of Accelerated Data Science with RAPIDS

Instructor qualifications:
Candidates must demonstrate significant experience with Data Science in Python and should be able to discuss the following about their previous work:

  • Specifics about all aspects of their end-to-end workflows, explaining their decisions, and speaking knowledgeably about tools and libraries used
  • The use of many DS/ML algorithms in their work, explaining their decisions
  • Extensive use of Python DS libraries like Pandas, NumPy, scikit-learn
  • Previous work with Dask (encouraged)
  • Previous work with or on RAPIDS (encouraged; see the sketch after this list)
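
For illustration, a minimal cuML sketch using its scikit-learn-style API. This is a sketch only; it assumes a RAPIDS install and an NVIDIA GPU, and the data is a placeholder.

    import numpy as np
    from cuml.cluster import KMeans

    X = np.random.rand(10_000, 8).astype("float32")   # stand-in feature matrix
    km = KMeans(n_clusters=5, random_state=0).fit(X)  # runs on the GPU
    print(km.cluster_centers_.shape)                  # (5, 8)
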
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience

Enhancing Data Science Outcomes with Efficient Workflows

Instructor qualifications:
Candidates must demonstrate significant experience with Data Science in Python using distributed computing for large datasets and should be able to discuss the following about their previous work:

  • Specifics about all aspects of their end-to-end workflows, explaining their decisions, and speaking knowledgeably about tools and libraries used
  • The use of various data transformations applied on input data for model consumption
  • The use of various Machine Learning algorithms in their work, explaining their decisions
  • Extensive use of Python Data Science libraries like pandas, NumPy, scikit-learn, and xgboost
  • Previous work with or on RAPIDS and Dask
  • Recognition of the iterative nature of Data Science and appreciation of hardware acceleration for rapid experimentation
Candidates should have the following:
  • Python and common Data Science libraries like pandas, NumPy, scikit-learn, and xgboost
  • Proficiency with DataFrame manipulation
  • Familiarity with distributed computing using Dask (see the sketch after this list)
  • Familiarity with end-to-end machine learning workflow
  • Proficiency with various Machine Learning models, specifically tree-based variants
  • Proficiency with model performance metrics such as accuracy and inference performance
  • Familiarity with model tuning and its benefits
  • Knowledge of NVIDIA’s RAPIDS, NVTabular, and Triton Inference Server
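
For illustration, a minimal Dask DataFrame sketch. It assumes the dask and pandas packages; the data is a placeholder.

    import pandas as pd
    import dask.dataframe as dd

    pdf = pd.DataFrame({"device": ["a", "b", "c", "d"] * 250_000,
                        "reading": range(1_000_000)})
    ddf = dd.from_pandas(pdf, npartitions=8)                 # partitioned, lazily evaluated
    print(ddf.groupby("device").reading.mean().compute())    # .compute() triggers parallel execution
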
Candidates must also demonstrate teaching experience, such as:
  • Classroom teaching experience
  • Significant presentation experience