Jay Rodge

Jay Rodge is a developer advocate for large language models (LLMs) at NVIDIA, where he demonstrates how developers can leverage GPU acceleration in their LLM workflows using tools and frameworks widely adopted by the developer community. Previously, Jay was a product marketing manager for data science and deep learning products at NVIDIA, driving launches and product marketing initiatives. Jay received his master's degree in computer science from Illinois Tech in Chicago. Before joining NVIDIA, he was an AI research intern at BMW Group, solving computer vision problems for BMW's largest manufacturing plant.

Posts by Jay Rodge

Generative AI

Generative AI Agents Developer Contest: Top Tips for Getting Started

Join our contest, running through June 17, and showcase your innovation by building cutting-edge generative AI-powered applications with NVIDIA and LangChain... 3 MIN READ
Data Science

RAPIDS cuDF Accelerates pandas Nearly 150x with Zero Code Changes

At NVIDIA GTC 2024, it was announced that RAPIDS cuDF can now bring GPU acceleration to 9.5 million pandas users without requiring them to change their code... 5 MIN READ
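The zero-code-change claim comes from cuDF's pandas accelerator mode, which intercepts pandas calls at import time. A minimal sketch, assuming the cudf.pandas accelerator is installed; the file and column names below are placeholders:

```python
# Enable GPU acceleration before importing pandas.
# In Jupyter, the equivalent is: %load_ext cudf.pandas
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # the pandas code below is unchanged

df = pd.read_csv("data.csv")
# Supported operations run on the GPU via cuDF, with automatic
# fallback to CPU pandas for anything not yet accelerated.
print(df.groupby("key")["value"].mean())
```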
Data Science

Accelerated Data Analytics: Machine Learning with GPU-Accelerated Pandas and Scikit-learn

If you are looking to take your machine learning (ML) projects to new levels of speed and scalability, GPU-accelerated data analytics can help you deliver... 14 MIN READ
Data Science

Optimizing and Serving Models with NVIDIA TensorRT and NVIDIA Triton

Imagine that you have trained your model with PyTorch, TensorFlow, or the framework of your choice, are satisfied with its accuracy, and are considering... 11 MIN READ
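For the PyTorch path, the post covers the Torch-TensorRT integration, which compiles a trained module into a TensorRT-optimized one. A minimal sketch, using a torchvision ResNet-50 as a stand-in model and assuming a CUDA GPU:

```python
import torch
import torch_tensorrt
from torchvision import models

model = models.resnet50(weights=None).eval().cuda()

# Compile the PyTorch module into a TensorRT-optimized module.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow FP16 TensorRT kernels
)

x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    print(trt_model(x).shape)  # same interface, lower latency
```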
Conversational AI

NVIDIA Announces TensorRT 8.2 and Integrations with PyTorch and TensorFlow

Today, NVIDIA released TensorRT 8.2, with optimizations for billion-parameter NLU models, including T5 and GPT-2, which are used for translation and text generation... 2 MIN READ
Conversational AI

Optimizing T5 and GPT-2 for Real-Time Inference with NVIDIA TensorRT

The transformer architecture has wholly transformed (pun intended) the domain of natural language processing (NLP). In recent years, many novel network... 9 MIN READ