Posts by Jay Rodge
Data Science
Mar 18, 2024
RAPIDS cuDF Accelerates pandas Nearly 150x with Zero Code Changes
At NVIDIA GTC 2024, NVIDIA announced that RAPIDS cuDF can now bring GPU acceleration to 9.5 million pandas users without requiring them to change their code....
5 MIN READ
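The "zero code changes" in this post refers to cuDF's pandas accelerator mode, which is enabled before importing pandas. A minimal sketch of that pattern, using a toy DataFrame that is not from the post:

```python
# Sketch of cuDF's pandas accelerator mode: install it before importing
# pandas, and supported pandas operations run on the GPU unchanged.
import cudf.pandas
cudf.pandas.install()  # patch pandas so supported operations use the GPU

import pandas as pd

# Illustrative data only; any existing pandas code works the same way.
df = pd.DataFrame({"key": ["a", "b", "a", "c"], "value": [1, 2, 3, 4]})
print(df.groupby("key")["value"].sum())  # dispatched to cuDF where supported
```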
Data Science
Jul 11, 2023
Accelerated Data Analytics: Machine Learning with GPU-Accelerated Pandas and Scikit-learn
If you are looking to take your machine learning (ML) projects to new levels of speed and scalability, GPU-accelerated data analytics can help you deliver...
14 MIN READ
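For context, the GPU-accelerated scikit-learn workflow described here uses cuML, which mirrors the scikit-learn estimator API. A hedged sketch with synthetic data (not an example from the post):

```python
# cuML follows the familiar fit/predict pattern of scikit-learn,
# but trains and predicts on the GPU.
import cupy as cp
from cuml.linear_model import LinearRegression

X = cp.random.rand(1000, 4)               # feature matrix on the GPU
y = X @ cp.array([1.0, 2.0, 3.0, 4.0])    # synthetic regression target

model = LinearRegression()
model.fit(X, y)                           # same API shape as scikit-learn
print(model.predict(X[:5]))
```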
Data Science
Jul 20, 2022
Optimizing and Serving Models with NVIDIA TensorRT and NVIDIA Triton
Imagine that you have trained your model with PyTorch, TensorFlow, or the framework of your choice, are satisfied with its accuracy, and are considering...
11 MIN READ
Conversational AI
Dec 02, 2021
NVIDIA Announces TensorRT 8.2 and Integrations with PyTorch and TensorFlow
Today NVIDIA released TensorRT 8.2, with optimizations for billion-parameter NLU models. These include T5 and GPT-2, used for translation and text generation,...
2 MIN READ
Conversational AI
Dec 02, 2021
Optimizing T5 and GPT-2 for Real-Time Inference with NVIDIA TensorRT
The transformer architecture has wholly transformed (pun intended) the domain of natural language processing (NLP). In recent years, many novel network...
9 MIN READ
Conversational AI
Nov 09, 2021
ICYMI: New AI Tools and Technologies Announced at NVIDIA GTC Keynote
At NVIDIA GTC this November, new software tools were announced that help developers build real-time speech applications, optimize inference for a variety of...
5 MIN READ