Developer Resources For Financial Services
A hub of news, SDKs, technical resources, and more for developers working in the financial services industry.
App Frameworks and SDKs
CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
In GPU-accelerated applications, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion of the application runs on thousands of GPU cores in parallel.
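Below is a minimal sketch of that split, using Numba's CUDA support for Python: the data setup runs on the CPU, while the element-wise kernel is launched across many GPU threads. The kernel, array size, and launch configuration are illustrative assumptions.
```python
# Minimal sketch: the CPU sets up the data, GPU threads do the element-wise work.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, a, factor):
    i = cuda.grid(1)                 # global thread index
    if i < a.shape[0]:               # guard threads past the end of the array
        out[i] = a[i] * factor       # each GPU thread handles one element

a = np.arange(1_000_000, dtype=np.float32)   # sequential setup on the CPU
out = np.zeros_like(a)

threads_per_block = 256
blocks = (a.shape[0] + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](out, a, 2.0)   # parallel work on the GPU
```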
Download Now
RAPIDS
The RAPIDS suite of open-source software libraries and APIs gives you the ability to execute end-to-end data science and analytics pipelines entirely on GPUs. Licensed under Apache 2.0, RAPIDS is incubated by NVIDIA® based on extensive hardware and data science experience. RAPIDS utilizes NVIDIA CUDA® primitives for low-level compute optimization and exposes GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
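As a minimal sketch of that Python interface, the cuDF snippet below runs a pandas-style group-by entirely on the GPU; the column names and values are illustrative assumptions.
```python
# Minimal sketch: a pandas-like group-by that executes on the GPU via cuDF.
import cudf

trades = cudf.DataFrame({
    "ticker": ["AAPL", "AAPL", "MSFT", "MSFT"],   # illustrative data
    "price":  [150.0, 151.5, 250.0, 248.5],
})

mean_price = trades.groupby("ticker").price.mean()   # aggregation runs on the GPU
print(mean_price)
```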
Get Started
Automatic Mixed Precision
Deep neural network training has traditionally relied on the IEEE single-precision format. With mixed precision, however, you can train with half precision while maintaining the network accuracy achieved with single precision. This approach of using both single- and half-precision representations is referred to as mixed-precision training.
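As a minimal sketch of the idea, the snippet below enables automatic mixed precision in a PyTorch training loop, one common framework integration; the tiny model, data, and hyperparameters are placeholders.
```python
# Minimal sketch: mixed-precision training with PyTorch automatic mixed precision (AMP).
import torch

model = torch.nn.Linear(128, 1).cuda()                # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                  # scales the loss to avoid FP16 underflow

for _ in range(100):
    x = torch.randn(64, 128, device="cuda")           # placeholder batch
    y = torch.randn(64, 1, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()                     # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()
```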
Learn More
Triton Inference Server
NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. Triton is open-source inference serving software that lets teams deploy trained AI models from many frameworks, including TensorFlow, TensorRT, PyTorch, and ONNX.
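As a minimal sketch, the snippet below sends a request to a running Triton server with its Python HTTP client; the server address, model name, and tensor names are illustrative assumptions.
```python
# Minimal sketch: query a Triton server over HTTP (names and shapes are assumptions).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("INPUT__0", [1, 128], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 128).astype(np.float32))

response = client.infer(model_name="fraud_detector", inputs=[inp])
scores = response.as_numpy("OUTPUT__0")
print(scores)
```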
Learn More
TensorRT
NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.
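As a minimal sketch, the snippet below imports an ONNX model and builds an optimized engine with the TensorRT Python API; it assumes a TensorRT 8.x-style API, and the model path and precision flag are placeholders.
```python
# Minimal sketch: build a TensorRT engine from an ONNX model (TensorRT 8.x-style API assumed).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:                   # placeholder model path
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)                 # allow reduced precision for speed
engine = builder.build_serialized_network(network, config)   # optimized inference engine
```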
Learn More
cuDNN
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
Learn More
Jarvis
NVIDIA Jarvis is an SDK for building and deploying AI applications that fuse vision, speech and other sensors. It offers a complete workflow to build, train and deploy GPU-accelerated AI systems that can use visual cues such as gestures and gaze along with speech in context.
Learn More
JetPack
NVIDIA JetPack SDK is the most comprehensive solution for building AI applications. Use NVIDIA SDK Manager to flash your Jetson developer kit with the latest OS image, install developer tools for both host computer and developer kit, and install the libraries and APIs, samples, and documentation needed to jumpstart your development environment.
Learn More
NVIDIA Collective Communications Library
The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnects.
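NCCL itself is a C/C++ library; one common way to use it from Python is through a framework such as PyTorch's distributed package with the NCCL backend. The sketch below assumes one process per GPU, for example launched with torchrun.
```python
# Minimal sketch: an NCCL all-reduce via PyTorch's distributed package (one process per GPU assumed).
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")       # NCCL handles the GPU-to-GPU transport
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

t = torch.ones(1024, device="cuda") * dist.get_rank()
dist.all_reduce(t, op=dist.ReduceOp.SUM)      # all-reduce over NVLink/PCIe
print(dist.get_rank(), t[0].item())
```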
Learn More
Browse by Resource Type
Financial Data Modeling with RAPIDS
Financial data sets are usually anonymized to protect customer privacy. Sometimes even the column names of the tabular data are encoded, which can prevent feature engineering based on domain knowledge. Learn how RAPIDS can help you create better models.
Read Blog
GPU-Accelerated Examples for Quantitative Analyst Tasks
Walk through an example that shows how simple it is to accelerate the quant workflow on the GPU and visualize the data flow.
Read Blog
Fast Fractional Differencing on GPUs Using Numba and RAPIDS
Fractional differencing is widely used in the financial services industry to prepare training data for machine learning algorithms that generate stock trading signals.
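As a minimal sketch of the technique, the snippet below computes fixed-window fractional-differencing weights and applies them with NumPy on the CPU; the blog's Numba/RAPIDS version moves the same computation to the GPU. The price series and differencing order are illustrative assumptions.
```python
# Minimal sketch: fixed-window fractional differencing on the CPU
# (weights follow w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k).
import numpy as np

def fracdiff_weights(d, window):
    w = [1.0]
    for k in range(1, window):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w[::-1])                  # oldest-to-newest order

def fracdiff(series, d, window=100):
    w = fracdiff_weights(d, window)
    out = np.full(series.shape, np.nan)
    for i in range(window - 1, len(series)):
        out[i] = np.dot(w, series[i - window + 1:i + 1])
    return out

prices = 100.0 + np.cumsum(np.random.randn(500))      # illustrative price series
stationary = fracdiff(np.log(prices), d=0.4)          # differencing order is an assumption
```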
Read Blog
Accelerated Python in Banking
RAPIDS is an open-source platform, incubated at NVIDIA, for GPU-accelerated data science. It’s transforming many areas of the financial services industry, including setting the performance record on a representative benchmark designed to evaluate platforms for backtesting trading strategies. Learn how financial institutions are leveraging the RAPIDS platform.
View Webinar
Advancing Financial Services with Conversational AI
Natural Language Processing (NLP) is a critical part of building better chatbots and AI assistants. Among the numerous language models used in NLP-based applications, BERT has emerged as a leader due to its innovative use of machine learning, rapid iteration, and ease of use. Learn how the BERT language model is used for NLP.
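As a minimal sketch of BERT in action, the snippet below runs a masked-word prediction with a pretrained model; the Hugging Face transformers library used here is an assumption and not part of the webinar content.
```python
# Minimal sketch: masked-word prediction with a pretrained BERT model
# (Hugging Face transformers is an assumption, not part of the webinar).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Please transfer the funds to my [MASK] account."):
    print(candidate["token_str"], round(candidate["score"], 3))
```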
View Webinar
Deep Learning in Asset Pricing
Stanford University researchers use deep neural networks to estimate an asset pricing model for individual stock returns, taking advantage of a vast amount of conditioning information while keeping a fully flexible form and accounting for time variation. Their key innovations include constructing the most informative test assets with an adversarial approach and extracting the states of the economy from many macroeconomic time series.
View Webinar
NVIDIA DEEP LEARNING INSTITUTE

The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI and accelerated computing to solve real-world problems. Training is available as self-paced, online courses or in-person, instructor-led workshops.
Fundamentals of Accelerated Data Science with RAPIDS
You’ll learn how to:
- Perform multiple analysis tasks on large datasets using RAPIDS
- Use cuDF, Dask, and BlazingSQL to evaluate datasets
- Utilize cuML algorithms to perform data analysis at massive scale (see the sketch after this list)
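A minimal sketch of a cuML algorithm running on GPU data, using k-means clustering on a toy cuDF DataFrame; the data and cluster count are illustrative assumptions.
```python
# Minimal sketch: k-means clustering on the GPU with cuML (toy data is an assumption).
import cudf
from cuml.cluster import KMeans

df = cudf.DataFrame({
    "x": [1.0, 1.1, 5.0, 5.2, 9.0, 9.1],
    "y": [1.0, 0.9, 5.1, 5.0, 9.2, 9.0],
})

km = KMeans(n_clusters=3)
km.fit(df)                                    # training runs on the GPU
print(km.labels_)
```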
Fundamentals of Accelerated Computing with CUDA Python
You’ll learn how to:
- Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs), as in the sketch after this list
- Use Numba to create and launch custom CUDA kernels
- Apply key GPU memory management techniques
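A minimal sketch of the first objective, compiling a NumPy-style universal function for the GPU with Numba; the option-payoff function and inputs are illustrative assumptions.
```python
# Minimal sketch: a GPU-compiled ufunc with Numba (payoff function is an assumption).
import numpy as np
from numba import vectorize

@vectorize(["float32(float32, float32)"], target="cuda")
def call_payoff(spot, strike):
    return spot - strike if spot > strike else 0.0   # evaluated element-wise on the GPU

spots = np.random.uniform(80.0, 120.0, 1_000_000).astype(np.float32)
strikes = np.full_like(spots, 100.0)
payoffs = call_payoff(spots, strikes)
```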
Building Intelligent Recommender Systems
You’ll learn how to:
- Build a content-based recommender system using the open-source cuDF library and Apache Arrow
- Optimize performance for both training and inference using large, sparse datasets
- Deploy a recommender model as a high-performance web service
Accelerating Data Science Workflows with RAPIDS
You’ll learn how to:
- Use cuDF to manipulate massive datasets directly on the GPU
- Utilize a wide variety of GPU-accelerated machine learning algorithms, including XGBoost and several cuML algorithms, to perform data analysis (see the sketch after this list)
- Perform end-to-end analysis tasks using several realistic datasets
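A minimal sketch of GPU-accelerated XGBoost training on synthetic data; the parameters are illustrative assumptions, and newer XGBoost releases express the same choice as device="cuda".
```python
# Minimal sketch: GPU-accelerated gradient boosting with XGBoost (synthetic data is an assumption).
import numpy as np
import xgboost as xgb

X = np.random.rand(10_000, 20).astype(np.float32)
y = (X[:, 0] > 0.5).astype(np.int32)                  # synthetic binary labels

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",                        # train the trees on the GPU
    "max_depth": 6,
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```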
NVIDIA Financial Services News

March 18, 2020
Developer Blog: Accelerating Python for Exotic Option Pricing

July 16, 2019
Introduction to GPU Accelerated Python for Financial Services

May 13, 2019
NVIDIA DGX-2 Helps Accelerate Key Algorithm for Hedge Funds by 6,000x

March 6, 2018
On Demand Webinar: Deep Learning Demystified
View all financial services news