Developer Resources for Financial Services
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
In GPU-accelerated applications, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion of the application runs on thousands of GPU cores in parallel.
The RAPIDS suite of open-source software libraries and APIs gives you the ability to execute end-to-end data science and analytics pipelines entirely on GPUs. Licensed under Apache 2.0, RAPIDS is incubated by NVIDIA® based on extensive hardware and data science experience. RAPIDS utilizes NVIDIA CUDA® primitives for low-level compute optimization, and exposes GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
Deep neural network training has traditionally relied on the IEEE single-precision format. With mixed precision, however, you can train with half precision while maintaining the network accuracy achieved with single precision. This approach of using both single- and half-precision representations is referred to as mixed-precision training.
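To see why mixed-precision training keeps a single-precision master copy of the weights, it helps to compare the rounding error of the two formats directly. The sketch below uses only Python's standard `struct` module (the `e` and `f` format codes are IEEE 754 half and single precision); it is an illustration of the numeric trade-off, not the training technique itself:

```python
import struct

def roundtrip(value, fmt):
    """Pack a float into the given IEEE 754 format, then unpack it back."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1.0 / 3.0
half = roundtrip(x, "<e")    # half precision (FP16): 10 fraction bits
single = roundtrip(x, "<f")  # single precision (FP32): 23 fraction bits

err_half = abs(half - x)
err_single = abs(single - x)
# FP16 rounds far more coarsely than FP32, which is why mixed precision
# computes in FP16 for speed but accumulates updates in FP32 for accuracy.
print(err_half > err_single)
```

The thousands-fold gap in rounding error is the reason gradients and weight updates are accumulated in single precision even when the bulk of the arithmetic runs in half precision.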
The NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. Triton is open-source inference-serving software that lets teams deploy trained AI models from many frameworks, including TensorFlow, TensorRT, PyTorch, and ONNX.
NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
NVIDIA Jarvis is an SDK for building and deploying AI applications that fuse vision, speech and other sensors. It offers a complete workflow to build, train and deploy GPU-accelerated AI systems that can use visual cues such as gestures and gaze along with speech in context.
NVIDIA JetPack SDK is the most comprehensive solution for building AI applications. Use NVIDIA SDK Manager to flash your Jetson developer kit with the latest OS image, install developer tools for both host computer and developer kit, and install the libraries and APIs, samples, and documentation needed to jumpstart your development environment.
The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnects.
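The semantics of the most common of these primitives, all-reduce, can be sketched in a few lines of plain Python: every rank contributes a buffer, and every rank receives the element-wise reduction of all buffers. This is only an illustration of what the collective computes; real NCCL executes it across GPUs over NVLink or PCIe, not over Python lists:

```python
def all_reduce_sum(buffers):
    """Sketch of all-reduce (sum) semantics: element-wise sum across ranks,
    with every rank receiving its own copy of the full result."""
    reduced = [sum(values) for values in zip(*buffers)]
    return [list(reduced) for _ in buffers]  # one identical copy per rank

# Three "ranks", each holding a local gradient buffer of the same shape.
rank_buffers = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
results = all_reduce_sum(rank_buffers)
print(results[0])  # every rank now holds [111, 222, 333]
```

In data-parallel training this is exactly the step that sums per-GPU gradients so all replicas apply the same weight update.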
Financial data sets are usually anonymized to protect a customer's privacy. Sometimes even the column name of the tabular data is encoded, which can prevent feature engineering using domain knowledge. Learn how RAPIDS can help to create better models.
Learn from an example how simple it is to accelerate the quant workflow on the GPU and visualize the data flow.
Fractional differencing is widely used today in the financial services industry for preparing training data for machine learning algorithms to generate signals for stock trading.
RAPIDS is an open-source platform, incubated at NVIDIA, for GPU-accelerated data science. It’s transforming many areas of the financial services industry, including setting the performance record on a representative benchmark designed to evaluate platforms for backtesting trading strategies. Learn how financial institutions are leveraging the RAPIDS platform.
Natural Language Processing (NLP) is a critical part of building better chatbots and AI assistants. Among the numerous language models used in NLP-based applications, BERT has emerged as a leader due to its innovative use of machine learning, rapid iteration and ease of use. Learn How the BERT language model is used for NLP.
Carriers need to move beyond traditional "after the fact" claims management by embracing digital opportunities and adopting a fully analytics-driven approach. This approach should include automating claims handling for simple and clean cases, implementing AI-aided services to fast-track complex cases, and creating new digital services to increase customer satisfaction. Learn about the types of customer journeys that can be re-imagined with AI.
Machine Learning at Bloomberg: Building on Kubernetes
The Bloomberg Terminal provides data, analytics, news, information, and communication for professionals in business and finance. Learn how Bloomberg is using their internal machine learning platform to apply advanced AI and GPU-accelerated compute to dozens of domains such as NLP, computer vision, time-series analysis, and personalization. Discover how they evaluated and designed the core components of the ML platform.
Deep Learning to Predict Regime Changes in Financial Markets
Applying deep learning to identify market regimes can be valuable in helping anticipate and position a portfolio for significant structural shifts in the market. Learn how Cohen & Steers develops deep neural networks, including time delay and recurrent neural networks, and train them to identify and target intervals that delineate market state changes such as factor-based trends (e.g. growth vs. value), volatility regimes, and economic cycles.
How GPUs Speed the Analysis of Risk, Fraud Detection and Trader Surveillance
Financial companies analyze data using NVIDIA GPU acceleration, impacting real-time risk management, regulatory reporting, fraud detection and cybersecurity, anti-money laundering, and trader surveillance. Learn about real-world examples from Kinetica, including how a specific multinational bank uses a real-time risk management engine running on GPU cloud instances.
NVIDIA DEEP LEARNING INSTITUTE
The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI and accelerated computing to solve real-world problems. Training is available as self-paced, online courses or in-person, instructor-led workshops.
Fundamentals of Accelerated Data Science with RAPIDS
You’ll learn how to:
- Perform multiple analysis tasks on large datasets using RAPIDS
- Use cuDF, Dask, and BlazingSQL to evaluate datasets
- Utilize cuML algorithms to perform data analysis at massive scale
Fundamentals of Accelerated Computing with CUDA Python
You’ll learn how to:
- Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
- Use Numba to create and launch custom CUDA kernels
- Apply key GPU memory management techniques
NVIDIA Finance News
Introduction to GPU Accelerated Python for Financial Services
Quantitative finance is commonly defined as the use of mathematical models and large datasets to analyze financial markets and securities. This field requires massive computational effort to extract knowledge from raw data.
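A classic example of that computational effort is Monte Carlo option pricing: simulate many price paths, average the discounted payoffs. The sketch below is a deliberately simple single-threaded version in pure Python (all names are illustrative); on a GPU, the per-path loop is exactly the part that would run across thousands of cores in parallel:

```python
import math
import random

def mc_european_call(s0, strike, rate, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        # Terminal price for one simulated path.
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * t) * payoff_sum / n_paths

price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 100_000)
print(round(price, 2))  # should land near the Black-Scholes value of about 10.45
```

Because every path is independent, this workload maps naturally onto GPU-accelerated Python libraries such as those discussed above.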
NVIDIA DGX-2 Helps Accelerate Key Algorithm for Hedge Funds by 6000x
Using an NVIDIA DGX-2 system running GPU-accelerated Python libraries (NVIDIA CUDA-X AI software, together with RAPIDS and Numba), NVIDIA broke the previous benchmark record for a key algorithm used by hedge funds to backtest trading strategies.
Sign up for the latest developer news from NVIDIA.