The CUDA and GPU Computing Ecosystem extends beyond the CUDA Toolkit, Libraries and Tools. These pages contain links to key technologies from NVIDIA and partners that help developers be more efficient and produce better products.
|GPUDirect enables third-party network adapters, solid-state drives (SSDs), and other devices to read and write CUDA host and device memory directly, significantly reducing data transfer times on NVIDIA Tesla™ and Quadro™ products. GPUDirect technology also includes direct transfers between GPUs.|
|LLVM is the open source compiler infrastructure on which NVIDIA's CUDA Compiler (NVCC) is based. Using the CUDA Compiler SDK, developers can create or extend programming languages with support for GPU acceleration.|
|MPI Solutions for GPUs. MPI is the industry-standard API that enables application processes to communicate across compute nodes. The latest implementations of this technology now support GPU-accelerated nodes.|
Other Key Technology Links
Have a problem with your application, or want to share some tips?
Try posting on the CUDA Developer forums and benefit from the collective wisdom of thousands of GPU developers.
Check out the rest of the CUDA Tools and Ecosystem.