Accelerated Computing, .NET, C++, CUDA
Hybridizer is a compiler from Altimesh that lets you program GPUs and other accelerators from C# code or .NET Assembly.
Artificial Intelligence, Autonomous Vehicles, DP4A, Inference, Mixed Precision, TensorRT
Autonomous driving demands safety and a high-performance computing solution to process sensor data with extreme accuracy.
Artificial Intelligence, C++, CUBLAS, CUDA, Deep Learning, Libraries, Linear Algebra
Matrix multiplication is a key computation within many scientific applications, particularly those in deep learning. Many operations in modern deep neural networks are either defined as matrix multiplications or can be cast as such.
Artificial Intelligence, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT
You’ve built, trained, tweaked, and tuned your model. Finally, you have a Caffe, ONNX, or TensorFlow model that meets your requirements.
Artificial Intelligence, Deep Learning, Inference, TensorFlow, TensorRT, Volta
NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.