NVIDIA SDK Updated With New Releases of TensorRT, CUDA, and More

Accelerated Computing, Artificial Intelligence, Features, Robotics, Cloud, CUDA, cuDNN, Higher Education/Academia, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 06 2017

At NIPS 2017, NVIDIA announced new software releases for deep learning and HPC developers. The latest SDK updates include new capabilities and performance optimizations for TensorRT and the CUDA Toolkit, along with the new CUTLASS library.


CUTLASS: Fast Linear Algebra in CUDA C++

Features, C++, CUBLAS, CUDA, Deep Learning, Libraries, Linear Algebra

Nadeem Mohammad, posted Dec 05 2017

Matrix multiplication is a key computation within many scientific applications, particularly those in deep learning. Many operations in modern deep neural networks are either defined as matrix multiplications or can be cast as such.
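To make the claim concrete: a fully-connected layer's forward pass over a batch is itself a matrix multiplication. A minimal, naive C++ GEMM (an illustrative sketch, not CUTLASS code — CUTLASS provides tiled CUDA C++ kernels for the same contract) shows the computation that these libraries accelerate:

```cpp
#include <vector>
#include <cstddef>

// Naive single-threaded GEMM: C = A * B, all matrices row-major.
// A is M x K, B is K x N, the result C is M x N.
// Illustrative only -- real implementations (cuBLAS, CUTLASS) tile
// the loops across threads and exploit shared memory and tensor cores.
std::vector<float> gemm(const std::vector<float>& A,
                        const std::vector<float>& B,
                        std::size_t M, std::size_t K, std::size_t N) {
    std::vector<float> C(M * N, 0.0f);
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t k = 0; k < K; ++k)
            for (std::size_t j = 0; j < N; ++j)
                // Accumulate the dot product of row i of A with column j of B.
                C[i * N + j] += A[i * K + k] * B[k * N + j];
    return C;
}
```

A fully-connected layer is exactly this call: the batch of input activations is A (batch × features_in) and the weight matrix is B (features_in × features_out). Convolutions can likewise be lowered to GEMM, which is why matrix multiplication dominates deep learning workloads.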


RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Features, Cloud, Data Center, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 05 2017

Once you have built, trained, tweaked, and tuned your deep learning model, you need an inference solution you can deploy to a data center or to the cloud with the maximum possible performance.


RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Features, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT

Nadeem Mohammad, posted Dec 05 2017

You’ve built, trained, tweaked, and tuned your model. Finally, you have a Caffe, ONNX, or TensorFlow model that meets your requirements.


NVIDIA at NIPS 2017

Artificial Intelligence, Features, Cloud, GeForce, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 04 2017

NVIDIA is headed to NIPS (Neural Information Processing Systems) and we can’t wait to show you our latest AI innovations.
