Fast INT8 Inference for Autonomous Vehicles with TensorRT 3

Artificial Intelligence, Autonomous Vehicles, Automotive, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 12 2017

Autonomous driving demands safety, and a high-performance computing solution to process sensor data with extreme accuracy.

Read more

NVIDIA SDK Updated With New Releases of TensorRT, CUDA, and More

Accelerated Computing, Artificial Intelligence, Features, Robotics, Cloud, CUDA, cuDNN, Higher Education/Academia, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 06 2017

At NIPS 2017, NVIDIA announced new software releases for deep learning and HPC developers. The latest SDK updates include new capabilities and performance optimizations for TensorRT, the CUDA Toolkit, and the new CUTLASS library.

Read more

RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Features, Cloud, Data Center, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 05 2017

Once you have built, trained, tweaked, and tuned your deep learning model, you need an inference solution to deploy to a data center or to the cloud, and you need to get the maximum possible performance from it.

Read more

NVIDIA Deep Learning Inference Platform Performance Study

Artificial Intelligence, Cloud, Cluster/Supercomputing, CUDA, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 04 2017

The NVIDIA deep learning platform spans from the data center to the network’s edge.

Read more

TensorRT 3: Faster TensorFlow Inference and Volta Support

Features, Deep Learning, Inference, TensorFlow, TensorRT, Volta

Nadeem Mohammad, posted Dec 04 2017

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.

Read more
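Much of the inference speedup discussed in the INT8 posts above comes from running networks in reduced precision. As a rough conceptual illustration only (a plain-NumPy sketch, not TensorRT's actual calibration algorithm or API), symmetric per-tensor INT8 quantization can be written as:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    # One scale factor for the whole tensor, chosen so the max value maps to 127.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Rounding error per element is at most half a quantization step.
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

TensorRT itself chooses scales via a calibration step over representative data rather than a simple max, but the underlying idea is the same: trade a small, bounded precision loss for much higher arithmetic throughput (e.g. via DP4A instructions).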

Speed to Safety: Autonomous RC Car Aids Emergency Evacuation

Research, Automotive, Autonomous, cuDNN, Image Recognition, Jetson, Machine Learning & Artificial Intelligence, Quadro, TensorRT, Tesla

Nadeem Mohammad, posted Aug 04 2017

By Abhinav Ayalur, Isaac Wilcove, Lynn Dang, and Ricky Avina

The alarm is ringing. You smell smoke and see people running for the exit, but you don’t do the same. Why? Because you’re the fire marshal.

Read more

Coming Right Up! High Schoolers Build Indoor Delivery Robot with NVIDIA Jetson TX2

Research, Autonomous, DIGITS, Jetson, Machine Learning & Artificial Intelligence, Robotics, TensorRT

Nadeem Mohammad, posted Aug 02 2017

By Grace Lam, Mokshith Voodarla, and Nicholas Liu

How long does it take to program an office delivery robot? Apparently, less than seven weeks.

Read more

Inferencing Images 100x Faster with GPUs and TensorRT

Research, Higher Education/Academia, Image Recognition, Machine Learning & Artificial Intelligence, Robotics, TensorRT, Tesla

Nadeem Mohammad, posted Jul 25 2017

At this week’s Computer Vision and Pattern Recognition conference, NVIDIA demonstrated how one Tesla V100 running NVIDIA TensorRT can perform a common inferencing task 100X faster than a system without GPUs.

Read more