Jetson Community Projects
Explore and learn from Jetson projects created by us and our community. This is a collection of cool projects, applications, and demos that use the NVIDIA Jetson platform, all built for Jetson developer kits. Scroll down for projects with code, videos, instructions and more.
JetBot
An open-source project for learning AI by building fun applications. It’s easy to set up and use, is compatible with many accessories, and includes interactive tutorials showing you how to harness the power of AI to follow objects, avoid collisions and more. The kit includes the complete robot chassis, wheels, and controllers, along with a battery and an 8MP camera. It supports AI frameworks such as TensorFlow and PyTorch.
Hello AI World
Start using Jetson and experience the power of AI. In a couple of hours you can have a set of deep learning inference demos up and running for realtime image classification and object detection, using pretrained models on your Jetson Developer Kit with JetPack SDK and NVIDIA TensorRT. We focus on networks related to computer vision, including the use of live cameras, and you'll also code your own easy-to-follow recognition program in C++.
JetRacer is an autonomous AI racecar built on the NVIDIA Jetson Nano.
Real-time Human Pose Estimation
This project features multi-instance pose estimation accelerated by NVIDIA TensorRT, and is ideal for applications where low latency is necessary.
Have a Jetson project to share? Post it on our forum for a chance to be featured here too. Every month, we’ll award one Jetson AGX Xavier Developer Kit to a project that’s a cut above the rest for its application, inventiveness and creativity.
Real-Time 3D Traffic Cone Detection for Autonomous Driving
A Dhall, D Dai, L Van Gool, AMZFormulaStudent
Considerable progress has been made in semantic scene understanding of road scenes with monocular cameras, although it generally focuses on certain specific classes such as cars, bicyclists and pedestrians. This work investigates traffic cones, an object category crucial for traffic control in the context of autonomous vehicles. 3D object detection using images from a monocular camera is intrinsically an ill-posed problem. We propose a pipelined approach whose method runs efficiently on the low-power Jetson TX2, providing accurate 3D position estimates and allowing a race-car to map and drive autonomously on an unseen track marked out by traffic cones. With the help of robust and accurate perception, our race-car won both Formula Student Competitions held in Italy and Germany in 2018, cruising at a top speed of 54 km/h on our driverless platform "gotthard driverless".
A.I. Activated Wolverine Claws
Quite a few YouTubers have made mechanical extending wolverine claws, but I wanted to make Wolverine Claws that extend when I feel like it, just like in the X-Men movies. I've trained a deep learning neural network on the NVIDIA Jetson Nano with Jetson Inference to recognise when I'm pulling the right face and activate the cosplay Wolverine Claws. Is this the future of cosplay? You decide!
Simple A.I. Demo with Jetson Nano
I'm trying out training a really simple machine learning model using transfer learning on the NVIDIA Jetson Nano with Jetson Inference. I used a very minimal dataset of images, captured and trained using scripts provided by NVIDIA, then wrote a simple script to make the robot look for high-contrast markers in turn.
Energy Prediction System
Energy prediction system with a hybrid neural network (CNN-LSTM) on a Jetson Nano. In this project we build an active power meter with an Arduino Uno. The data is sent to the Jetson by the Python script arduino_serial.py, which establishes communication between the Jetson and the Arduino. The second script, neural_training.py, starts the training of the hybrid neural network and visualizes the data. Use visualize.py to visualize the predictions from the .h5 file saved after deep learning training.
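As a hedged sketch of what the serial link might carry (the "voltage,current" CSV format and function name below are illustrative assumptions, not the project's actual protocol), active power can be computed from each Arduino reading like this:

```python
def parse_power_sample(line):
    """Parse one hypothetical 'voltage,current' serial line from the
    Arduino and return active power in watts, or None if malformed."""
    try:
        voltage, current = (float(x) for x in line.strip().split(","))
    except ValueError:
        return None  # skip noisy or partial serial lines
    return voltage * current

# Malformed lines are dropped instead of crashing the logger.
readings = [parse_power_sample(l) for l in ["230.0,0.5", "noise", "229.5,1.2"]]
```

Robust parsing matters here because serial reads often yield partial lines at startup.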
A Jetson-based DeepStream application that identifies areas of high risk through intuitive heat maps: a heatmap is generated continuously, representing regions where faces have been detected recently, letting us see how risk builds up over time. The application is containerized and uses DeepStream as the backbone to run TensorRT-optimized models for maximum throughput. Built on top of the deepstream-imagedata-multistream sample app.
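The core idea, a detection heat map that fades over time so recent activity stands out, can be sketched in a few lines (the grid size, decay factor and class below are illustrative assumptions, not the app's DeepStream code):

```python
class HeatMap:
    """Grid heat map with exponential decay: recent face detections
    glow brighter than old ones, so "risk" fades as time passes."""
    def __init__(self, w, h, decay=0.95):
        self.decay = decay
        self.grid = [[0.0] * w for _ in range(h)]

    def update(self, detections):
        # Fade all existing heat, then stamp in the new detections.
        for row in self.grid:
            for x in range(len(row)):
                row[x] *= self.decay
        for x, y in detections:
            self.grid[y][x] += 1.0
```

Each video frame would call `update` once with that frame's detected face centers.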
Project of the Month October 2020
Advanced driver-assistance system using Jetson Nano
An ADAS system that uses Jetson Nano as the hardware with four main functions: forward collision warning, lane departure warning, traffic sign recognition and overspeed warning. I trained and optimized three deep neural networks to run simultaneously on Jetson Nano (CenterNet-ResNet18 for object detection, U-Net for lane line segmentation and ResNet-18 for traffic sign classification).
Human Pose Estimation & Posture Corrector App
This app uses pose estimation to help users correct their posture by alerting them when they are slouching, leaning, or tilting their head down. You'll learn how to set up the Human Pose model and how to deploy the Posture Corrector app on the NVIDIA Jetson Nano.
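A minimal sketch of the slouch check, assuming hypothetical (x, y) pixel keypoints for the ear and shoulder with y increasing downward (the real app works from the Human Pose model's keypoints):

```python
import math

def neck_angle(ear, shoulder):
    """Angle in degrees of the ear-shoulder line from vertical.
    Keypoints are hypothetical (x, y) pixels, y increasing downward."""
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # positive when the ear is above the shoulder
    return abs(math.degrees(math.atan2(dx, dy)))

def is_slouching(ear, shoulder, threshold_deg=25.0):
    """Alert when the head leans too far from upright."""
    return neck_angle(ear, shoulder) > threshold_deg
```

The threshold is a tunable assumption; a real app would calibrate it per user.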
Fire Detecting Drone
An autonomous drone to combat wildfires running on an NVIDIA Jetson Nano Developer Kit. This project uses a camera and a GPU-accelerated Neural Network as a sensor to detect fires.
Project of the Month September 2020
DR-SPAAM: Person Detection in 2D Range Data
D Jia, A Hermans, B Leibe
DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data, to appear in IROS'20. We’ve built a deep learning-based person detector from 2D range data. It runs on a Jetson AGX at 20+ Hz, or on a laptop with an RTX 2080 at 90+ Hz. Check out the links below for more information.
DeepStream ❤️ OSC
I'm using the DeepStream SDK for Jetson Nano as an instrument to sonify and visualize detected objects in real time. My idea was to turn public spaces into interactive, playable places where I can use people or vehicles as input for performances or installations. Any software that accepts OSC as input can use this data to control its parameters: sound or visual programming frameworks, videogames, emulators, whatever you can imagine. It is also possible to translate OSC to HID or MIDI messages to extend the range of software DeepStream can communicate with.
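For the curious, an OSC message is just a NUL-padded address, a type-tag string and big-endian arguments; a minimal encoder following the OSC 1.0 spec might look like this (the /object/0 address is a made-up example, not this project's namespace):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """NUL-terminate and pad to a 4-byte boundary (OSC 1.0 string rule)."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())  # type tags, e.g. ",ff"
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# e.g. send a detected object's normalized (x, y) position to /object/0
packet = osc_message("/object/0", 0.5, 0.25)
```

The resulting bytes can be sent over UDP to any OSC-capable synth or visualizer.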
Mommybot: Sleeping Assistant
J Hyuk, S Park
These days, more and more people are suffering from sleep deprivation. Mommybot is a system using Jetson Nano that helps manage a user's sleeping hours. Mommybot has four functions: (1) detect with a camera and register the time of different user events, (2) determine whether a user is asleep using TensorFlow, (3) suggest optimal bedtime hours with sklearn based on previous sleeping habit predictions, and (4) wake up the user according to a preferred sleeping hour schedule.
Project of the Month August 2020
DBSE Monitor: Drowsiness, Blindspot & Emotion Monitor
L Arevalo Oliver, V Altamirano Izquierdo, A Sanchez Gutierrez
Drowsiness, emotion and attention monitor for driving, which also detects objects in blindspots via computer vision. The Jetson Nano takes care of running both PyTorch-powered computer vision applications, using a plethora of libraries to perform their tasks. Two webcams serve as the main sensors: PyTorch identifies faces and eyes for one application and objects for the other, and the information is sent through MQTT in order to emit a sound or show an image on the display. We added geolocation and crash detection with SMS notifications through Twilio and an accelerometer.
Fever Control with Jetson Nano & Lepton3
A useful application for the COVID-19 era: monitor human body temperature and issue alarms in case of fever. This year, the year of COVID-19, I decided to get this project out of the drawer and adapt it to the NVIDIA Jetson Nano.
Leela Chess Zero
As a chess player, I usually find myself using a chess engine for game analysis or opening preparation. Recently, I’ve noticed that chess engines have grown to be super powerful. Consider Leela Chess Zero (aka lc0), the open-source implementation of Google DeepMind’s AlphaZero. It has played so many amazing games that it’s hard for me to pinpoint the best one! This video demonstrates how to load a frontend UCI engine in ChessBase and connect it to a Leela Chess Zero engine running as the backend on an NVIDIA Jetson device (either a Jetson Xavier NX or a Jetson AGX Xavier).
RB-0: Jetson Nano Rover
RB-0 is a hobby-sized rover that uses the same suspension method as NASA's newer differential-bar rovers. It uses a Jetson Nano, a camera, 15 servos, a Circuit Playground Express, and Wi-Fi for lots of fun with maneuvering and running AI. It can climb small obstacles, move its camera in different directions, and steer all 6 wheels. I wanted to make it open source so anyone can have fun and learn from it!
DC-Gan Guitar Effector
Jetson Nano DC-GAN Guitar Effector is a Python app that modifies and adds effects to your electric guitar's raw sound input in real time. The Jetson module captures the instrument's sound through a Roland DUO-CAPTURE mk2 audio interface and outputs the resulting audio of the DC-GAN inference. The one-dimension pix2pix inference model is optimized and run on TensorRT at FP16 precision.
Project of the Month June 2020
AI device for mass fever screening. I combine thermal and visible spectrum cameras to detect people in the scene and measure their skin temperature in a contactless manner, automatically detecting people with no need for a human operator. You can test multiple people at a time, on-the-fly, without interrupting the flow. I decided to use the Raspberry Pi Camera Module v2 because it works out-of-the-box with the NVIDIA Jetson Nano. In my first approach, I used a Single Shot MultiBox Detector trained on the COCO dataset, which lets me detect objects across COCO's 91 classes. The algorithm runs on Jetson Nano's embedded GPU at 9 FPS.
A smart, fast and metrically accurate GPU-accelerated 3D scanner with Jetson Nano and Intel depth sensor for instant 3D reconstruction. This system design makes on-the-go 3D scanning modules without external computing power affordable by any creator/maker around the world, giving users HD 3D models of scanned objects or environments instantly. Using RGBD stereo mapping, render 3D models of people, objects and environments with JetScan.
Vision alerting system with IoT Edge, Azure Custom Vision and Jetson Nano
Create your own object alerting system running on an edge device. For this we will use an NVIDIA Jetson Nano, the Azure Custom Vision service and Azure IoT Edge. The goal is to process the camera frames locally on the Jetson Nano and only send a message to the cloud when the detected object hits a certain confidence threshold.
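The "only message the cloud above a confidence threshold" logic reduces to a simple filter; this sketch assumes hypothetical (label, score) detector output rather than the actual Custom Vision response format:

```python
def alert_messages(detections, threshold=0.6):
    """Keep only detections confident enough to be worth a cloud
    message. `detections` is hypothetical (label, score) output."""
    return [
        {"label": label, "confidence": score}
        for label, score in detections
        if score >= threshold
    ]

# Only the confident "person" detection would trigger a cloud message.
alerts = alert_messages([("person", 0.91), ("dog", 0.42)])
```

Filtering at the edge like this is exactly what keeps bandwidth and cloud costs low.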
Safe Meeting keeps an eye on you during your video conferences, and if it sees your underwear, the video is immediately muted. A camera is connected to an NVIDIA Jetson Nano. This camera is positioned immediately next to a webcam that is used for video conferences, such that it captures the same region.
Due to the COVID-19 pandemic, people cannot drink outside and are looking for alternatives such as drinking with friends through video call. Our team thought that enjoying time wisely with fun interaction is what people need. We focus on the problem that drinking through video call can provide visual and auditory elements, but not physical interaction. Also, since you are drinking alone, it is important to know your drinking status. The model is made with the TensorFlow Object Detection API. Once built, TensorRT can optimize it for real-time execution on Jetson Nano.
The Tale of the Bee-Saving Christmas Tree
We used 64 NVIDIA Jetson Nano Devkits to build the Jetson tree, with a total of 8,192 CUDA cores and 256 CPU cores. We'll use its power to analyze bee videos and investigate the perishing of insects. At apic.ai, we believe technology can help us create a better understanding of nature. We analyse bee behavior like motion patterns and pollen intake. Our monitoring system visually detects bees as they enter and leave their hives. Through their level of activity, mortality and food abundance we gain insights into the well-being of the insects and the plant diversity in the environment, enabling us to evaluate regional living conditions for insects, detect problems and propose measures to improve the situation.
youfork: a fully ROS 2 homemade mobile manipulator running on Jetson AGX Xavier. youfork is a mobile manipulator for home tidy-up, which it performs via teleoperation. All components are driven by ROS 2 Eloquent + Ubuntu 18.04 on the Jetson AGX Xavier.
Originally envisioned as a demonstrator for the Bosch AI CON 2019, the platooning system consists of two cars, a leading car and a following car. The leading car can be driven manually using a PS4 controller and the following car will autonomously follow the leading car. The system currently is also capable of Object Tracking, Velocity Estimation by Optical Flow Visual Odometry and Monocular Depth Estimation.
Project of the Month May 2020
Qrio: A Bot That Plays Videos for My Toddler
Use an object detection AI model, a game engine, Amazon Polly and the Selenium automation framework running on an NVIDIA Jetson Nano to build Qrio, a bot which can speak, recognise a toy and play a relevant video on YouTube.
Narwhal-AI: Ultrasonic Classifier
Listen to, record and classify the sounds coming from a natural environment. Microphones capture audio data, which is then processed using machine learning to identify the animal species, whether it be bird, bat, rodent, whale, dolphin or anything that makes a distinct noise. The key advantage over existing technology is that the audio data is filtered at source, saving both disc space and human intervention. Previously, recordings could easily generate many hours of footage per day, consuming up to 5 GB per hour of disc space and adversely affecting the zoologist's golf handicap and social life.
Deep Reinforcement Learning with JetBot
AI RC car agent using deep reinforcement learning on Jetson Nano. This software enables self-learning for your AI RC car in a matter of minutes. In the demo video, the JetBot does deep reinforcement learning in the real world using SAC (soft actor-critic). The DRL process runs on the Jetson Nano. This project was inspired by a great post by Antonin Raffin.
Project of the Month April 2020
Smart Social Distancing
As a response to the COVID-19 pandemic, Neuralet released an open-source application to help people practice physical distancing rules in […] retail spaces, construction sites, factories, healthcare facilities, etc. […] Our approach uses […] edge AI devices such as Jetson Nano to track people in different environments and measure adherence to social distancing guidelines, and can give notifications each time social distancing rules are violated.
Deep Clean watches a room and flags all surfaces as they are touched, for special attention in the next cleaning to prevent disease spread. A stereo camera detects the depth (z-coordinate) of an object of interest (e.g. a hand) in the video frame. OpenPose is used to detect hand location (x, y coordinates). When a hand is at the same position and depth as another object in view (i.e. touching), that location is tracked.
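The touch test itself is simple geometry: a hand keypoint and a surface point count as touching when they coincide in x, y and depth within a tolerance. A minimal sketch (the coordinates and tolerance below are illustrative assumptions, not the project's calibration):

```python
def is_touching(hand, surface, tol=0.05):
    """True when the hand keypoint lies within `tol` (same units as
    the coordinates) of the surface point in x, y and depth z."""
    return all(abs(h - s) <= tol for h, s in zip(hand, surface))

# Flag which tracked surface points the hand is currently touching.
touched = [p for p in [(0.5, 0.5, 1.2), (0.9, 0.1, 2.0)]
           if is_touching((0.52, 0.49, 1.21), p)]
```

Flagged locations would then accumulate into the "needs cleaning" map.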
NVIDIA / Hackster AI at the Edge Challenge 1st Place
Reading Eye For The Blind
Allows the reading-impaired to hear both printed and handwritten text by converting recognized sentences into synthesized speech. Place some text under the camera, toggle the power switch, and click the start button. Using the IAM Database, with more than 9,000 pre-labeled text lines from 500 different writers, we trained a handwritten text recognition model.
NVIDIA / Hackster AI at the Edge Challenge 1st Place
With MixPose, we are building a streaming platform to empower fitness professionals, yoga instructors and dance teachers through the power of AI. Instructors can teach from anywhere they feel comfortable, and users can watch the stream in the comfort of their own TV.
NVIDIA / Hackster AI at the Edge Challenge 1st Place
Nindamani the Weed Removal Robot
Nindamani, the AI-based mechanical weed removal robot, autonomously detects and segments weeds from crops using artificial intelligence. All of the robot's modules are built natively on ROS2. Nindamani can be used in the early stages of any crop for autonomous weeding.
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
Easy-to-implement and low-cost modular framework for complex navigation tasks. Visual-based autonomous navigation systems typically require visual perception, localization, navigation, and obstacle avoidance. We propose using a single RGB camera and techniques such as semantic segmentation with deep neural networks (DNNs), simultaneous localization and mapping (SLAM), path planning algorithms, and deep reinforcement learning (DRL) to implement the four functionalities mentioned above.
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
Bandwidth Reduction with Anomaly Detection
We experiment with visual anomaly detection to develop techniques for reducing bandwidth consumption in streaming IoT applications. There seems to be no avoiding the tradeoff of spending compute to save bandwidth, but we want to spend that compute intelligently by taking advantage of context. With visual anomaly detection, we stream ONLY infrequent anomalous images, exploring unsupervised methods that reduce bandwidth by learning the context of a scene in order to filter redundant content from streaming video.
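One simple unsupervised baseline for this idea keeps a running background statistic and streams only frames that deviate from it. The sketch below uses per-frame mean intensity with an exponential moving average, which is an illustrative simplification, not the project's actual method:

```python
def anomalous_frames(frames, threshold=10.0, alpha=0.1):
    """Return indices of frames worth streaming: those whose mean
    intensity deviates from a running background model.  Each frame
    is a flat list of pixel intensities (illustrative format)."""
    background = None
    kept = []
    for i, frame in enumerate(frames):
        mean = sum(frame) / len(frame)
        if background is None:
            background = mean  # first frame seeds the model
            continue
        if abs(mean - background) > threshold:
            kept.append(i)  # anomalous: worth the bandwidth
        # Slowly adapt to gradual scene changes (lighting, etc.).
        background = (1 - alpha) * background + alpha * mean
    return kept
```

Everything not in the returned list is filtered at the edge and never transmitted.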
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
AIoT - Artificial Intelligence on Thoughts
Learn how to read in and signal-process brainwaves, build and train an autoencoder to compress the EEG data to a latent representation, use the k-means machine learning algorithm to classify the data to determine brain state, and use that information to control physical hardware! Along the way, pick up tips on creating GUIs and real-time graphics in Python!
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Tracked vehicle made with Lego Technic parts and motors, enhanced with LiDAR and controlled by a Jetson Nano board running the latest Isaac SDK. Issue voice commands and get the robot to move autonomously. Create missions: navigate and set where the tank should go. If the camera detects the target object, it will get closer and shoot it with... the camera. It'll just take a picture, no real weapons :)
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Deep Eye - DeepStream Based Video Analytics
A hardware platform combined with DeepLib, an easy-to-use but powerful Python library, and a Web IDE for rapid prototyping of video analytics projects with the Jetson Nano. It supports up to two MIPI CSI cameras, mounted on a rotating platform. The project consists of three main components.
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Clean Water AI is an IoT device powered by NVIDIA Jetson that classifies and detects dangerous bacteria and harmful particles. The system can run in real time, with cities installing IoT devices across different water sources and monitoring water quality as well as contamination continuously. We use the TensorFlow Object Detection API to detect contaminants and WebRTC to let users check water sources the same way they check security cameras.
ActionAI: Custom Tracking & MultiPerson Activity Recognition
We introduce an IVA pipeline to enable the development and prototyping of AI social applications. ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example. With ActionAI, a Jetson Nano, a USB camera and the PS3 controller's rich input interface, this makes an ideal prototyping and data-gathering platform for Human Activity Recognition, Human-Object Interaction, and Scene Understanding tasks.
WebRTC Native Client Momo
Momo is a native client that can distribute video and audio via WebRTC from browser-less devices, such as wearable devices or a Raspberry Pi. Using Jetson Nano's hardware encoder, it is possible to deliver 30fps video at 4K to a browser with a delay of less than 1 second. Momo is released on GitHub as open source under the Apache License 2.0, and anyone can use it freely under the license. You must try 4K/30fps video distribution over WebRTC with Momo!
Project of the Month February 2020
ROS node for real-time FCNN-based depth reconstruction. The platforms are NVIDIA Jetson TX2 and x86_64 PC with GNU/Linux (aarch64 should work as well, but not tested).
Shoot Your Shot!
This computer vision booth analyzes users throwing darts from multiple cameras, scoring each dart before logging data to the cloud. To analyze the player's form, we use pose estimation to track body parts through a throwing session. This demo uses two cameras, one viewing the thrower and one viewing the dartboard, to track poses and dart placement.
Tipper predicts if a pitch will be in or out of the strike zone in real time. The batter will see a green or red light illuminate in their peripheral vision if the pitch will be in or out of the strike zone, respectively. [...] A convolutional neural network running on an NVIDIA Jetson AGX Xavier rapidly classifies these images against a model built during the training phase of the project. If the images are classified as in the strike zone, a green LED on a pair of glasses (in the wearer's peripheral vision) is lit. Conversely, if the ball is predicted to be out of the strike zone, a red LED is lit.
Project of the Month January 2020
Point-Voxel CNN for Efficient 3D Deep Learning
In our NeurIPS’19 paper, we propose Point-Voxel CNN (PVCNN), an efficient 3D deep learning method for various 3D vision applications. Here we show the 3D object segmentation demo which runs at 20 FPS on Jetson Nano. Note that the most efficient previous model, PointNet, runs at only 8 FPS. We also show the performance of 3D indoor scene segmentation with our PVCNN and PointNet on Jetson AGX Xavier. Remarkably, our network takes just 2.7 seconds to process more than one million points, while PointNet takes more than 4.1 seconds and achieves around 9% worse mIoU compared with our method.
Robaka 2: Self-Driving Hoverboard with ROS
My first mobile robot, Robaka v1, was a nice experience, but the platform was too weak to carry the Jetson Nano. The next milestone was building a robot ready to carry a real payload and drive outdoors. I stumbled upon Niklas Fauth’s repo, which summarized the reverse-engineering efforts on hoverboards, shared the open-source firmware, and gave instructions on reprogramming the controller. Another project, Bipropellant, extends his firmware, enabling hoverboard control via a serial protocol. I built the platform around this and added a ROS-enabled controller for the motors.
Project of the Month December 2019
Jetbot Gazebo Soccer Simulation
Gazebo reduces the inconvenience of having to test a robot in a real environment by allowing control in a simulated one. Deep learning makes robots play games more like a human. My goal with this project is to combine these two benefits so that the robot can play soccer without human support. Two Jetbots are placed on the field: one tries to score a goal and the other tries to defend it. For multi-agent cases such as this, self-play reinforcement learning tools can be used.
Deepstream SDK + Azure IoT Edge on Jetson Nano
Do realtime video analytics with the DeepStream SDK on a Jetson Nano connected to Azure via Azure IoT Edge. DeepStream is a highly-optimized video processing pipeline capable of running deep neural networks. It's a must-have tool for complex video analytics requirements, whether realtime or with cascading AI models. IoT Edge gives you the possibility to run this pipeline next to your cameras, where the video data is being generated, thus lowering your bandwidth costs and enabling scenarios with poor internet connectivity or privacy concerns. Transform cameras into sensors that know when there is an available parking spot, a missing product on a retail store shelf, an anomaly on a solar panel, a worker approaching a hazardous zone, etc.
Real-time Pupil Detection with DeepLabCut
Realtime pupil and eyelid detection with DeepLabCut running on a Jetson Nano. In neuroscience research, this provides a realtime readout of animal and human cognitive states, as pupil size is an excellent indicator of attention, arousal, locomotion, and decision-making processes. As one example application, you could use this setup to trigger a reward when the experimentee is alert.
Multimedia Sharing Tool with Jetson Nano
Share video, screen, camera and audio with an RTSP stream through LAN or WAN, supporting CUDA computations in a high-performance embedded environment (NVIDIA Jetson Nano) and applying real-time AI techniques such as intrusion detection with bounding boxes, localization and frame manipulation.
BatBot: An Experimental AI-Vision Robot
An AI research robot created from commodity parts. The lower half is an Elegoo Robot Car v3.0; the upper half is a Jetson Nano. An Android app controls it with spoken English, translated and sent over Bluetooth. The robot has a camera, an ultrasonic distance sensor, and a 40-pin GPIO header available for expansion. High-level spoken commands like 'WHAT ARE YOU LOOKING AT?' instruct the robot to photograph and identify objects. The command 'GO FIND SOME-OBJECT' instructs the robot to locate, identify and photograph an object. Low-level spoken commands like 'WHAT IS YOUR IP-ADDRESS?' or 'LOOK TO THE LEFT' obtain information and/or control the robot directly. Teach BatBot to identify new objects by using voice commands.
FastDepth: Fast Monocular Depth Estimation on Embedded Systems
There has been a significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and size of monocular cameras. We explore learning-based monocular depth estimation, targeting real-time inference on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. We deploy our proposed network, FastDepth, on the Jetson TX2 platform, where it runs at 178 fps on the GPU and at 27 fps on the CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset.
SINTEF Self-Driving Truck with Induction Charger
This small-scale self-driving truck using a Jetson TX2 and ROS Kinetic was built to demonstrate the principle of a wireless inductive charging system developed by the Norwegian research institute SINTEF for road use. It navigates using one of two modes: SLAM/Pure Pursuit path tracking, or supervised deep learning based on NVIDIA DAVE-2.
Autonomous drone using ORBSLAM2 on the Jetson Nano
Run ORBSLAM2 and implement closed-loop position control in real time on the Jetson Nano, using recorded rosbags (e.g., EuRoC) or live footage from a Bebop2 drone. Tested with a realtime monocular camera using ORBSLAM2 and the Bebop2. In the Autonomous Drones Lab at Tel Aviv University, we research, develop and implement solutions for autonomous navigation in GPS-denied environments. To validate our solutions, we work mainly on prototype drones to achieve quick integration between hardware, software and algorithms.
GPU-enabled Kubernetes Cluster for Machine Learning with Jetson Nano
Jetson Nano is a fully-featured GPU board compatible with NVIDIA CUDA libraries. CUDA is the de-facto standard for modern machine learning computation. Having such a cheap, CUDA-equipped device, we thought: let’s build a machine learning cluster. If you think “cluster”, you typically think “Kubernetes”, which is commonly used to manage distributed applications running on up to hundreds of thousands of machines. Ours is composed of four, though the approach is applicable to any number of Jetson Nanos.
Project of the Month November 2019
Temporal Shift Module for Efficient Video Understanding
TSM is an efficient and lightweight operator for video recognition on edge devices. Conventional methods using 3D convolution for temporal modeling are computationally expensive, making them difficult to deploy on embedded devices with tight power constraints. In this ICCV’19 paper, we propose the Temporal Shift Module (TSM), which can achieve the performance of a 3D CNN while maintaining a 2D CNN’s complexity by shifting channels along the temporal dimension. TSM enables real-time, low-latency online video recognition and video object detection. On NVIDIA Jetson Nano, it achieves a low latency of 13 ms (76 fps) for online video recognition.
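The shift itself is cheap to express: move a fraction of the channels one step back in time and another fraction one step forward, leaving the rest in place. A plain-Python sketch with zero padding at the clip boundaries (the real TSM operates on CNN feature maps, not raw lists):

```python
def temporal_shift(clip, fold_div=4):
    """Shift 1/fold_div of the channels one step back in time and
    another 1/fold_div one step forward; the rest stay in place.
    `clip` is a list of frames, each a list of channel values."""
    t, c = len(clip), len(clip[0])
    fold = c // fold_div
    out = [[0.0] * c for _ in range(t)]
    for i in range(t):
        for j in range(c):
            if j < fold:            # shift left: take from the next frame
                src = i + 1
            elif j < 2 * fold:      # shift right: take from the previous frame
                src = i - 1
            else:                   # no shift
                src = i
            if 0 <= src < t:        # out-of-range sources stay zero-padded
                out[i][j] = clip[src][j]
    return out
```

Because it only moves data, the shift adds temporal mixing at essentially zero FLOPs.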
This is an implementation of the Rock-Paper-Scissors game played against a machine. The Jetson Nano Developer Kit is used for AI recognition of hand gestures.
With Jetson-FFMpeg, use FFmpeg on the Jetson Nano via the L4T Multimedia API, supporting hardware-accelerated encoding of H.264 and HEVC. FFmpeg is a highly portable multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much any format, from the most obscure ancient formats up to the cutting edge.
Jetson-Stats is a package for monitoring and controlling your NVIDIA Jetson [Nano, Xavier, TX2i, TX2, TX1] embedded board. When you install jetson-stats, the following are included:
This software was written to monitor the security of my home using one or more Pi cameras. The cameras perform motion detection and record video, which is sent in an email. After recording, an object detection model running on a Jetson Nano checks whether a person is present in the video. A set of four Raspberry Pi Zeros stream video over Wi-Fi to a Jetson TX2, which combines inputs from all sources, performs object detection and displays the results on a monitor.
Tiny YOLO v2 Inference with NVIDIA TensorRT
This application downloads a tiny YOLO v2 model from Open Neural Network eXchange (ONNX) Model Zoo, converts it to an NVIDIA TensorRT plan and then starts the object detection for camera captured image.
Quantify the world: monitor urban landscapes with this offline, lightweight, DIY solution. The simple setup allows you to become an urban data miner. Install it on an NVIDIA Jetson board with a Logitech webcam and count cars, pedestrians, and motorbikes from your livestream, running YOLO and tracking software we built. Access it via smart devices, define areas to track, count, and export the data once you're finished. You can use this system for surveying without saving video data, so it doesn't intrude on the privacy of the counted objects. Where the data goes and what happens during the counting algorithm is transparent.
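Counting per user-defined area boils down to testing detection centers against rectangles; this sketch assumes hypothetical pixel coordinates and area names rather than the project's actual interface:

```python
def count_in_areas(detections, areas):
    """Count detected object centers falling inside each user-defined
    rectangular area, given as (x0, y0, x1, y1) in pixels."""
    counts = {name: 0 for name in areas}
    for x, y in detections:
        for name, (x0, y0, x1, y1) in areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

# Two detections, each landing in a different user-defined area.
counts = count_in_areas([(5, 5), (20, 20)],
                        {"crosswalk": (0, 0, 10, 10), "lane": (15, 15, 30, 30)})
```

Exporting only these aggregated counts, never the frames, is what preserves privacy.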
JetsonSky: Electronically Assisted Astronomy
With Electronically Assisted Astronomy, the camera replaces your eye. With a telescope, simply observe the deep sky on a screen or even record videos of your observations, using AI to enhance your images. I wanted to make a fully autonomous system I could control from my computer at home using a VNC client, instead of being outside during very cold nights.
Build a scalable attention-based speech recognition platform in Keras/Tensorflow for inference on the NVIDIA Jetson Platform for AI at the Edge. This real-world application of automatic speech recognition was inspired by my previous career in mental health. This project begins a journey towards building a platform for real-time therapeutic intervention inference and feedback. The ultimate intent was to build a tool to give therapists real-time feedback on the efficacy of their interventions, but on-device speech recognition has many applications in mobile, robotics, or other areas where cloud-based deep learning is not desirable.
Transfer Learning with JetBot & Traffic Cones
Driving around construction areas, I often think how challenging it would be for self-driving cars to navigate the traffic cones. It turns out it's not so difficult with NVIDIA's JetBot: with only a couple hundred images, you can train a state-of-the-art deep learning model to teach your robot to navigate a maze of toy traffic cones using only an onboard camera and no other sensors.
Multi-agent System for non-Holonomic Racing (MuSHR)
The University of Washington's Personal Robotics Lab has recently open-sourced the MuSHR Racecar Project: a robotic racecar equipped with lidar, a D435i RealSense camera, and an NVIDIA Jetson Nano. The car can be used for machine learning, vision, autonomous driving, and robotics education. Build instructions and tutorials can all be found on the MuSHR website!
Project of the Month October 2019
My AI is so bright, I gotta wear shades. Effect change in your surroundings by wearing these AI-enabled glasses. ShAIdes is a transparent UI for the real world. A camera is attached to the frames of a pair of glasses, capturing what the wearer sees. It feeds realtime images to an NVIDIA Jetson Nano, which runs two separate image classification CNN models, one to detect objects, and another to detect gestures made by the wearer. When combinations of known objects and gestures are detected, actions are fired that manipulate the wearer’s environment.
OCR Tesseract Docker App on BalenaCloud
Upload images using Flask, a lightweight web framework intended for development use; preprocess and reduce image noise using OpenCV; and perform OCR using Python-tesseract. Originally deployed in a Docker container on AWS, this version is deployed to a Jetson Nano using BalenaCloud.
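The noise-reduction step before OCR can be sketched in plain NumPy (the project itself uses OpenCV): binarizing the grayscale image with Otsu's threshold, which picks the threshold that maximizes between-class variance, before handing the image to Tesseract.

```python
import numpy as np

def otsu_binarize(gray):
    """Binarize a grayscale uint8 image using Otsu's threshold.

    A plain-NumPy stand-in for OpenCV's cv2.threshold(..., THRESH_OTSU),
    shown so the preprocessing step is explicit. Returns (threshold, binary).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    idx = np.arange(256)
    w_bg = np.cumsum(hist)                       # pixels with value <= t
    w_fg = total - w_bg
    sum_bg = np.cumsum(hist * idx)
    mu_bg = sum_bg / np.maximum(w_bg, 1)         # background mean per t
    mu_fg = (sum_bg[-1] - sum_bg) / np.maximum(w_fg, 1)
    var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
    t = int(var_between.argmax())                # threshold maximizing variance
    return t, np.where(gray > t, 255, 0).astype(np.uint8)
```

The binary image (black text on a white background, or the inverse) is what Tesseract typically handles best.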
P.A.N.T.H.E.R.: Powerful Autonomous eNTity High-End Robot
Using its two tracks, ZED stereo camera and the NVIDIA Jetson TX2, this robot explores the outdoors and interacts with its surroundings. Weighing 9kg (20lbs), with 7cm (2.7in) of ground clearance, and a track system composed of three different dampers to absorb vibrations when drifting on grass, P.A.N.T.H.E.R. can climb little rocks and bumps. P.A.N.T.H.E.R. is built with plexiglass, aluminium, plastic, and other materials, is integrated with ROS, and all code is available on GitHub.
OpenALPR License Plate Recognition
The parking garage of my apartment upgraded to a license plate recognition system. I expected it to fail and hinder me from entering or exiting, but I was wrong: it has worked with 100% success, even without a license plate on my front bumper or good car hygiene. Inspired, I built my own license plate detector using OpenALPR and a Jetson Nano.
Project of the Month September 2019
Recognizing Sign Language with Jetson Nano
The Jetson Nano caches this model into memory and uses its 128-core GPU to recognize live images at up to 60 FPS. That high-FPS live recognition is what sets the Nano apart from other IoT devices. I have been hearing recommendations toward "train in the cloud, deploy at the edge" and this seemed like a good reason to test that concept. Mission accomplished.
The IntelligentEdgeHOL walks through the process of deploying an IoT Edge module to an NVIDIA Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, or an attached webcam.
Detecting Minifigures with Jetson Nano
For this project I had to build a rotating platform, and I decided to use an interlocking block set for it. My idea was to place the set's minifigures on top of the platform, fix the Raspberry Pi camera in front of it, and rotate the platform at different speeds to test how well Jetson Nano recognition works.
Fruit Classification with Jetson Nano
Classification of fruits on the NVIDIA Jetson Nano using TensorFlow. Tested on the Jetson Nano, but it should work on other platforms as well. For classifying anything we need a proper dataset, so I made my own: a small one with 6 classes and a total of 600 images (100 per class). I used the camera-capture utility in the Hello AI World example to capture the images.
Donkey Car 3.0 with Jetson Nano
Having read some amazing books on machine learning, I had been looking for opportunities to apply ML from first principles in the real world. That was what got me curious about the wonderful Donkey® Car project. The project is essentially a how-to guide to building your own RC car which can drive itself around a track using classical control theory, computer vision or in my case machine learning. I wanted to experiment with more sophisticated models. As I was constrained by the CPU on the Asus Tinkerboard S, I decided to level-up using the NVIDIA Jetson Nano.
Run real-time, multi-person pose estimation on the Jetson Nano using a Raspberry Pi camera to detect human skeletons, just like a Kinect does. With this setup, you can obtain about 7–8 FPS.
Jetson Nano Detection and Tracking
This repository is my set of install tools to get the Jetson Nano up and running with a convincing and scalable demo for robot-centric uses: in particular, detection and semantic segmentation models capable of running in real time on a $100 robot. By convincing, I mean not using NVIDIA's two-day startup model that you just compile and have magically working without any control. This setup gives you full control over which model to run and when.
Fast Object Detector for the Jetson Nano
MobileDetectNet is an object detector that uses a MobileNet feature extractor to predict bounding boxes. It was designed to be computationally efficient for deployment on embedded systems and easy to train with limited data. It was inspired by the simple yet effective design of DetectNet and enhanced with the anchor system from Faster R-CNN.
OpenCV with CUDA for Jetson Nano
A small script to build OpenCV 4.1.0 on a barebones system. The script installs build dependencies, clones a requested version of OpenCV, builds it from source, tests it, and installs it.
Jetson Nano Insulator Detection: TensorFlow & TensorRT
Insulator detection with a custom-trained ssd_mobilenet_v1 network. Testing with a TensorFlow frozen graph gives about 0.07 seconds per image (~15 FPS). I received better results (about 20 FPS) with the TensorRT library.
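Per-image latency figures like the 0.07 s (~15 FPS) above can be measured with a small harness. Here `infer` is a stand-in for whatever runs the network (a TensorFlow frozen-graph session or a TensorRT engine), not this project's actual API; any callable taking one image works.

```python
import time

def benchmark(infer, image, warmup=5, iters=50):
    """Time a single-image inference callable; return (latency_s, fps).

    Warm-up iterations are discarded so one-time costs (graph setup,
    GPU kernel compilation, caches) do not skew the measurement.
    """
    for _ in range(warmup):
        infer(image)
    start = time.perf_counter()
    for _ in range(iters):
        infer(image)
    latency = (time.perf_counter() - start) / iters   # seconds per image
    return latency, 1.0 / latency                     # frames per second
```

Comparing the number this reports before and after converting the model to a TensorRT engine makes the ~15 FPS versus ~20 FPS difference easy to reproduce.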
Interface Touch Sensor, Accelerometer, IV Sensor, OLED
Grove is an open source, modular, and ready-to-use toolset. It takes a building-block approach to assembling electronics, simplifying the learning process. If you want to use Grove sensors with the Jetson Nano, the best way is to grab the grove.py Python library and get your sensors up and running in minutes! Currently, more than 20 Grove modules are supported on the Jetson Nano.
Project of the Month August 2019
Smart Doorbell Camera
We’ll create a simple version of a doorbell camera that tracks everyone that walks up to the front door of your house. With face recognition, it will instantly know whether the person at your door has ever visited you before—even if they were dressed differently. And if they have visited, it can tell you exactly when and how often.
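The "has this person visited before" check can be sketched as nearest-neighbour matching over face encodings. This assumes 128-dimensional encodings such as those produced by the popular face_recognition library, where a Euclidean distance under roughly 0.6 is conventionally treated as the same person; the function name is illustrative.

```python
import numpy as np

# Conventional face_recognition distance threshold for "same person";
# tune it for your own camera and lighting conditions.
MATCH_THRESHOLD = 0.6

def match_visitor(known_encodings, new_encoding, threshold=MATCH_THRESHOLD):
    """Return the index of the closest known visitor, or None if unseen.

    known_encodings: list of 128-d vectors, one per previously seen visitor.
    new_encoding:    128-d vector for the face currently at the door.
    """
    if not known_encodings:
        return None
    distances = np.linalg.norm(np.asarray(known_encodings) - new_encoding, axis=1)
    best = int(distances.argmin())
    return best if distances[best] < threshold else None
```

When `None` comes back, the doorbell would register a new visitor and log the timestamp; a repeat match instead appends to that visitor's visit history, which is how the "when and how often" report is built up.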
Open Source Autocar (1/10th scale) with Jetson Nano
With this open-source autocar powered by Jetson Nano, you can seamlessly toggle between your remote-controlled manual input and your AI-powered autopilot mode!
Donkey Car with Jetson Nano
Open source hardware and software platform to build a small-scale self-driving car. Donkeycar is a minimalist and modular self-driving library for Python, developed for hobbyists and students with a focus on fast experimentation and easy community contributions.