Jetson Community Projects
Explore and learn from Jetson projects created by us and our community — a collection of cool projects, applications and demos built on the NVIDIA Jetson platform for Jetson developer kits. Scroll down to see projects with code, videos, instructions and more.
JetBot
Open-source project for learning AI by building fun applications. It’s easy to set up and use, is compatible with many accessories and includes interactive tutorials showing you how to harness the power of AI to follow objects, avoid collisions and more. The kit includes the complete robot chassis, wheels and controllers along with a battery and 8MP camera. Supports AI frameworks such as TensorFlow and PyTorch.
Hello AI World
Start using Jetson and experiencing the power of AI. In a couple of hours you can have a set of deep learning inference demos up and running for real-time image classification and object detection, using pretrained models on your Jetson Developer Kit with JetPack SDK and NVIDIA TensorRT. The tutorial focuses on networks related to computer vision, including the use of live cameras, and you'll also code your own easy-to-follow recognition program in C++.
JetRacer is an autonomous AI racecar built on NVIDIA Jetson Nano.
Real-time Human Pose Estimation
This project features multi-instance pose estimation accelerated by NVIDIA TensorRT, and is ideal for applications where low latency is necessary.
Have a Jetson project to share? Post it on our forum for a chance to be featured here too. Every month, we’ll award one Jetson AGX Xavier Developer Kit to a project that’s a cut above the rest for its application, inventiveness and creativity.
Furthermore, you can earn an AI Certification by submitting the Jetson project that you created. Learn more about Jetson AI Certification Programs.
Handwriting ML Classifier with Docker, Jetson Nano and Flask
A machine-learning handwriting classifier. Try out your handwriting on a web interface that classifies the characters you draw as alphanumeric characters. Use the EMNIST Balanced character dataset to train a PyTorch model to deploy on Jetson Nano using Docker, with a web interface served by Flask.
Ellee: A Talking Teddy Bear Powered by GPT-3 & Computer Vision
Ellee is a teddy bear robot running on Jetson Nano that can see, recognize people and use their names in natural conversation. With servo motors, it can turn its head and make eye contact with those it talks to. The object detection and facial recognition system is built on MobileNetSSDV2 and Dlib, while conversation is powered by a GPT-3 model, Google Speech Recognition and Amazon Polly.
BirdCam is a framework on Jetson Nano that classifies urban fauna. It uses traditional image processing and machine learning to perform real-time classification of the animals that visit the feeder. The hardware setup involves a camera and an optional LED illuminator. The project can also respond to unwanted visitors such as rats in real time by activating a stream of water.
R1mini ROS2 SLAM Mapping and Navigation
This is a demonstration of OmoRobot's autonomous driving platform R1mini and its ROS2-based SLAM-mapping and indoor autonomous driving. Using Jetson Nano and YD LiDAR sensors on the R1mini Pro, you can try SLAM-mapping and indoor autonomous driving with just a few simple commands.
Green Iguana Detection and Surveillance
Green iguanas can damage residential and commercial landscape vegetation. Detect and monitor their location in real time, receiving notifications and using a live dashboard to identify trends. Following this project, you can build a training set using Selenium and MakeSense.ai, then follow NVIDIA TAO Toolkit to adapt, optimize and retrain a pre-trained model before exporting it for edge device deployment. The output can be converted for TensorRT and finally run with DeepStream SDK to power the video analytics pipeline.
Automated supervision and warning system for lab equipment using Jetson and MQTT. This system monitors equipment from the '90s running on x86 computers. Because of the lack of software updates or modern OS support, the equipment can't be integrated into modern monitoring solutions at all. Originally, human operators or technicians monitored the machines 24/7, waiting for red LED warning messages. Jetson addresses this in a cost-effective manner: an HDMI grabber is attached with the necessary adapters (for the equipment's VGA or DVI outputs, etc.), and a classification model is trained to recognize "good" and "bad" states, alert supervisors, or even turn off the power supply if something goes really wrong.
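The alerting flow described above can be sketched as a small state machine. This is a hypothetical illustration: the classifier labels and the publish callback are stubbed, and in a real deployment the callback would be an MQTT client's publish method (e.g. from paho-mqtt).

```python
from collections import deque

class EquipmentMonitor:
    """Debounce classifier output over a sliding window before alerting,
    so a single misclassified frame doesn't page a supervisor."""

    def __init__(self, publish, window=5, bad_threshold=4):
        self.publish = publish          # e.g. an MQTT client's publish(topic, payload)
        self.recent = deque(maxlen=window)
        self.bad_threshold = bad_threshold
        self.alerted = False

    def update(self, label):
        """Feed one classifier label ('good' or 'bad') per captured frame."""
        self.recent.append(label)
        bad_count = sum(1 for l in self.recent if l == "bad")
        if bad_count >= self.bad_threshold and not self.alerted:
            self.publish("lab/equipment/alert", "bad state detected")
            self.alerted = True
        elif bad_count == 0:
            self.alerted = False        # re-arm once the equipment recovers

messages = []
monitor = EquipmentMonitor(lambda topic, payload: messages.append((topic, payload)))
for label in ["good", "bad", "bad", "bad", "bad"]:
    monitor.update(label)
```

With four consecutive "bad" frames in the five-frame window, exactly one alert is published; topic name and thresholds are made up for the sketch.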
Cat Laser Turret Robot
To make sure my cat gets lots of exercise inside the house over the winter, I added object detection (YOLOv5) to find him. With a ZED2 stereo camera, I located his position and used a robot arm (NED) to point a laser pointer just out of his reach. All of the processing was done on a Jetson AGX Xavier, and the arm was controlled using ROS.
For newborn babies, turning over and lying on their stomachs carries a risk of suffocation, so it is key to make sure they sleep or stay in a supine position. For their well-being, BabyWatcher monitors your newborn's position and detects whether they are in a prone or supine position. To create the transfer learning model, based on SSD-Mobilenet, training material was annotated with CVAT, exported into Pascal VOC format, merged into a single dataset and automatically split into training/validation sets. The final transfer learning model is then converted into ONNX format.
JetMax is an AI vision open-source robotic arm powered by Jetson Nano, with source code for a multitude of projects and AI tutorials. The API is completely open for customization and supports Python, C++ and Java. The camera brackets are adaptably designed to fit different angles according to your own setup needs.
Self-Driving-ish Computer Vision System
This project does object detection, lane detection, road segmentation and depth estimation. These deep learning models run on Jetson Xavier NX and are built on TensorRT. In order to get nice-looking visual output, this project employs tracking, curve-fitting and transforms using projective geometry and a pinhole camera model.
Go Motion simplifies stop motion animation with machine learning. A CSI camera is connected to a Jetson Xavier NX. This camera continually captures images of a scene. Using the trt_pose_hand hand pose detection model, the Jetson is able to determine when a hand is in the image frame. When all hands leave the frame, an image is saved as part of the stop motion sequence. It is possible to continually manipulate the scene, momentarily removing one’s hands from the camera’s view after each adjustment, and have a stop motion sequence automatically generated that contains only the relevant image frames.
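The trigger logic described above reduces to a tiny state machine: capture a frame each time all hands leave the view. A hypothetical sketch, where hand_count stands in for the trt_pose_hand model's per-frame output:

```python
class StopMotionTrigger:
    """Save a frame each time all hands leave the camera's view."""

    def __init__(self, save_frame):
        self.save_frame = save_frame
        self.hand_was_present = False

    def process(self, frame, hand_count):
        # hand_count would come from a hand pose model such as trt_pose_hand
        if hand_count > 0:
            self.hand_was_present = True
        elif self.hand_was_present:
            # hands just left the frame: the scene is ready to photograph
            self.save_frame(frame)
            self.hand_was_present = False

saved = []
trigger = StopMotionTrigger(saved.append)
frames = [("f0", 0), ("f1", 2), ("f2", 2), ("f3", 0), ("f4", 0), ("f5", 1), ("f6", 0)]
for frame, hands in frames:
    trigger.process(frame, hands)
```

Only the frames captured right after the hands withdraw ("f3" and "f6" in this toy sequence) join the stop motion sequence; frames with hands in view are discarded.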
Weed Killing Robot
A tracked mobile robot called a Bunker that moves around a yard, with a Gen3 arm from Kinova mounted on top. A built-in camera on the arm sends a video feed to a Jetson AGX Xavier inside of a Rudi-NX Embedded System, with a trained neural network for detecting garden weeds. The arm moves a propane-fuelled flamethrower to kill the weeds.
Project of the Month December 2021
A standalone AI-based synthesizer in the Eurorack format. Neurorack envisions the next generation of music instruments, providing AI tools to enhance the way musicians think about and compose music. Its real-time capabilities rely on the Jetson Nano's processing power and Ninon Devis' research into crafting trained models that are lightweight in computation and memory footprint. Neurorack uses PyTorch deep audio synthesis models to produce sounds that are impossible to achieve without samples while being easy to manipulate, all without requiring a separate computer.
AI for Healthcare with Jetson Nano 2GB
This project uses deep learning concepts and builds upon the NVIDIA Hello AI World demo in order to detect various deadly diseases. It can currently detect lung cancer, COVID-19, tuberculosis, and pneumonia. It uses chest/lung CT-Scans and X-ray images from two Kaggle training datasets and has an accuracy between 50% and 80%. It can take live video input or images in several formats to provide accurate output.
Project of the Month November 2021
LiveChess2FEN: a CNN-based Chess Piece-Classifying Framework
Digitize live chess games into FEN notation. LiveChess2FEN is a fully functional framework that automatically digitizes the configuration of a chessboard and is optimized for execution on Jetson Nano. Our first contribution has been accelerating the chessboard's detection algorithm. Subsequently, we have analyzed different convolutional neural networks for chess piece classification and how to map them efficiently onto our embedded platform. Notably, we have implemented a functional framework that automatically digitizes a chess position from an image in less than 1 second, with 92% accuracy when classifying the pieces and 95% when detecting the board.
Project of the Month October 2021
Acute Lymphoblastic Leukemia Classification with Jetson Nano
This research-only Jetson Nano classifier for Acute Lymphoblastic Leukemia (ALL) was developed using Intel® oneAPI AI Analytics Toolkit and Intel Optimization for TensorFlow for training acceleration. The TensorRT model achieves an average inference time of 0.07 seconds. You can create custom trained models in TFRT, ONNX & TensorRT formats using the Acute Lymphoblastic Leukemia Image Database for Image Processing, test on your development machine and deploy to run on your Jetson Nano. For more Acute Lymphoblastic Leukemia information please visit this Leukemia Information page.
Our goal is to build a research platform that can be used to develop state estimation, mapping and scene understanding applications. Our sensor suite consists of stereo RGB cameras, an RGB-Depth camera, a thermal camera, an ultrasonic range finder, a GNSS (Global Navigation Satellite System) receiver, IMUs (Inertial Measurement Unit), a pressure sensor, a temperature sensor and a power sensor. Our embedded processing platform consists of an Arduino Zero microcontroller and [a] Jetson Xavier NX. Our embedded power source consists of a USB-C power bank.
This open-source, standalone 3D-printed robot hand contains a mimicking demo that allows it to copy one of five hand gestures it sees through a camera which is fixed into its palm. The hand's servos are capable of a rotation range of about 270°, and each finger has two: one for curling by pulling on a string “tendon” and one for wiggling sideways. A wrist servo swings the hand back and forth. The hand is mounted onto a base with a Jetson Nano Developer Kit.
Jetson Multicamera Pipelines
Jetson Multicamera Pipelines is a Python package that facilitates multi-camera pipeline composition and building custom logic on top of the detection pipeline, all while helping reduce CPU usage by using different hardware accelerators on the Jetson platform. With relatively simple Python code, custom logic can involve capture, batching, HW inference and encoding with multiple cameras.
Vision-Based Gesture-Controlled Drone
This project augments a drone's computer vision capabilities and allows gesture control using a Jetson Nano's computational power. In Guided Mode, the system transmits to the drone's flight controller the output of the gesture control system that currently supports a few essential commands. Pose Classification Kit is the deep learning model employed, and it focuses on pose estimation/classification applications toward new human-machine interfaces. This trained model has been tested on datasets that simulate less-than-ideal video with partial inputs, achieving high accuracy and low inference times.
AI-Powered Shop Defense Robot
I wanted a fun way to keep the kids out of the shop. Made a defense system using a Rudi-NX (a rugged system from ConnectTech containing a Jetson Xavier NX), a ZED 2 stereo camera from Stereolabs, a Kuka IIWA robot arm, and a hose. At the very least, they had fun.
Dart Score Detector
This application uses an SSD-Mobilenet neural network for object detection to automatically calculate the score in a game of darts. The application detects the Bull (the dartboard's center) and arrows placed on the dartboard. One challenge with SSD-Mobilenet was determining the score accurately since there are 61 different patterns of dart scores, which are combinations of numbers 1-20 and multiples (Single, Double, Triple) + Bull. To overcome this, the project uses an original dataset for the trained model to estimate the scores more accurately.
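For illustration, the score for a detected arrow can be derived from standard dartboard geometry once the detector yields the arrow tip's offset from the Bull. This sketch assumes coordinates in millimetres relative to the detected Bull; the ring radii are standard competition board dimensions, not values taken from the project:

```python
import math

# Sector values clockwise from the top of a standard dartboard
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def dart_score(x, y):
    """Score for an arrow tip at (x, y) mm relative to the detected Bull."""
    r = math.hypot(x, y)
    if r <= 6.35:
        return 50                       # inner bull
    if r <= 15.9:
        return 25                       # outer bull
    if r > 170.0:
        return 0                        # off the board
    # angle measured clockwise from the top (the 20 sector), 18 degrees per sector
    angle = math.degrees(math.atan2(x, y)) % 360
    base = SECTORS[int(((angle + 9) % 360) // 18)]
    if 99.0 <= r <= 107.0:
        return 3 * base                 # triple ring
    if 162.0 <= r <= 170.0:
        return 2 * base                 # double ring
    return base
```

Enumerating singles, doubles and triples of 1-20 plus the two bull values gives exactly the 61 distinct score patterns mentioned above.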
Your Dingo Mobile Robot can now fetch beer and deliver it to you at home. It receives commands via an IFTTT-connected Google Assistant and a destination location is sent to ROS running in a Rudi-NX Embedded System with Jetson Xavier NX. An IMU and 2D lidars help navigate the planned path and a Gen3 lite robot arm opens the fridge door which is localized using aruco markers.
Sheepdog Whistle Neural Network
Control this robot in the same way that shepherds influence their sheepdogs! This project explores a whistle control mechanism for a custom-built Jetbot powered by Jetson Nano and a Storm32 motor controller board. A PyTorch neural net is trained with the sounds of different whistles or click sounds represented spectrographically as images. Live predictions against this trained model are interpreted as sequences of commands sent to the bot so it can move in different directions or stop.
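To illustrate the spectrographic representation, here is a minimal pure-Python short-time DFT that turns raw audio samples into a time-frequency grid (effectively a grayscale image); the real project would use an audio library for this step and feed the resulting images to PyTorch:

```python
import cmath
import math

def spectrogram(samples, frame_size=64, hop=32):
    """Naive short-time DFT: each row is one time frame, each column the
    magnitude of one frequency bin, i.e. one pixel row of the image."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        bins = []
        for k in range(frame_size // 2):    # real input: half the spectrum suffices
            acc = sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / frame_size)
                      for n in range(frame_size))
            bins.append(abs(acc))
        frames.append(bins)
    return frames

# a pure tone with 8 cycles per 64-sample frame lights up frequency bin 8
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = spectrogram(tone)
loudest_bin = max(range(32), key=lambda k: spec[0][k])
```

A whistle sweep would trace a curve across the bins over time, which is the visual pattern the classifier learns to recognize.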
Project of the Month August 2021
Home automation with Jetson and Deepstack
Home Assistant custom component for Deepstack object detection. Deepstack is a service which runs in a Docker container and exposes various computer vision models via a REST API. Deepstack object detection can identify 80 different kinds of objects, including people, vehicles and animals. Alternatively, a custom object detection model can be used. There is no cost for using Deepstack and it is fully open source. To run Deepstack you will need a machine with 8 GB RAM, or an NVIDIA Jetson.
Autonomous 1/10th car with ROS
G Troulis, A Dassier, D Gulati, M Arsani, 2021SpringTeam
This Jetson Nano-based project is capable of driving a 1/10 scale autonomous car on a real or simulated track using a ROS package built with OpenCV. The vehicle can follow yellow lines and stay within lanes delineated by two white lines, detected in images from a calibrated camera. Adafruit's ServoKit is used to control the car's physical functions, and CV Bridge helps interface between ROS and OpenCV. The project includes a PCB designed in KiCad that arranges WS2812B individually addressable RGB LEDs in a rectangle underneath a Jetson Nano to "give it a swank gaming-PC aesthetic".
Fingers Gesture Robot Control
With this project, control a Cobotta robot arm managed via Isaac SDK through the use of finger gestures coming from a USB camera and detected by a Resnet18 deep neural network. The output of the neural network is used to command pre-stored positions (in joint space) to the robotic arm. The software is connected to both a simulated environment running in Isaac Sim as well as the physical robot arm.
Openpilot Advanced Driver Assistance System
Use a Jetson Xavier NX and an Arducam IMX camera mounted on a car's dashboard to run dragonpilot, an open source driver assistance system based on openpilot. It supports adaptive cruise control, automated lane centering, forward collision warning and lane departure warnings, while alerting distracted or sleeping users.
This autonomous robot is powered by 6 planetary geared motors, and its design is based on the rocker-bogie mechanism employed by NASA/JPL for interplanetary rovers. Controlled by a Jetson Nano 2GB, the robot uses 2 camera sensors (front and back) for navigation and weeding. Autonomous navigation through crop lanes is achieved using a probabilistic Hough transform in OpenCV, and crop and weed detection is powered by tiny-YOLOv4.
Control a Personal Robot Assistant with Eye Tracking
I created a personal robot assistant that can be easily controlled with eye movements. This robot can take over a caretaker's responsibilities while keeping the person it cares for safe. For example, it can pick up and give medicine, feed, and provide water to the user; sanitize the user's surroundings; and keep a constant check on the user's wellbeing. It is able to drive in any direction, rotate its crane, raise its arm over high surfaces or lower it under low surfaces, and finally grasp objects. And it is fully controllable by just the user's gaze!
Project of the Month June 2021
The Spaghetti Detective
The Spaghetti Detective (TSD) is an AI-powered 3D printer remote management and monitoring tool for detecting 3D printing failures. Originally, this open-source project ran on general-purpose PCs and NVIDIA GPU VMs, and in response to the interest in the community, it now also runs on Jetson Nano (4GB). TSD runs a super fast detection model built with YOLO.
Space Fighters Rocket Game with Jetson Nano
In this AI-powered game, use hand gestures to control a rocket's position and shooting, and destroy all the enemy space ships. The game client is built on the pygame library and MQTT. Once you start the main.py script on your laptop and the server running on your Jetson Nano, play by using a number of pretrained hand gestures to control the player.
Jetbot Road Following and Collision Avoidance tasks
Combine optimized Road Following and Collision Avoidance models to enable the Jetbot to move freely around the track while avoiding collisions with obstacles. The combined models allow the Jetbot to follow a specific path on the track and, at the same time, avoid obstacles that come in its way in real time by bringing the Jetbot to a complete halt.
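One way to combine the two model outputs is a simple per-frame arbitration: the collision model gets veto power over the road-following steering. A hypothetical sketch (thresholds and throttle values are illustrative, not the project's actual code):

```python
def drive_command(steering_angle, blocked_prob, blocked_threshold=0.5):
    """One frame of arbitration between the two models.

    steering_angle: road-following model output in [-1, 1] (left..right)
    blocked_prob:   collision-avoidance model's probability the path is blocked
    """
    if blocked_prob >= blocked_threshold:
        # obstacle ahead: bring the Jetbot to a complete halt
        return {"throttle": 0.0, "steering": 0.0}
    # path clear: keep following the track
    return {"throttle": 0.3, "steering": steering_angle}
```

Each camera frame is fed to both models, and the resulting command is sent to the motor driver.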
The spread of COVID-19 around the world has had many consequences, and the photos you casually take with your smartphone are no exception: photos taken since the spread of COVID-19 show family and friends wearing masks, and now it is difficult to go out without one. A mask is important to prevent infection and transmission of COVID-19, but on the other hand, wearing a mask makes it impossible for AI to recognize your face. And it's not just the AI. MaskEraser uses a Jetson Developer Kit and deep learning to automatically remove only the masked portions of faces detected in a webcam's video feed. The removed parts are then predicted and drawn in by the AI's imagination.
CudaCam runs on an NVIDIA Jetson Nano, giving your home or small office a bespoke, well-filtered AI camera event generator and recording appliance on a budget. The neighbourhood cats, dogs and other more interesting wildlife become much easier to see. It can also record all incoming video in case something goes down, and uses a very network-efficient RTSP proxy so that you can do all of the above plus live monitoring with something like VLC media player.
Project of the Month May 2021
The MaVIS (Machine Vision Security) system sends real-time email notifications when it detects humans in visual scenes, in order to alert property owners and identify and provide records of potential intrusions. A camera on board the Jetson Nano Developer Kit monitors the scene and uses DeepStream SDK for the object detection pipeline. Data is processed using AWS Lambda functions, and users can view images and video of the detected moment, hosted on Amazon Web Services RDS.
Real Super Resolution with ncnn on Jetson Nano
Real SuperResolution (RealSR) on the Jetson Nano. RealSR is an award-winning deep-learning algorithm which enlarges images while maintaining as much detail as possible. Blurred areas are smoothed out while high-detail and contrast areas are enlarged with sharp edges. This implementation uses Vulkan drivers and executable files based on ncnn, which do not need to be preinstalled.
Project of the Month April 2021
Portable Neuroprosthetic Hand with Deep Learning-based Finger Control
J Nguyen Ph.D., Prof Zhi Yang's lab, Prof Qi Zhao's lab, Dr. E Keefer, Dr. J Cheng
This portable neuroprosthetic hand features a deep learning-based finger control neural decoder deployed on Jetson Nano. It is a self-contained unit with real-time control of individual finger movements. The system was evaluated on a transradial amputee using peripheral nerve signals from implanted electrodes, with a finger control accuracy of 95-99% and latency of 50-120ms.
Posture Analysis Application using Jetson Nano
Y Özkan, M Sami Ertekin, Dr. Y Akgul. (Gebze Technical University, BeCure Global Gmbh)
An OpenPose-based program for posture analysis. The program watches the patient's movements until they reach the right position, then saves the new body outline and angle values using Jetson Nano. The server.py script can be used on any Developer Kit, which must have at least one camera integrated. The client.py script runs on your personal computer, from which you can remotely control all operations on the Kit.
Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis
D Seichter, M Köhler, B Lewandowski, T Wengefeld, H-M Gross
Our network architecture for efficient scene analysis ESANet enables real-time semantic segmentation with up to 29.7 FPS on Jetson AGX Xavier. [...] ESANet achieves a mean intersection over union of 50.30 and 48.17 on [indoor datasets NYUv2 and SUNRGB-D]. Our models are trained with PyTorch, [...] exported to ONNX [and] converted to TensorRT engines. During network design, we [...] only use operations [...] supported and highly optimized by TensorRT, [enabling] up to 5× faster inference compared to pure PyTorch. ESANet is well suited as a common initial processing step in a complex system for real-time scene analysis on mobile robots.
The main idea is to implement a prototype AI system that can describe in real time what the camera observes. This project implements automatic image captioning using the latest TensorFlow on a Jetson Nano edge computing device. A hybrid deep neural network provides captioning of each frame in real time using a simple USB camera and the Jetson Nano.
Monitoring with Jetson Nano
An example development repository for using NVIDIA Jetson Nano or Xavier as a health monitor using computer vision. It showcases OpenPose, face recognition and emotion analysis (all GPU code) running in real time on the Jetson Nano platform.
Pose Estimation on Jetson with OpenPifPaf
TensorRT OpenPifPaf Pose Estimation is a Jetson-friendly application that runs inference using a TensorRT engine to extract human poses. The provided TensorRT engine is generated from an ONNX model exported from OpenPifPaf version 0.10.0 using ONNX-TensorRT repo.
Transform any wall or surface into an interactive whiteboard using an ordinary RGB camera, your hand and Jetson. This project crops the captured images from the camera to identify the user's hands using a YOLO deep neural network. Once a hand is detected, the cropped image of the hand is fed to a Fingertip Detector model in order to find fingertip coordinates, which then interact with the whiteboard. Works best on simple dark/light surfaces. This demo runs on Jetson Xavier NX with JetPack 4.4, and is compatible with Jetson Nano and Jetson TX2.
MaskCam is a prototype reference design for a Jetson Nano-based smart camera system that measures crowd face mask usage in real-time, with all AI computation performed at the edge. MaskCam detects and tracks people in its field of view and determines whether they are wearing a mask via an object detection, tracking, and voting algorithm. It uploads statistics (not videos) to the cloud, where a web GUI can be used to monitor face mask compliance in the field of view. It saves interesting video snippets to local disk (e.g., a sudden influx of lots of people not wearing masks) and can optionally stream video via RTSP.
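The voting step can be illustrated with a majority vote over each tracked person's per-frame detections; this is a hypothetical sketch of the idea, not MaskCam's exact algorithm:

```python
from collections import Counter

def mask_compliance(track_votes):
    """Majority-vote each tracked person's per-frame labels, then aggregate.

    track_votes maps a track id to the detector's per-frame labels for that
    person ("mask" / "no_mask"); voting smooths out noisy single-frame calls.
    """
    final = {tid: Counter(votes).most_common(1)[0][0]
             for tid, votes in track_votes.items()}
    total = len(final)
    masked = sum(1 for label in final.values() if label == "mask")
    return {"people": total, "masked": masked,
            "compliance": masked / total if total else 0.0}

stats = mask_compliance({
    1: ["mask", "mask", "no_mask", "mask"],   # noisy detections, majority mask
    2: ["no_mask", "no_mask", "no_mask"],
    3: ["mask", "mask"],
})
```

Only aggregate statistics like these, not the video itself, would be uploaded to the cloud dashboard.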
A-Eye for the Blind
Help visually-impaired users keep themselves safe when travelling around. The hardware interface passes pictures of the user's surroundings in real time through a 2D-image-to-depth-image machine learning model. The software analyzes the depths of objects in the images to provide users with audio feedback if their left, center, or right is blocked. Images and timestamps are uploaded to a secured Firebase database so that friends and family can view its website for live images and check-up on them to see if they're okay. The setup uses a Jetson Nano 2GB, a fan, a Raspberry Pi Camera V2, a wifi dongle, a power bank, and wired headphones.
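The left/center/right analysis might look like the following sketch, assuming the depth model yields a 2D grid of per-pixel distances in metres (the zone split and threshold are illustrative choices, not the project's code):

```python
def blocked_zones(depth, threshold=1.0):
    """Flag which of left/center/right is blocked, given a 2D grid of
    per-pixel distances in metres from the 2D-image-to-depth model."""
    h, w = len(depth), len(depth[0])
    blocked = []
    for i, name in enumerate(["left", "center", "right"]):
        cols = range(i * w // 3, (i + 1) * w // 3)
        nearest = min(depth[r][c] for r in range(h) for c in cols)
        if nearest < threshold:
            blocked.append(name)    # audio feedback would announce this zone
    return blocked

# toy 2x6 depth map with one close obstacle in the middle third
depth = [[2.0, 2.0, 0.5, 2.0, 2.0, 2.0],
         [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]]
zones = blocked_zones(depth)
```

The returned zone names would then be spoken to the user through the wired headphones.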
Track and Count People with Jetson Nano
This repository provides a real-time people tracking and counting system. It detects people based on SSD-Mobilenetv1-coco and uses SORT to track and count.
Project of the Month March 2021
Robot Arm Playing Cornhole
Throw the perfect cornhole throw every time with Susan, a Kuka KR20 robot arm with an attached webcam. A Jetson AGX Xavier attached to Susan detects the ring around the board's hole using OpenCV, then calculates the angular position of the hole relative to the camera, its rough position in space and the throw the arm needs to make. The Jetson communicates with Susan over EthernetKRL in order to make the throw.
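Recovering the hole's angular position from its detected pixel coordinates follows the standard pinhole camera model; a sketch with hypothetical intrinsics (in practice cx, cy, fx, fy come from camera calibration):

```python
import math

def bearing_to_target(u, v, cx, cy, fx, fy):
    """Angles (in degrees) from the optical axis to an image point (u, v).

    (cx, cy) is the principal point and (fx, fy) the focal length in pixels;
    the pinhole model gives tan(angle) = pixel offset / focal length.
    """
    yaw = math.degrees(math.atan2(u - cx, fx))    # + means target to the right
    pitch = math.degrees(math.atan2(cy - v, fy))  # + means target above center
    return yaw, pitch

# a detection 500 px right of the principal point with fx = 500 px sits at 45 degrees
yaw, pitch = bearing_to_target(1460, 540, 960, 540, 500.0, 500.0)
```

Combined with the known ring diameter, the apparent size of the detected ring also yields a rough range estimate, which is enough to plan the throw.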
BrowZen correlates your emotional states with the websites you visit to give you actionable insights about how you spend your time browsing the web. A webcam attached to a Jetson Xavier NX captures periodic images of the user as a background process. These images are classified by a VGG19 convolutional neural network pre-trained to recognize emotional states. These observations are correlated with browsing history and presented in a web dashboard as a simple way to visualize, on average, how each site one visits impacts their emotional state.
Helmet Detection with Deepstream
Intelligent video analytics solution for helmet detection using DeepStream SDK. This project is a proof of concept, trying to show that road surveillance for the safety of motorcycle and bicycle riders can be done with a surveillance camera and an onboard Jetson platform. The helmet detection application consists of an Intelligent Video Analytics pipeline powered by DeepStream and an NVIDIA Jetson Xavier NX.
Project of the Month February 2021
Dragon-eye is a real-time electronic judging system with Jetson Nano for F3F, a radio-controlled aeromodelling sport using slope-soaring glider planes. The video stream from a camera is sent to Dragon-eye, which identifies the gliders using computer vision and continuously tracks their flight. When a tracked aircraft crosses the central vertical line, Dragon-eye triggers a signal to indicate that a lap has been completed.
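Line-crossing detection can be reduced to a sign change in the tracked position's horizontal offset from the center line; a hypothetical sketch of the lap trigger:

```python
class LapCounter:
    """Fire a signal each time the tracked glider crosses the central line."""

    def __init__(self, center_x, on_cross):
        self.center_x = center_x
        self.on_cross = on_cross      # e.g. sound the lap-completed beep
        self.last_x = None

    def update(self, x):
        """Feed the tracker's horizontal glider position for each frame."""
        if self.last_x is not None:
            # opposite signs on either side of the line => a crossing happened
            if (self.last_x - self.center_x) * (x - self.center_x) < 0:
                self.on_cross()
        self.last_x = x

crossings = []
counter = LapCounter(320, lambda: crossings.append("lap"))
for x in [100, 250, 400, 500, 310, 200]:   # two crossings of x = 320
    counter.update(x)
```

Running this per tracked glider keeps each plane's lap count independent.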
An autonomous mobile robot project using Jetson Nano, implemented in ROS2, currently capable of teleoperation through websockets with live video, use of Intel RealSense cameras for depth estimation and localization, 2D SLAM with Cartographer and 3D SLAM with RTAB-Map. The base platform is the XiaoR Geek Jetbot, modified to include a wide-angle camera as well as the Intel RealSense D435 and T265.
Food Container Identifier
Speech description of food containers for the blind or visually impaired people using Jetson Nano. Re-train a ResNet-18 neural network with PyTorch for image classification of food containers from a live camera feed and use a Python script for speech description of those food containers.
Hermes — Wildfire Detection
A computer vision application powered by NVIDIA DeepStream 5.0 and Ryze Tello to detect wildfires using YOLO. Hermes consists of two parts: an Intelligent Video Analytics pipeline powered by DeepStream and an NVIDIA Jetson Xavier NX, and a reconnaissance drone, for which I have used a Ryze Tello. This project is a proof of concept, trying to show that surveillance and mapping of wildfires can be done with a drone and an onboard Jetson platform.
Project of the Month January 2021
Autonomous navigation for blind people, running on a Jetson Nano edge device. Haptic touch is used to convey information to the blind person, keeping their other senses, such as hearing, which blind people generally develop very well, free from being occupied. This project cost about Rs 10,000, which is less than USD $200.
DeepWay v1 was based on Keras; v2 employs PyTorch.
Self Driving COVID-19 Detecting Robot
P Kim, H Jeon, T Park, Y Kim, Team EOEO
We made a self-driving robot that patrols inside [buildings] and detects people with high temperatures or without masks, [in order to] diagnose the possibility of COVID-19 in advance. If [the robot finds] someone who's not wearing a mask, [it] will warn them until they wear it properly and then it will say thank you. Hardware comprises a Jetson AGX Xavier, 3D and 2D LiDARs, one thermal camera, two cameras and a Raspberry Pi monitor.
We propose YolactEdge, the first competitive instance segmentation approach that runs on small edge devices at real-time speeds. Specifically, YolactEdge runs at up to 30.8 FPS on a Jetson AGX Xavier with a ResNet-101 backbone on 550x550 resolution images. It produces a 3-5x speed up over existing real-time methods while producing competitive mask and box detection accuracy. There are two key aspects that make our model fast and accurate on edge devices: (1) TensorRT optimization while carefully trading off speed and accuracy, and (2) a novel feature warping module to exploit temporal redundancy in videos.
We developed a flight controller and vision-based state estimator for controlling quadrotor drones after losing a motor. The state estimator (Visual Inertial Odometry) uses FAST feature detector and KLT feature tracker as frontend and OKVIS as the backend. [Despite] fast yaw spinning at 20rad/s after motor failure, the vision-based estimator is still reliable. [Testing] an event-based camera as the visual input, [we show that it outperforms] a standard global shutter camera, especially in low-light conditions.
Defect Detection with SSD Network in Ultrasonic Inspection
This project aims to develop a system using convolutional neural networks (CNNs) to automatically detect defects in composite laminate materials, in order to increase ultrasonic inspection accuracy and efficiency. For inspectors, ultrasonic testing is a labor-intensive and time-consuming manual task; this approach improves their efficiency and accuracy and reduces their workload when interpreting ultrasonic scanning images to identify defects. Discontinuities and defects in materials usually do not have specific shapes, positions or orientations. A Jetson TX2 Developer Kit runs an image analysis function in real time using a Single Shot MultiBox Detector (SSD) network and computer vision trained on images of delamination defects. The SSD network can also evaluate components and specimens inspected with other methods, such as thermography inspection.
IKNet: Inverse Kinematics Neural Networks for Open Manipulator X
IKNet is an inverse kinematics estimator built on simple neural networks. IKNet can be trained and tested on Jetson Nano 2GB, the Jetson family or a PC, with or without an NVIDIA GPU. Training needs 900MB of GPU memory under default options. This repository also contains training and test datasets collected by manually moving the 4-DoF manipulator ROBOTIS Open Manipulator X.
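Such a dataset pairs end-effector poses with the joint angles that produced them, which can be generated via forward kinematics. A simplified 2-link planar sketch with made-up link lengths (the real Open Manipulator X has 4 DoF and was sampled by moving the arm by hand):

```python
import math
import random

def forward_kinematics(theta1, theta2, l1=0.13, l2=0.12):
    """End-effector (x, y) of a 2-link planar arm; link lengths are made up."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def make_dataset(n, seed=0):
    """Sample joint angles and record (pose, angles) pairs; a network trained
    on these pairs learns the inverse mapping from pose back to joint angles."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        t1 = rng.uniform(-math.pi / 2, math.pi / 2)
        t2 = rng.uniform(-math.pi / 2, math.pi / 2)
        data.append((forward_kinematics(t1, t2), (t1, t2)))
    return data

dataset = make_dataset(900)
```

A small fully connected network regressing angles from poses on such pairs is the essence of the IKNet approach.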
A Bricq, L Mussa, L Jacqueroud
Robottle is an autonomous robot that is able to collect bottles in a random environment with obstacles by constructing a map of its environment using SLAM with an RPLidar and detecting bottles using a deep neural network running on the GPU of a Jetson Nano board. Robottle was designed for an academic competition at EPFL: for 10 minutes, the robot must autonomously collect bottles in an arena filled with bottles and bring them back to one corner of the arena, the recycling area.
Smart Face Shield with Jetson Nano
I made a face shield deployment system using a Jetson Nano 2GB, two SG90 servos, a PCA9685 servo driver, a face shield and a 3D-printed custom face shield frame. Thanks to the Jetson community and other developers, I could create a simple program. The current version of the code is tested and works well for short runs.
Using a pose estimation model, an object detection model built with Amazon SageMaker JumpStart, a gesture recognition system and a 3D game engine written in OpenGL running on a Jetson AGX Xavier, I built Griffin, a game that lets my toddler use his body to fly as an eagle in a fantasy 3D world.
Jetson and DeepStream Integration with Azure IoT Central
This project contains a set of IoT PnP apps to enable remote interaction and telemetry for DeepStream SDK on Jetson devices for use with Azure IoT Central. The
nvidia-jetson-dcs application accomplishes this using a device connection string for connecting to an Azure IoT Hub instance, while the
nvidia-jetson-dps application leverages the Azure IoT Device Provisioning Service within IoT Central to create a self-provisioning device.
YOLOv4 with TensorRT engine
YOLOv4 object detector using a TensorRT engine, running on Jetson AGX Xavier with ROS Melodic, Ubuntu 18.04, JetPack 4.4 and TensorRT 7. To optimise models for deployment on Jetson devices, models were serialised into TensorRT engine files for inference. As ROS is one of the most popular middlewares used for robots, this project performs inference on camera/video input and publishes detections in ROS-supported message formats, so anyone can easily modify and use this package in their own projects.
Real-time Auto License Plate Recognition with Jetson Nano
This repository provides a detailed guide on how to build a real-time license plate detection and recognition system. The implementation reaches 40 FPS on Jetson Nano. The license plate dataset for this repository was collected in Vietnam. You can train your own model to detect and recognize number plates.
Self-driving AI toy car
Self-driving AI toy car built with Jetson Nano. It is currently capable of path following, stopping and taking correct crossroad turns, using an end-to-end CNN system built in PyTorch.
Project of the Month December 2020
I used transfer learning to retrain ssd-mobilenet to recognise my hand gestures so I could drive a large robot dog without a controller. This works pretty well if the confidence rating is set high enough, and there is also some filtering on the output to smooth out the dog’s movement. I’m just using 5 GPIO pins on a Jetson Nano to control the existing dog hardware.
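The gesture-to-command step above can be sketched as a confidence gate plus a sliding-window majority vote; a minimal illustration where the threshold, window size and gesture names are assumptions, not the project's actual values:

```python
from collections import Counter, deque

class GestureFilter:
    """Smooth noisy per-frame gesture detections (illustrative sketch).

    A detection is accepted only if its confidence clears the threshold;
    the emitted command is the most common recent accepted gesture, which
    smooths out single-frame misclassifications before driving the robot.
    """

    def __init__(self, threshold=0.8, window=5):
        self.threshold = threshold          # assumed confidence cutoff
        self.recent = deque(maxlen=window)  # sliding window of gestures

    def update(self, gesture, confidence):
        if confidence >= self.threshold:
            self.recent.append(gesture)
        if not self.recent:
            return None  # nothing confident seen yet
        # Majority vote over the window decides the command.
        return Counter(self.recent).most_common(1)[0][0]
```

A single spurious "left" between confident "forward" detections would be outvoted rather than jerking the dog sideways.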
Really Useful AI Robot
A reliable, robust ROS robot for ongoing robot development, using NVIDIA deep learning models to do intelligent things. Eventually, it will have a linear body and arm which travels up and down its utility stick. The robot runs ROS Melodic on a Jetson Xavier NX Developer Kit running Ubuntu 18.04. The
rur_description ROS packages are installed on the robot, and everything is launched with the
Real-Time 3D Traffic Cone Detection for Autonomous Driving
A Dhall, D Dai, L Van Gool, AMZFormulaStudent
Considerable progress has been made in the semantic scene understanding of road scenes with monocular cameras, although it generally focuses on certain specific classes such as cars, bicyclists and pedestrians. This work investigates traffic cones, an object category crucial for traffic control in the context of autonomous vehicles. 3D object detection using images from a monocular camera is intrinsically an ill-posed problem. We propose a pipelined approach [...] which runs efficiently on the low-power Jetson TX2, providing accurate 3D position estimates and allowing a race car to map and drive autonomously on an unseen track marked by traffic cones. With the help of robust and accurate perception, our race car won both Formula Student Competitions held in Italy and Germany in 2018, cruising at a top speed of 54 km/h on our driverless platform "gotthard driverless".
A.I. Activated Wolverine Claws
A.I. Activated Wolverine Claws - quite a few YouTubers have made mechanical extending wolverine claws, but I wanted to make some Wolverine Claws that extend when I feel like it - just like in the X-Men movies. I've trained a deep learning neural network on an NVIDIA Jetson Nano with Jetson Inference to recognise when I'm pulling the right face and activate the cosplay Wolverine Claws. Is this the future of cosplay? You can decide!
Simple A.I. Demo with Jetson Nano
I tried out training a really simple AI machine learning model using transfer learning on the NVIDIA Jetson Nano with Jetson Inference. I used a very minimal dataset of images captured and trained using scripts provided by NVIDIA, and wrote a simple script to make the robot look for high-contrast markers in turn.
Energy Prediction System
Energy prediction system with a hybrid neural network (CNN-LSTM) on a Jetson Nano. In this project we build an active power meter with an Arduino Uno. The data is sent to the Jetson with the Python script
arduino_serial.py, which establishes the communication between the Jetson and the Arduino. The second script,
neural_training.py, starts the training for the hybrid neural network and visualizes the data. Use
visualize.py to visualize the predictions of the .h5 model saved after training.
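The Arduino-to-Jetson link boils down to parsing line-based serial readings. A minimal sketch, assuming a hypothetical `timestamp_ms,voltage,current` CSV format per line (the actual protocol used by arduino_serial.py may differ):

```python
def parse_power_line(line):
    """Parse one reading from the Arduino's serial stream.

    Assumes a hypothetical 'timestamp_ms,voltage,current' CSV line;
    returns (timestamp_ms, active_power_watts), where active power
    is approximated as voltage * current.
    """
    ts, voltage, current = line.strip().split(",")
    return int(ts), float(voltage) * float(current)
```

On the Jetson side, each line read from the serial port would be passed through this parser before being appended to the training dataset.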
A Jetson-based DeepStream application to identify areas of high risk through intuitive heat maps. In other words, a heatmap is generated continuously, representing regions where faces have been detected recently, letting us see activity accumulate over time. The application is containerized and uses DeepStream as the backbone to run TensorRT-optimized models for maximum throughput. Built on top of the deepstream-imagedata-multistream sample app.
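The "recently detected" behavior can be modeled as an exponentially decaying accumulator over a grid; a minimal sketch with an assumed grid layout and decay factor (the project itself uses DeepStream and TensorRT for the detection side):

```python
def update_heatmap(heatmap, detections, decay=0.95):
    """Decay the heatmap, then add new face detections.

    heatmap: 2D list of floats (grid cells over the camera frame);
    detections: iterable of (row, col) cells where faces were seen
    this frame. The per-frame decay makes old detections fade, so
    the map reflects *recent* activity rather than all history.
    """
    for row in heatmap:
        for j in range(len(row)):
            row[j] *= decay
    for r, c in detections:
        heatmap[r][c] += 1.0
    return heatmap
```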
Project of the Month October 2020
Advanced driver-assistance system using Jetson Nano
An ADAS system that uses Jetson Nano as the hardware with four main functions: forward collision warning, lane departure warning, traffic sign recognition and overspeed warning. I trained and optimized three deep neural networks to run simultaneously on Jetson Nano (CenterNet-ResNet18 for object detection, U-Net for lane line segmentation and ResNet-18 for traffic sign classification).
Human Pose Estimation & Posture Corrector App
This app uses pose estimation to help users correct their posture by alerting them when they are slouching, leaning, or tilting their head down. You'll learn how to set up the Human Pose model and how to deploy the Posture Corrector app on the NVIDIA Jetson Nano.
Fire Detecting Drone
An autonomous drone to combat wildfires running on an NVIDIA Jetson Nano Developer Kit. This project uses a camera and a GPU-accelerated Neural Network as a sensor to detect fires.
Project of the Month September 2020
DR-SPAAM: Person Detection in 2D Range Data
D Jia, A Hermans, B Leibe
DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data, to appear in IROS'20. We've built a deep learning-based person detector from 2D range data. It runs on a Jetson AGX at 20+ Hz, or on a laptop with an RTX 2080 at 90+ Hz. Check out the links below for more information.
DeepStream ❤️ OSC
I'm using the DeepStream SDK for Jetson Nano as an instrument to sonify and visualize detected objects in real time. My idea was to turn public spaces into interactive, playable places where I can use people or vehicles as input to make performances or installations. Any software that accepts OSC as input can use this data to control its parameters: sound or visual programming frameworks, video games, emulators, whatever you can imagine. It is also possible to translate OSC to HID or MIDI messages to extend the range of software DeepStream can communicate with.
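OSC itself is a simple binary format: NUL-terminated strings padded to a 4-byte boundary, a type-tag string, and big-endian 32-bit floats. A stdlib-only sketch of encoding a message (the address and argument layout here are illustrative, not the project's actual schema):

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float arguments.

    OSC strings are NUL-terminated and padded to a multiple of 4
    bytes; float args are big-endian 32-bit. The resulting bytes can
    be sent over UDP to any OSC-aware synth or visual tool.
    """
    def pad(s):
        b = s.encode() + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    msg = pad(address) + pad("," + "f" * len(floats))
    for f in floats:
        msg += struct.pack(">f", f)
    return msg
```

For example, a detection could be published as `osc_message("/person", x, y)` and sent with `socket.sendto` to the listening instrument.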
Mommybot: Sleeping Assistant
J Hyuk, S Park
These days, more and more people are suffering from sleep deprivation. Mommybot is a system using Jetson Nano that helps manage a user's sleeping hours. Mommybot has four functions: (1) detect with a camera and register the time of different user events, (2) determine whether a user is asleep using TensorFlow, (3) suggest optimal bedtime hours with sklearn, based on predictions from previous sleeping habits, and (4) wake up the user according to a preferred sleeping schedule.
Project of the Month August 2020
DBSE Monitor: Drowsiness, Blindspot & Emotion Monitor
L Arevalo Oliver, V Altamirano Izquierdo, A Sanchez Gutierrez
Drowsiness, emotion and attention monitor for driving, which also detects objects in blindspots via computer vision. A Jetson Nano takes care of running the two PyTorch-powered computer vision applications, using a plethora of libraries to perform their tasks. Two webcams serve as the main sensors: PyTorch identifies faces and eyes for one application and objects for the other, and sends the information through MQTT in order to emit a sound or show an image on the display. We also added geolocation and crash detection with SMS notifications through Twilio, using an accelerometer.
Fever Control with Jetson Nano & Lepton3
A useful application for the COVID-19 era: monitor human body temperature and issue alerts in case of fever. This year, the year of COVID-19, I decided to get this project out of the drawer and adapt it to the NVIDIA Jetson Nano.
Leela Chess Zero
As a chess player, I usually find myself using a chess engine for game analysis or opening preparation. Recently, I’ve noticed that chess engines have grown to be super powerful. Consider Leela Chess Zero (aka lc0), the open-source implementation of Google DeepMind’s AlphaZero. It has played so many amazing games that it’s hard for me to pinpoint the best one! This video demonstrates how to load a frontend UCI engine in ChessBase and connect it to a Leela Chess Zero engine running backend in a Nvidia Jetson device (which can be either Jetson Xavier NX or Jetson AGX Xavier).
RB-0: Jetson Nano Rover
RB-0 is a hobby-sized rover that uses the same suspension method as NASA's newer differential-bar rovers. It uses a Jetson Nano, a camera, 15 servos, a Circuit Playground Express, and Wi-Fi for lots of fun with maneuvering and running AI. It can climb small obstacles, move its camera in different directions, and steer all 6 wheels. I wanted to make it open source so anyone can have fun and learn from it!
DC-GAN Guitar Effector
Jetson Nano DC-GAN Guitar Effector is a Python app that modifies and adds effects to your electric guitar's raw sound input in real time. The Jetson module captures the instrument's sound through a Roland DUO-CAPTURE mk2 audio interface and outputs the resulting audio of the DC-GAN inference. The one-dimensional pix2pix inference model is optimized and run on TensorRT at FP16 precision.
Project of the Month June 2020
AI device for mass fever screening. I combine thermal and visible-spectrum cameras to detect people in the scene and measure their skin temperature in a contactless manner, automatically detecting people with no need for a human operator. You can test multiple people at a time, on the fly, without interrupting the flow. I decided to use the Raspberry Pi Camera Module v2 because it works out of the box with the NVIDIA Jetson Nano. In my first approach, I used a Single Shot MultiBox Detector trained on the COCO dataset, which lets me detect objects across 91 classes. The algorithm runs on Jetson Nano's embedded GPU at 9 FPS.
JetScan is a smart, fast and metrically accurate GPU-accelerated 3D scanner built with a Jetson Nano and an Intel depth sensor for instant 3D reconstruction. This system design makes on-the-go 3D scanning modules without external computing power affordable for any creator or maker around the world, giving users HD 3D models of scanned objects or environments instantly. Using RGB-D stereo mapping, render 3D models of people, objects and environments with JetScan.
Vision alerting system with IoT Edge, Azure Custom Vision and Jetson Nano
Create your own object alerting system running on an edge device. For this we will use an NVIDIA Jetson Nano, the Azure Custom Vision service and Azure IoT Edge. The goal is to process the camera frames locally on the Jetson Nano and only send a message to the cloud when the detected object hits a certain confidence threshold.
Safe Meeting keeps an eye on you during your video conferences, and if it sees your underwear, the video is immediately muted. A camera is connected to an NVIDIA Jetson Nano. This camera is positioned immediately next to a webcam that is used for video conferences, such that it captures the same region.
Due to the COVID-19 pandemic, people cannot drink outside and are looking for alternatives such as drinking with friends through video calls. Our team thought that enjoying time wisely with fun interaction is what people need. We focus on the problem that drinking through a video call provides visual and auditory elements, but no physical interaction. Also, since you are drinking alone, it is important to know your drinking status. The model is made with the TensorFlow Object Detection API. Once built, TensorRT can optimize it for real-time execution on Jetson Nano.
The Tale of the Bee-Saving Christmas Tree
We used 64 NVIDIA Jetson Nano Developer Kits to build the Jetson tree, with a total of 8,192 CUDA cores and 256 CPU cores. We'll use its power to analyze bee videos and investigate the decline of insects. At apic.ai, we believe technology can help us create a better understanding of nature. We analyse bee behavior such as motion patterns and pollen intake. Our monitoring system visually detects bees as they enter and leave their hives. Through their level of activity, mortality and food abundance we gain insights into the well-being of the insects and the plant diversity in the environment, enabling us to evaluate regional living conditions for insects, detect problems and propose measures to improve the situation.
youfork: a fully homemade ROS 2 mobile manipulator running on Jetson AGX Xavier. youfork is a mobile manipulator for home tidy-up, currently operated by teleoperation. All components are driven by ROS 2 Eloquent + Ubuntu 18.04 on the Jetson AGX Xavier.
Originally envisioned as a demonstrator for the Bosch AI CON 2019, the platooning system consists of two cars, a leading car and a following car. The leading car can be driven manually using a PS4 controller and the following car will autonomously follow the leading car. The system currently is also capable of Object Tracking, Velocity Estimation by Optical Flow Visual Odometry and Monocular Depth Estimation.
Project of the Month May 2020
Qrio: A Bot That Plays Videos for My Toddler
Use an object detection AI model, a game engine, Amazon Polly and the Selenium automation framework running on an NVIDIA Jetson Nano to build Qrio, a bot which can speak, recognise a toy and play a relevant video on YouTube.
Narwhal-AI: Ultrasonic Classifier
Listen, record and classify the sounds coming from a natural environment. Microphones capture audio data which is then processed using machine learning to identify the animal species, whether it be bird, bat, rodent, whale, dolphin or anything that makes a distinct noise. The key advantage over other existing technology is that the audio data is filtered at source, saving both disc space and human intervention. Previously, recordings could easily generate many hours of footage per day, consuming up to 5 GB per hour of disc space and adversely affecting the zoologist's golfing handicap and social life.
Deep Reinforcement Learning with JetBot
AI RC car agent using deep reinforcement learning on Jetson Nano. This software enables self-learning for your AI RC car in a matter of minutes. In the demo video, the JetBot does deep reinforcement learning in the real world using SAC (Soft Actor-Critic). The DRL process runs on the Jetson Nano. This project builds on a great post by Antonin Raffin.
Project of the Month April 2020
Smart Social Distancing
As a response to the COVID-19 pandemic, Neuralet released an open-source application to help people practice physical distancing rules in […] retail spaces, construction sites, factories, healthcare facilities, etc. […] Our approach uses […] edge AI devices such as Jetson Nano to track people in different environments and measure adherence to social distancing guidelines, and can give notifications each time social distancing rules are violated.
Deep Clean watches a room and flags all surfaces as they are touched for special attention on the next cleaning to prevent disease spread. […] A stereo camera detects the depth (z-coordinate) of an object of interest (e.g. a hand) in the video frame. OpenPose is used to detect hand location (x, y-coordinates). When a hand is at the same position and depth as another object in view (i.e. touching), that location is tracked.
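The touch test reduces to checking that the hand's (x, y) position from OpenPose and its z from the stereo camera coincide with a surface point. A minimal sketch with an assumed tolerance (units and threshold are illustrative, not the project's actual values):

```python
def is_touch(hand_xyz, surface_xyz, tolerance=0.03):
    """Flag a touch when hand and surface coincide in 3D.

    hand_xyz combines 2D pose estimation (x, y) with stereo depth (z).
    tolerance is in the same units as the coordinates (assumed metres
    here); a touch is declared when every axis agrees within it.
    """
    return all(abs(h - s) <= tolerance for h, s in zip(hand_xyz, surface_xyz))
```

Each flagged (x, y) location would then be tracked and rendered for attention on the next cleaning pass.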
NVIDIA / Hackster AI at the Edge Challenge 1st Place
Reading Eye For The Blind
Allows the reading-impaired to hear both printed and handwritten text by converting recognized sentences into synthesized speech. Place some text under the camera, toggle the power switch, and click the start button. Using the IAM Database, with more than 9,000 pre-labeled text lines from 500 different writers, we trained a handwritten text recognition model.
NVIDIA / Hackster AI at the Edge Challenge 1st Place
With MixPose, we are building a streaming platform to empower fitness professionals, yoga instructors and dance teachers through the power of AI. Instructors can teach from anywhere they feel comfortable, and users can watch the stream in the comfort of their own TV.
NVIDIA / Hackster AI at the Edge Challenge 1st Place
Nindamani the Weed Removal Robot
Nindamani is an AI-based mechanical weed removal robot, which autonomously detects and segments weeds from crops using artificial intelligence. All robot modules are natively built on ROS 2. Nindamani can be used in any early stage of crops for autonomous weeding.
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
Easy-to-implement and low-cost modular framework for complex navigation tasks. Visual-based autonomous navigation systems typically require visual perception, localization, navigation, and obstacle avoidance. We propose using a single RGB camera with techniques such as semantic segmentation with deep neural networks (DNNs), simultaneous localization and mapping (SLAM), path planning algorithms, and deep reinforcement learning (DRL) to implement the four functionalities mentioned above.
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
Bandwidth Reduction with Anomaly Detection
We experiment with visual anomaly detection to develop techniques for reducing bandwidth consumption in streaming IoT applications. There seems to be no avoiding the tradeoff of spending compute to save bandwidth, but we want to spend that compute intelligently by taking advantage of context. With visual anomaly detection, we stream ONLY infrequent anomalous images, exploring unsupervised methods that learn the context of a scene in order to filter redundant content from streaming video.
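The gating idea can be illustrated with a trivial stand-in anomaly score: stream a frame only when it deviates enough from a reference view of the scene. The project uses learned, unsupervised scores; the mean pixel difference and threshold below are assumptions for illustration only:

```python
def should_stream(frame, reference, threshold=0.1):
    """Gate a frame: stream it only if it looks anomalous.

    frame/reference: flat lists of pixel intensities in [0, 1].
    Mean absolute difference stands in for a learned anomaly score;
    frames below the threshold are considered redundant and dropped,
    saving bandwidth.
    """
    diff = sum(abs(a - b) for a, b in zip(frame, reference)) / len(frame)
    return diff > threshold
```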
NVIDIA / Hackster AI at the Edge Challenge 2nd Place
AIoT - Artificial Intelligence on Thoughts
Learn how to read in and signal-process brainwaves, build and train an autoencoder to compress the EEG data to a latent representation, use the k-means machine learning algorithm to classify the data to determine brain state, and use that information to control physical hardware! Along the way, pick up tips on creating GUIs and real-time graphics in Python!
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Tracked vehicle made with Lego Technic parts and motors, enhanced with LiDAR and controlled by a Jetson Nano board running the latest Isaac SDK. Issue voice commands and get the robot to move autonomously. Create missions: navigate and set where the tank should go. If the camera detects the target object, the tank will get closer and shoot it with... the camera. It'll just take a picture, no real weapons :)
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Deep Eye - DeepStream Based Video Analytics
Hardware platform combined with DeepLib, an easy-to-use but powerful Python library, and a Web IDE for rapid prototyping of video analytics projects with the Jetson Nano. It supports up to 2 MIPI CSI cameras, which are mounted on a rotating platform. The project consists of 3 main components:
NVIDIA / Hackster AI at the Edge Challenge 3rd Place
Clean Water AI
S Han, I Sotani, P Ma, N Wojcik, J Shenk
Clean Water AI is an IoT device powered by NVIDIA Jetson that classifies and detects dangerous bacteria and harmful particles. The system can run in real time, with cities installing IoT devices across different water sources and continuously monitoring water quality and contamination. We utilize the TensorFlow Object Detection API to detect the contaminants and WebRTC to let users check water sources the same way they check security cameras.
ActionAI: Custom Tracking & MultiPerson Activity Recognition
We introduce an IVA pipeline to enable the development and prototyping of AI social applications. ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example. ActionAI, a Jetson Nano, a USB camera and the PS3 controller's rich input interface make an ideal prototyping and data-gathering platform for human activity recognition, human-object interaction, and scene understanding tasks.
Momo is a native client that can distribute video and audio via WebRTC from browser-less devices, such as wearable devices or a Raspberry Pi. Using Jetson Nano's hardware encoder, it is possible to deliver 4K video at 30 fps to a browser with a delay of less than 1 second. Momo is released on GitHub as open source under the Apache License 2.0, and anyone can use it freely under the license. Try 4K/30fps video distribution over WebRTC with Momo!
Project of the Month February 2020
A Bokovoy, K Muravyev, K Yakovlev
ROS node for real-time FCNN-based depth reconstruction. The supported platforms are the NVIDIA Jetson TX2 and x86_64 PCs with GNU/Linux (aarch64 should work as well, but is untested).
Shoot Your Shot!
This computer vision booth analyzes users throwing darts from multiple cameras, scoring each dart before logging data to the cloud. To analyze the player's form, we use pose estimation to track body parts through a throwing session. This demo uses two cameras, one viewing the thrower and one viewing the dartboard, to track poses and dart placement.
Tipper predicts if a pitch will be in or out of the strike zone in real time. The batter sees a green or red light illuminate in their peripheral vision if the pitch will be in or out of the strike zone, respectively. A convolutional neural network running on an NVIDIA Jetson AGX Xavier rapidly classifies these images against a model built during the training phase of the project. If the images are classified as in the strike zone, a green LED on a pair of glasses (in the wearer's peripheral vision) is lit. Conversely, if the ball is predicted to be out of the strike zone, a red LED is lit.
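The LED logic is a simple threshold on the classifier output; a sketch with an assumed probability threshold (the project's actual decision rule and GPIO wiring may differ):

```python
def update_leds(in_zone_prob, threshold=0.5):
    """Map the classifier's output to the glasses' two LEDs.

    Returns an ('on'/'off', 'on'/'off') tuple for the (green, red)
    LEDs: green lit when the pitch is predicted in the strike zone,
    red lit when predicted out. The threshold is an assumed value.
    """
    in_zone = in_zone_prob >= threshold
    return ("on" if in_zone else "off", "off" if in_zone else "on")
```

In the real device, each returned state would drive a GPIO pin wired to the corresponding LED on the glasses.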
Project of the Month January 2020
Point-Voxel CNN for Efficient 3D Deep Learning
In our NeurIPS’19 paper, we propose Point-Voxel CNN (PVCNN), an efficient 3D deep learning method for various 3D vision applications. Here we show the 3D object segmentation demo which runs at 20 FPS on Jetson Nano. Note that the most efficient previous model, PointNet, runs at only 8 FPS. We also show the performance of 3D indoor scene segmentation with our PVCNN and PointNet on Jetson AGX Xavier. Remarkably, our network takes just 2.7 seconds to process more than one million points, while PointNet takes more than 4.1 seconds and achieves around 9% worse mIoU compared with our method.
Robaka 2: Self-Driving Hoverboard with ROS
My first mobile robot, Robaka v1, was a nice experience, but the platform was too weak to carry the Jetson Nano. The next milestone was building a robot ready to carry a real payload and drive outdoors. I stumbled upon Niklas Fauth's repo, which summarizes the reverse-engineering efforts on hoverboards and shares open-source firmware and instructions on reprogramming the controller. Another project, Bipropellant, extends his firmware, enabling hoverboard control via a serial protocol. I built the platform around this and added a ROS-enabled controller for the motors.
Project of the Month December 2019
Gazebo reduces the inconvenience of having to test a robot in a real environment by allowing control in a simulated one. Deep learning makes robots play games more like a human. My goal with this project is to combine these two benefits so that the robot can play soccer without human support. Two JetBots are placed on the field: one tries to score a goal, and the other tries to defend it. In multi-agent cases such as this, self-play reinforcement learning tools can be used.
Deepstream SDK + Azure IoT Edge on Jetson Nano
Perform realtime video analytics with the DeepStream SDK on a Jetson Nano connected to Azure via Azure IoT Edge. DeepStream is a highly-optimized video processing pipeline capable of running deep neural networks. It's a must-have tool for complex video analytics requirements, whether realtime or with cascading AI models. IoT Edge gives you the possibility to run this pipeline next to your cameras, where the video data is generated, lowering your bandwidth costs and enabling scenarios with poor internet connectivity or privacy concerns. Transform cameras into sensors to know when there is an available parking spot, a missing product on a retail store shelf, an anomaly on a solar panel, a worker approaching a hazardous zone, etc.
Real-time Pupil Detection with DeepLabCut
Realtime pupil and eyelid detection with DeepLabCut running on a Jetson Nano. In neuroscience research, this provides a realtime readout of animal and human cognitive states, as pupil size is an excellent indicator of attention, arousal, locomotion, and decision-making processes. As one example application, you could use this setup to trigger a reward when the experimentee is alert.
Multimedia Sharing Tool with Jetson Nano
Share video, screen, camera and audio with an RTSP stream through LAN or WAN, supporting CUDA computations in a high-performance embedded environment (NVIDIA Jetson Nano) and applying real-time AI techniques such as intrusion detection with bounding boxes, localization and frame manipulation.
BatBot: An Experimental AI-Vision Robot
An AI research robot created from commodity parts. The lower half is an Elegoo Robot Car v3.0; the upper half is a Jetson Nano. An Android app controls it with spoken English translated and sent over Bluetooth. The robot has a camera, an ultrasonic distance sensor, and a 40-pin GPIO available for expansion. High-level spoken commands like 'WHAT ARE YOU LOOKING AT?' instruct the robot to photograph and identify objects. The command 'GO FIND SOME-OBJECT' instructs the robot to locate, identify and photograph an object. Low-level spoken commands like 'WHAT IS YOUR IP-ADDRESS?' or 'LOOK TO THE LEFT' obtain information and/or control the robot directly. Teach BatBot to identify new objects by using voice commands.
FastDepth: Fast Monocular Depth Estimation on Embedded Systems
There has been a significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and size of monocular cameras. We explore learning-based monocular depth estimation, targeting real-time inference on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. We deploy our proposed network, FastDepth, on the Jetson TX2 platform, where it runs at 178 fps on the GPU and at 27 fps on the CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset.
SINTEF Self-Driving Truck with Induction Charger
This small-scale self-driving truck using a Jetson TX2 and ROS Kinetic was built to demonstrate the principle of a wireless inductive charging system developed by the Norwegian research institute SINTEF for road use. It navigates using one of two modes: SLAM/Pure Pursuit path tracking, or supervised deep learning based on NVIDIA DAVE-2.
Autonomous drone using ORBSLAM2 on the Jetson Nano
Run ORBSLAM2 and implement closed-loop position control in real time on a Jetson Nano using recorded rosbags (e.g., EuRoC) or live footage from a Bebop2 drone. Tested with a real-time monocular camera using ORBSLAM2 and the Bebop2. In the Autonomous Drones Lab at Tel Aviv University, we research, develop and implement solutions for autonomous navigation in GPS-denied environments. To validate our solutions, we work mainly on prototype drones to achieve quick integration between hardware, software and algorithms.
GPU-enabled Kubernetes Cluster for Machine Learning with Jetson Nano
The Jetson Nano has a fully-featured GPU compatible with NVIDIA CUDA libraries. CUDA is the de facto standard for modern machine learning computation. Having such a cheap, CUDA-equipped device, we thought: let's build a machine learning cluster. If you think "cluster", you typically think "Kubernetes", which is commonly used to manage distributed applications running on up to hundreds of thousands of machines. Ours is composed of four, though the setup is applicable to any number of Jetson Nanos.
Project of the Month November 2019
Temporal Shift Module for Efficient Video Understanding
TSM is an efficient and lightweight operator for video recognition on edge devices. Conventional methods using 3D convolution for temporal modeling are computationally expensive, making them difficult to deploy on embedded devices with tight power constraints. In this ICCV’19 paper, we propose the Temporal Shift Module (TSM), which achieves the performance of a 3D CNN but maintains a 2D CNN's complexity by shifting channels along the temporal dimension. TSM enables real-time, low-latency online video recognition and video object detection. On NVIDIA Jetson Nano, it achieves a low latency of 13 ms (76 fps) for online video recognition.
This is an implementation of the Rock-Paper-Scissors game played against a machine. The Jetson Nano Developer Kit is used for AI recognition of hand gestures.
With Jetson-FFMpeg, use FFmpeg on Jetson Nano via the L4T Multimedia API, supporting hardware-accelerated encoding of H.264 and HEVC. FFmpeg is a highly portable multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much any format, from the most obscure ancient formats up to the cutting edge.
Jetson-Stats is a package for monitoring and controlling your NVIDIA Jetson [Nano, Xavier, TX2i, TX2, TX1] embedded board. When you install jetson-stats, the following are included:
This software was written for monitoring the security of my home using one or more Pi cameras. The cameras perform motion detection and record video, which is sent in an email. After recording, an object detection model running on a Jetson Nano checks if a person is present in the video. A set of 4 Raspberry Pi Zeros stream video over Wi-Fi to a Jetson TX2, which combines the inputs from all sources, performs object detection and displays the results on a monitor.
Tiny YOLO v2 Inference with NVIDIA TensorRT
This application downloads a tiny YOLO v2 model from Open Neural Network eXchange (ONNX) Model Zoo, converts it to an NVIDIA TensorRT plan and then starts the object detection for camera captured image.
Quantify the world: monitor urban landscapes with this offline, lightweight, DIY solution. The simple setup lets you become an urban data miner. Install it on an NVIDIA Jetson board with a Logitech webcam and count cars, pedestrians, and motorbikes in your livestream, running YOLO and tracking software we built. Access it from smart devices, define areas to track, count, and export the data once you're finished. You can use this system for surveying without saving any video data, so the privacy of the counted objects is not intruded upon. Where the data goes and what happens during the counting algorithm is transparent.
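The "define areas to track and count" step can be illustrated with a minimal virtual-line counter (a sketch of the idea, not the project's code): given per-frame centroids from the tracker, an object is counted once when its path crosses a chosen horizontal line.

```python
def count_line_crossings(tracks, line_y):
    """Count tracked objects whose centroid crosses the line y = line_y.

    `tracks` maps a track id to a list of (x, y) centroids, one per frame.
    Each track is counted at most once, in either direction; a centroid
    landing exactly on the line is not treated as a crossing.
    """
    counted = set()
    for track_id, centroids in tracks.items():
        for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
            # A crossing happens when consecutive centroids straddle the line.
            if (y0 - line_y) * (y1 - line_y) < 0:
                counted.add(track_id)
                break
    return len(counted)
```

Because only track ids and counts are kept, no video frames ever need to be stored, which is exactly the privacy property the project advertises.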
JetsonSky: Electronically Assisted Astronomy
With Electronically Assisted Astronomy, the camera replaces your eye. With a telescope, simply observe the deep sky on a screen or even record videos of your observations, using AI to enhance your images. I wanted to make a fully autonomous system I could control from my computer at home using a VNC client, instead of being outside during very cold nights.
Build a scalable attention-based speech recognition platform in Keras/TensorFlow for inference on the NVIDIA Jetson platform for AI at the edge. This real-world application of automatic speech recognition was inspired by my previous career in mental health. This project begins a journey towards building a platform for real-time therapeutic intervention inference and feedback. The ultimate intent was to build a tool to give therapists real-time feedback on the efficacy of their interventions, but on-device speech recognition has many applications in mobile, robotics, or other areas where cloud-based deep learning is not desirable.
Transfer Learning with JetBot & Traffic Cones
When driving around construction areas, I think about how challenging it would be for self-driving cars to navigate traffic cones. It turns out it's not so difficult with NVIDIA's JetBot: with only a couple hundred images, you can train a state-of-the-art deep learning model to teach your robot to navigate a maze of toy traffic cones using only an onboard camera and no other sensors.
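Once the classifier is trained, the remaining logic is small. The sketch below shows one plausible mapping from a classifier label to differential-drive motor commands; the labels and gains are hypothetical, not taken from the JetBot codebase.

```python
def steering_from_class(label, speed=0.3, turn_gain=0.2):
    """Map a classifier label to (left_motor, right_motor) values in [-1, 1].

    Hypothetical labels: 'free' drives straight, 'left'/'right' steer
    around a cone, and 'blocked' spins in place to search for a clear path.
    """
    if label == "free":
        return (speed, speed)
    if label == "left":
        return (speed - turn_gain, speed + turn_gain)
    if label == "right":
        return (speed + turn_gain, speed - turn_gain)
    if label == "blocked":
        return (turn_gain, -turn_gain)
    raise ValueError(f"unknown label: {label}")
```

In a real loop this function would be called once per camera frame with the highest-probability class from the network.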
Multi-agent System for non-Holonomic Racing (MuSHR)
The University of Washington's Personal Robotics Lab has recently open-sourced the MuSHR Racecar Project: a robotic racecar equipped with lidar, a D435i RealSense camera, and an NVIDIA Jetson Nano. The car can be used for machine learning, vision, autonomous driving, and robotics education. Build instructions and tutorials can all be found on the MuSHR website!
Project of the Month October 2019
My AI is so bright, I gotta wear shades. Effect change in your surroundings by wearing these AI-enabled glasses. ShAIdes is a transparent UI for the real world. A camera is attached to the frames of a pair of glasses, capturing what the wearer sees. It feeds real-time images to an NVIDIA Jetson Nano, which runs two separate image classification CNN models: one to detect objects, and another to detect gestures made by the wearer. When a combination of a known object and a known gesture is detected, an action fires that manipulates the wearer's environment.
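The combination logic can be sketched as a small trigger (object, gesture, and action names here are hypothetical, not the project's own): an action fires only when the same object/gesture pair holds for a few consecutive frames, which filters out single-frame misclassifications.

```python
class GestureTrigger:
    """Fire an action only after the same (object, gesture) pair has been
    seen for `hold` consecutive frames. The rules table maps recognized
    pairs to hypothetical smart-home actions."""

    RULES = {
        ("lamp", "wave"): "toggle_lamp",
        ("speaker", "wave"): "toggle_music",
    }

    def __init__(self, hold=3):
        self.hold = hold
        self.last = None
        self.streak = 0

    def update(self, obj, gesture):
        """Feed one frame's classifications; return an action or None."""
        pair = (obj, gesture)
        self.streak = self.streak + 1 if pair == self.last else 1
        self.last = pair
        if self.streak == self.hold:        # fire exactly once per streak
            return self.RULES.get(pair)
        return None
```

Requiring a short streak is a common debouncing trick for per-frame classifiers; it trades a few frames of latency for far fewer false triggers.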
OCR Tesseract Docker App on BalenaCloud
Upload images using Flask, a lightweight web framework, then preprocess and reduce image noise using OpenCV, and perform OCR using Python-tesseract. Originally deployed in a Docker container on AWS, this version is deployed with BalenaCloud to a Jetson Nano.
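The noise-reduction step can be made concrete with a pure-Python 3x3 median filter, a stand-in here for OpenCV's `cv2.medianBlur` (which the project would use in practice):

```python
import statistics

def median_filter3(img):
    """Apply a 3x3 median filter to a grayscale image (a list of rows).

    Interior pixels are replaced by the median of their 3x3 neighbourhood,
    removing salt-and-pepper noise before the image reaches Tesseract;
    border pixels are left unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out
```

Median filtering is preferred over simple blurring for OCR preprocessing because it removes isolated specks without softening character edges.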
P.A.N.T.H.E.R.: Powerful Autonomous eNTity High-End Robot
Using its two tracks, ZED stereo camera and the NVIDIA Jetson TX2, this robot explores the outdoors and interacts with its surroundings. Weighing 9 kg (20 lb), with 7 cm (2.7 in) of ground clearance and a track system composed of three different dampers to absorb vibrations when drifting on grass, P.A.N.T.H.E.R. can climb over small rocks and bumps. It is built from plexiglass, aluminium, plastic, and other materials, is integrated with ROS, and all of its code is available on GitHub.
OpenALPR License Plate Recognition
The parking garage of my apartment upgraded to a license plate recognition system. […] I expected it to fail and hinder me from entering or exiting […]. I was wrong: it has worked with 100% success, even without a license plate on my front bumper or good car hygiene. Being a flatfooder, […] I built my own license plate detector using OpenALPR and a Jetson Nano.
Project of the Month September 2019
Recognizing Sign Language with Jetson Nano
The Jetson Nano caches this model into memory and uses its 128-core GPU to recognize live images at up to 60 FPS. That high-FPS live recognition is what sets the Nano apart from other IoT devices. I have been hearing recommendations toward "Train in the cloud, deploy at the edge" and this seemed like a good reason to test that concept. Mission accomplished.
The IntelligentEdgeHOL walks through the process of deploying an IoT Edge module to an NVIDIA Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, or an attached web cam.
Detecting Minifigures with Jetson Nano
For this project I had to build a rotating platform, and I decided to use an interlocking block set for it. My idea was to place the set's minifigures on top of the platform, fix the Raspberry Pi camera in front of it, and rotate the platform at different speeds to test how well the Jetson Nano recognizes them.
Fruit Classification with Jetson Nano
Classification of fruits on the NVIDIA Jetson Nano using TensorFlow. Tested on the Jetson Nano, but it should work on other platforms as well. [...] For classifying anything we need a proper dataset. [...] I made my own dataset: a small one with 6 classes and a total of 600 images (100 per class). I used the camera-capture utility in the Hello AI World example to capture the images.
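With a small hand-collected dataset like this, a reproducible per-class train/validation split is the usual next step before training. This is a generic sketch (file names are invented; the project's actual split procedure is not described):

```python
import random

def split_dataset(files_by_class, val_fraction=0.2, seed=42):
    """Split per-class image lists into train/val dictionaries.

    Splitting within each class keeps a balanced dataset (here, 6 classes
    of 100 images) balanced on both sides, and the fixed seed makes the
    split reproducible across runs.
    """
    rng = random.Random(seed)
    train, val = {}, {}
    for cls, files in files_by_class.items():
        shuffled = files[:]
        rng.shuffle(shuffled)
        n_val = max(1, int(len(shuffled) * val_fraction))
        val[cls] = shuffled[:n_val]
        train[cls] = shuffled[n_val:]
    return train, val
```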
Donkey Car 3.0 with Jetson Nano
Having read some amazing books on machine learning, I had been looking for opportunities to apply ML from first principles in the real world. That was what got me curious about the wonderful Donkey® Car project. The project is essentially a how-to guide to building your own RC car which can drive itself around a track using classical control theory, computer vision, or, in my case, machine learning. I wanted to experiment with more sophisticated models. As I was constrained by the CPU on the Asus Tinkerboard S, I decided to level up with the NVIDIA Jetson Nano.
Run real-time, multi-person pose estimation on a Jetson Nano, using a Raspberry Pi camera to detect human skeletons, just like the Kinect does. With this setup, you can obtain about 7–8 FPS.
Jetson Nano Detection and Tracking
This repository is my set of install tools to get the [Jetson] Nano up and running with a convincing and scalable demo for robot-centric uses: in particular, detection and semantic segmentation models capable of running in real time on a robot for $100. By convincing, I mean not just compiling NVIDIA's two-day startup model and having it magically work without any control. This setup gives you full control over which model to run and when.
Fast Object Detector for the Jetson Nano
MobileDetectNet is an object detector that uses a MobileNet feature extractor to predict bounding boxes. It was designed to be computationally efficient for deployment on embedded systems and easy to train with limited data. It was inspired by the simple yet effective design of DetectNet and enhanced with the anchor system from Faster R-CNN.
OpenCV with CUDA for Jetson Nano
A small script to build OpenCV 4.1.0 on a barebones system. The script installs build dependencies, clones a requested version of OpenCV, builds it from source, tests it, and installs it.
Jetson Nano Insulator Detection: TensorFlow & TensorRT
Insulator detection with a custom-trained ssd_mobilenet_v1 network. Testing with a TensorFlow frozen graph gives about 0.07 sec per image (~15 FPS). I received a better result (about 20 FPS) with the TensorRT library.
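These throughput figures are just the reciprocal of the per-image latency:

```python
def fps_from_latency(seconds_per_image):
    """Convert per-image inference latency to throughput in frames per second."""
    return 1.0 / seconds_per_image

# 0.07 s per image (frozen graph) works out to about 14.3 FPS,
# while 20 FPS corresponds to 0.05 s per image.
```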
Interface Touch Sensor, Accelerometer, IV Sensor, OLED
Grove is an open-source, modular, ready-to-use toolset. It takes a building-block approach to assembling electronics, […] simplifying the learning process. If you want to use Grove sensors with the Jetson Nano, the best way is to grab the grove.py Python library and get your sensors up and running in minutes! Currently there are more than 20 Grove modules supported on Jetson Nano […].
Project of the Month August 2019
Smart Doorbell Camera
We’ll create a simple version of a doorbell camera that tracks everyone who walks up to the front door of your house. With face recognition, it will instantly know whether the person at your door has ever visited you before, even if they were dressed differently. And if they have visited, it can tell you exactly when and how often.
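The "have they visited before?" check reduces to nearest-neighbour matching on face embeddings. This is a sketch using plain Euclidean distance; the tutorial-style setup this project describes typically uses the face_recognition library's 128-dimensional encodings, whose default match tolerance of 0.6 is borrowed here.

```python
import math

def match_face(encoding, known_encodings, tolerance=0.6):
    """Return the index of the closest known face encoding, or None if no
    known face lies within `tolerance` Euclidean distance."""
    best_idx, best_dist = None, tolerance
    for i, known in enumerate(known_encodings):
        dist = math.dist(encoding, known)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```

On a match, the doorbell logs a timestamp against that visitor's index; on a miss, the new encoding is appended as a first-time visitor.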
Open Source Autocar (1/10th scale) with Jetson Nano
With this open-source autocar powered by Jetson Nano, you can seamlessly toggle between your remote-controlled manual input and your AI-powered autopilot mode!
Donkey Car with Jetson Nano
An open-source hardware and software platform for building a small-scale self-driving car. Donkeycar is a minimalist and modular self-driving library for Python, developed for hobbyists and students with a focus on fast experimentation and easy community contributions.