Most smart city applications today rely on analyzing large amounts of video data from cameras.
The ability to identify and reason over the most relevant events within a video is essential to build efficient and scalable applications.
In the recently concluded AI at the Edge Challenge, team SmellslikeML proposed an NVIDIA Jetson Nano-based application and won the second prize in the Intelligent Video Analytics and Smart Cities category.
At the heart of the application is an auto-encoder model trained and running on a Jetson Nano using TensorFlow and Keras. The model learns the context of the scene from each incoming video frame and develops the ability to flag anomalous events. The team proposes that these anomalous events can then be processed with DeepStream SDK for additional inference, for example to identify and track objects in the scene.
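The core idea is that a model trained to reconstruct normal frames will reconstruct an unusual frame poorly, so a high reconstruction error flags an anomaly. The team's model is a Keras auto-encoder; the sketch below illustrates the same reconstruction-error logic with a simple linear "auto-encoder" (truncated PCA) in NumPy, using synthetic frames. All names and the threshold rule here are illustrative assumptions, not the team's code.

```python
import numpy as np

def fit_linear_autoencoder(frames, n_components=8):
    """Fit a linear 'auto-encoder' (truncated PCA) on flattened frames.

    A stand-in for a trained Keras auto-encoder: it learns a
    low-dimensional code for normal frames, so unusual frames
    reconstruct poorly.
    """
    X = frames.reshape(len(frames), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # Top principal components act as the encoder/decoder weights.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(frame, mean, components):
    x = frame.ravel().astype(np.float64) - mean
    code = components @ x        # encode
    recon = components.T @ code  # decode
    return np.mean((x - recon) ** 2)

# Learn the scene from "normal" frames (synthetic low-variance background).
rng = np.random.default_rng(0)
background = rng.normal(0.5, 0.01, size=(100, 16, 16))
mean, comps = fit_linear_autoencoder(background)

# Errors on normal frames set the anomaly threshold (assumed rule).
errors = [reconstruction_error(f, mean, comps) for f in background]
threshold = np.mean(errors) + 5 * np.std(errors)

# A frame with a bright "object" in it reconstructs poorly and is flagged.
anomaly = background[0].copy()
anomaly[4:12, 4:12] = 1.0
print(reconstruction_error(anomaly, mean, comps) > threshold)  # True
```

In the team's application the same check runs per frame on the Jetson Nano, with the encoder/decoder learned by the Keras model instead of PCA.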
In a scene with continuous activity, such as a busy road, tracking the structure of the image and flagging major changes as anomalies works better than a simple motion detection algorithm, which would trigger constantly. In the video below, the application correctly flags anomalous events and, in the process, reduces the image feed by 100X.
The team suggests that this model can be used within a video analytics pipeline to build smart city applications that make optimal use of network and cloud resources.
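Within such a pipeline, the anomaly flag acts as a gate: only flagged frames are forwarded downstream (e.g. to DeepStream or the cloud), which is where the bandwidth saving comes from. A minimal sketch of that gating, with a hypothetical feed and predicate standing in for the real camera stream and auto-encoder check:

```python
def gate_frames(frames, is_anomalous):
    """Forward only anomalous frames downstream, dropping routine
    frames to save network and cloud resources.

    `is_anomalous` is any per-frame predicate, such as a
    reconstruction-error check from an auto-encoder.
    """
    for i, frame in enumerate(frames):
        if is_anomalous(frame):
            yield i, frame

# Hypothetical feed: 1000 frames, of which every 100th is unusual.
frames = [0.0] * 1000
for i in range(0, 1000, 100):
    frames[i] = 1.0

kept = list(gate_frames(frames, lambda f: f > 0.5))
print(len(kept))  # 10 of 1000 frames forwarded, a 100X reduction
```

The downstream consumer then runs heavier inference only on the forwarded frames, so compute and bandwidth scale with the number of events rather than the raw frame rate.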
For sample code and more details, visit the project page: Saving Bandwidth with Anomaly Detection.