Deploying Edge AI in NVIDIA Headquarters

Since its inception, artificial intelligence (AI) has transformed every aspect of the global economy by solving problems of all sizes across every industry. NVIDIA has spent the last decade empowering companies to take on the world's toughest problems, such as improving sustainability, stopping poachers, and improving cancer detection and care.

What many don’t know is that behind the scenes, NVIDIA has also been leveraging AI to solve day-to-day issues, such as improving efficiency and user experiences. 

Improving efficiency and user experiences 

NVIDIA recently adopted AI to streamline entry into the company's headquarters while maintaining security. The IT department saw an opportunity to improve on traditional badge-based access control at the entrance turnstiles.

Using AI, NVIDIA designed a new experience where employees could sign up for contactless and hands-free entry to headquarters. Especially during the COVID-19 pandemic, the contactless access program has proven to be convenient, quick, secure, and safe. 

Watch the following video and read on to learn the steps NVIDIA implemented and the challenges that were overcome to deploy this scalable computer vision application.

Video 1. Learn the steps involved in deploying AI at the edge using NVIDIA Fleet Command

In the video, employees walk through the security turnstiles at NVIDIA headquarters. The center turnstile showcases the AI solution that does not require swiping a badge.

Unique considerations of edge environments

The objective of this project was to deliver a contactless access control solution that increased efficiency, security, and convenience for NVIDIA employees and could be scaled across multiple NVIDIA offices around the world. 

The solution had to be one that fit around existing infrastructure in the entranceway and office, conformed to policies put forth by the facilities and the security teams, and could be updated, upgraded, and scaled remotely across NVIDIA offices worldwide. 

Most edge AI solutions are deployed into environments where applications and systems are already in place. Extra care must be taken to ensure that all constraints and requirements of the environment are taken into account.

Below are the steps that NVIDIA took to deploy a vision AI application for the entrance of the NVIDIA Endeavor building. The process took six months to complete. 

  1. Understand the problem and goal: The goal of the project was for the IT team to build a solution that could be remotely managed, updated frequently, scaled to hundreds of sites worldwide, and kept compatible with NVIDIA processes. 
  2. Identify teams involved: The NVIDIA facilities, engineering, and IT teams were involved with the project, along with a third-party software vendor who provided the application storage and training data.  
  3. Set constraints and requirements: Each team set their respective constraints and requirements for the application. For example, the facilities team determined the accuracy, security, and redundancy requirements. Engineering determined the latency and performance requirements and dictated which parts of the solution could be solved with engineering and which parts IT needed to solve. 
  4. Set up a test environment to validate proof of concept (POC): The process of an employee entering headquarters through turnstiles was simulated in a lab setting. During this process, the requirements set forth by engineering were met, such as model accuracy and latency. 
  5. Run the pilot: The pilot program consisted of enabling nine physical turnstiles to run in real time and onboarding 200 employees into the program to test it. 
  6. Put the AI model into production: 3,000 employees were onboarded into the program and the application was monitored for three months. After operating successfully for three months, the application was ready to be scaled to other buildings at NVIDIA headquarters and eventually, to offices worldwide. 

Challenges implementing AI in edge environments 

The biggest challenge was to create a solution that fit within the existing constraints and requirements of the environment. Every step of the process also came with unique implementation challenges.

Deliver the solution within the requirements 

Latency: The engineering team set an application latency budget of 700 ms for detection and entrance. At every step of the POC and pilot, this benchmark was tested and validated. Because of the volume of requests sent to the server to identify each person, there were infrastructure load issues. To mitigate this, the server was placed within 30-40 feet of the turnstiles: when the physical distance that data has to travel decreases, latency also decreases. 
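A latency budget like this is typically enforced by timing every detection round trip and flagging any that exceed the threshold. The sketch below illustrates the idea; `detect_and_authorize` is a hypothetical stand-in for the real inference call, and the 700 ms figure is the only value taken from the article.

```python
import time

LATENCY_BUDGET_MS = 700  # detection-to-entrance budget set by the engineering team

def detect_and_authorize(frame):
    """Hypothetical stand-in for the real inference request; simulated with a sleep."""
    time.sleep(0.05)  # placeholder for network transfer + model inference time
    return True

def timed_entry(frame):
    """Run one detection and report whether it stayed within the latency budget."""
    start = time.perf_counter()
    authorized = detect_and_authorize(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return authorized, elapsed_ms, elapsed_ms <= LATENCY_BUDGET_MS

authorized, elapsed_ms, within_budget = timed_entry(frame=None)
print(f"authorized={authorized} latency={elapsed_ms:.0f}ms within_budget={within_budget}")
```

In a real deployment, the per-request timings would feed a monitoring dashboard so that load-induced latency spikes show up before users notice them.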

Operations: When the pilot program was scaled to 200 employees and the turnstiles were admitting multiple users into the building at a time, we found memory leaks that caused the application to crash after four hours. It was a simple engineering fix, but the issue was not experienced during the initial pilot and POC phases. 

Keep application operational: Due to memory leaks, the application crashed after four hours of operation. It's important to remember that, during the POC and pilot phases, the application should be run for as long as it will need to run in production. For example, if the application must run for 12 hours, a successful four-hour run during the POC is not a good indicator that it will survive the requisite 12 hours.
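Leaks like this are usually caught with a soak test: run the workload repeatedly while watching memory, and fail fast if usage keeps climbing. A minimal sketch, assuming a hypothetical `work` callable and a growth threshold chosen for illustration:

```python
import resource

def peak_rss_kb():
    # ru_maxrss is reported in KB on Linux (bytes on macOS); treated as KB here
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def soak_test(iterations, work, max_growth_kb=50_000):
    """Run `work` repeatedly; fail fast if peak memory grows past the threshold."""
    baseline = peak_rss_kb()
    for i in range(iterations):
        work()
        growth = peak_rss_kb() - baseline
        if growth > max_growth_kb:
            return False, i, growth  # leak suspected: memory kept climbing
    return True, iterations, peak_rss_kb() - baseline

# Deliberately leaky workload: retains ~1 MB per call to show the test tripping.
retained = []
def leaky_work():
    retained.append(bytearray(1_000_000))

ok, iterations_run, growth_kb = soak_test(200, leaky_work)
print(f"passed={ok} after {iterations_run} iterations, growth={growth_kb} KB")
```

Running the soak test for the full production duty cycle (12 hours rather than 4, in the example above) is what catches slow leaks before rollout.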

Secure the application 

Physical security: Edge locations are unique in that they are often physically accessible by individuals who could tamper with the solution. To avoid this, the edge servers were placed in a nearby telecommunications room with access control. 

Secure software supply chain: The application was developed with security in mind, which is why we used enterprise-grade software like NVIDIA Fleet Command, which can automatically create an audit log of all actions. Using software from a trusted source also ensures that organizations have a line of support when needed. A common mistake among organizations deploying edge applications is downloading software without verifying that it comes from a trusted source, and accidentally installing malware. 
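One basic supply-chain safeguard is to verify a downloaded artifact's checksum against the digest the vendor publishes before installing it. The sketch below shows the pattern; the file contents and digests here are stand-ins, and in practice the published digest would come from the vendor's release page over a trusted channel.

```python
import hashlib
import hmac
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, published_digest):
    """Accept the artifact only if its digest matches the vendor-published value."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())

# Demo with a stand-in file in place of a real container image download.
with tempfile.NamedTemporaryFile(suffix=".tar", delete=False) as f:
    f.write(b"pretend container image")
    artifact = f.name

good_digest = hashlib.sha256(b"pretend container image").hexdigest()
print(verify_artifact(artifact, good_digest))   # True
print(verify_artifact(artifact, "0" * 64))      # False
```

Container registries and signing tools automate this comparison, but the principle is the same: never run what you cannot verify.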

Manage the application

An edge management solution is essential. The NVIDIA IT team needed a tool that allowed for easy updates when bug fixes arose or model accuracy needed to be improved. Given the plans for global scale, an edge management solution that could update hardware and software with minimal disruption was also a priority. 

Addressing all these functions is NVIDIA Fleet Command, a managed platform for AI container orchestration. Fleet Command streamlines the provisioning and deployment of systems at the edge. It also simplifies the management of distributed computing environments by enabling remote system provisioning, over-the-air updates, remote application and system access, monitoring and alerting, and application logging. With it, IT can ensure that these widely distributed environments are operational at all times.

A successful edge AI deployment 

When edge infrastructure is in place, enterprises can easily add more models to the application.

In this case, IT can add more applications to the server and even roll the solution out to NVIDIA offices worldwide, thanks to Fleet Command. It is the glue that holds our stack together, providing users with turnkey AI orchestration and sparing organizations from having to build, maintain, and secure edge AI deployments from the ground up. 

Organizations that are deploying AI in edge environments can get prepared with NVIDIA LaunchPad, a free program that provides short-term access to a large catalog of hands-on labs. To learn more, see Managing Edge AI with the NVIDIA LaunchPad Free Trial.
