NVIDIA AI Red Team: Machine Learning Security Training

Figure: The ML security training classroom at Black Hat USA.

At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the unique risks presented by machine learning (ML) in today’s environments. 

In this post, the NVIDIA AI Red Team shares what was covered during the training and other opportunities to continue learning about ML security. 

Black Hat USA training

This has been a banner year for AI. Many security teams are being asked to evaluate and secure AI-enabled products without the skills and knowledge needed to assess their potential vulnerabilities.

By providing this training at one of the world’s leading security conferences, we were able to share the NVIDIA AI Red Team’s experience and knowledge across many different industry verticals, helping ensure that these organizations can begin using and developing AI solutions securely. We also come from that community, so it was a comfortable space to teach in.

The two-day training consisted of over 20 Jupyter notebooks and 200 slides organized into the following modules: 

  • Introduction
  • Evasion
  • Extraction
  • Assessments
  • Inversion 
  • Membership Inference
  • Poisoning
  • All of the above, applied to large language models

It was a lot. Attendees took slides and notebooks home to continue iterating on the coursework at their own pace.

The course aimed to give students from all backgrounds a solid foundation at the intersection of machine learning and security, taking them from the basics of NumPy mechanics all the way to algorithmic attacks against large language models. Each module presented some theory and then explored applied scenarios.
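To give a flavor of those applied scenarios, here is a minimal sketch of the kind of exercise an evasion module builds toward: a fast gradient sign method (FGSM) perturbation computed with plain NumPy against a toy logistic-regression model. This example is ours, not course material; the model, weights, and fgsm helper are invented for illustration.

```python
import numpy as np

# Toy logistic-regression "victim": p(y=1 | x) = sigmoid(w . x + b).
# The weights are random stand-ins; a real assessment targets a trained model.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # FGSM: step each feature in the sign of the loss gradient.
    # For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=16)   # a clean input
y = 1.0                   # its true label
x_adv = fgsm(x, y, eps=0.25)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward 0
```

The same gradient-sign idea scales up from toy models to image classifiers and, with an appropriate loss, to attacks on LLM pipelines, which is roughly the arc described above.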

Students were given a basic methodology based on the NVIDIA AI Red Team’s assessment framework, along with an environment and code that they could take back to their organizations and iterate on. There’s a lot of cool work still to be done.

Attendee questions and concerns

Outside of the usual array-reshaping questions, there were many questions about the impact and likelihood of attacks against machine learning systems. Machine learning has been part of defensive products for a number of years at this point, and “ML bypasses” happen every day.

Our goal was to help attendees understand the threat models, techniques, and attack vectors so that they could design and calibrate their security controls appropriately. Security isn’t one-size-fits-all, but having a working knowledge of the systems inside an organization is a prerequisite for building meaningful defenses. 

Machine learning security has an established history in academia. This course gave students experience applying those techniques to familiar security scenarios.

Key lessons from the training

People are super smart and creative. It’s easy to look at security and point out flaws (and we find plenty of them in ML), but security has matured significantly over the last decade. We think the security industry will rise to the challenges presented by ML as well.

The training was also a good exercise in industry baselining. With a mix of professionals from many industries in the room, it was interesting to see where each stands in adopting machine learning and securing those systems. It would be fair to say that the majority of people are just beginning their journey.

In general, we came away happy that people who are interested in this field have a base camp from which to operate. It’s a fun space to be in at the moment and we enjoyed sharing our material with peers.

Opportunities to learn more about ML security

The next iteration of the NVIDIA Machine Learning Security course will be at Black Hat EU on December 4 and 5. 

We are also exploring alternative delivery formats. If you have a request, contact the AI Red Team. Students who took the course will receive updates as we make them available!
