Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and data science domains.
Although the state of the art continues to advance, there is no clear onboarding or learning path for securing and testing machine learning systems. How, then, should interested practitioners begin developing machine learning security skills? Reading related articles on arXiv is one option, but what about practical steps?
Competitions offer one promising opportunity. NVIDIA recently helped run an innovative ML security competition at the DEF CON 30 hacking and security conference. Hosted by AI Village, the competition drew more than 3,000 participants. It aimed to introduce attendees to each other and to the field of ML security. The competition proved to be a valuable opportunity for participants to develop and improve their machine learning security skills.
NVIDIA AI Red Team and AI Village
The NVIDIA AI Red Team has been expanding in order to proactively test and assess the security of NVIDIA machine learning offerings. Although the team consists of experienced security and data professionals, its members recognized a need to develop ML security talent across the industry. With more exposure and education, data and security practitioners are better positioned to improve the security of the machine learning systems they deploy.
AI Village is a community of data scientists and hackers working to educate on the topic of artificial intelligence (AI) in security and privacy. The community holds events at DEF CON each year.
The NVIDIA AI Red Team and AI Village joined together at DEF CON 30 to engage the information security community with a machine learning security competition. The topic was potentially new to many attendees. Members of the AI Village created challenges designed to teach and test elements of ML security knowledge. In addition to NVIDIA, these members represented AWS Security, Orang Labs, and NetSec Explained.
AI Village Capture the Flag Competition
Capture the Flag (CTF) competitions include multiple challenges. Competitors play through the challenges and collect flags for each one they complete successfully. Flags are assigned point values based on difficulty, and competitors win by collecting the most points.
With this familiar format in mind, the AI Village and NVIDIA AI Red Team built The AI Village CTF @ DEFCON. Organizers partnered with Kaggle to use a platform familiar to the machine learning community. Similar to information security CTFs, Kaggle competitions provide a format for ML researchers to compete on discrete problems.
Partnering with Kaggle provided the competition with a flexible and scalable platform that paired compute and data hosting with documentation and scoring. Although the challenge servers are no longer active, you can view the challenge descriptions.
Competitors reported onboarding and moving through the challenges with ease, with minimal additional infrastructure required from the AI Village. Furthermore, Kaggle has a large audience of skilled data scientists and machine learning engineers who were excited to explore the security domain. Kaggle also generously offered ongoing support and $25,000 in prizes. We could not have asked for a better partner for this event.
During the month-long competition, more than 3,000 competitors hacked their way through 22 challenges, far exceeding expectations. Participants came from over 70 countries and ranged from first-time Kagglers to Grandmasters. The event succeeded in bringing the traditional information security and machine learning communities together to tackle a range of challenges from this new domain of ML security.
Competitors used publicly available tools and applied techniques innovatively, including open source research, masking, and dimensionality reduction (sketched below). In the process, they often reimplemented attacks from the academic literature.
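As one illustration of the last of these, dimensionality reduction can shrink the space an attacker has to search. The following is a minimal sketch under assumed conditions, not taken from any competitor's solution; the synthetic data and the 10-component choice are illustrative.

```python
# Minimal sketch of dimensionality reduction as an attack aid (assumed
# setup; the synthetic data below stands in for real challenge inputs).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
candidates = rng.random((500, 28 * 28))  # 500 candidate 28x28 query images

# Fit PCA so that instead of searching over 784 raw pixel values, an
# attacker can search over a 10-dimensional latent vector and project
# back to pixel space for each query.
pca = PCA(n_components=10).fit(candidates)
latent = pca.transform(candidates)
queries = pca.inverse_transform(latent)

print(latent.shape)   # (500, 10)  -- the reduced search space
print(queries.shape)  # (500, 784) -- reconstructed images to submit
```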
Because one challenge remained unsolved, there was always a chance for someone to rise to the top of the leaderboard. For the final two weeks of the competition, the Kaggle Discussion Board and AI Village Discord were abuzz with theories and explorations of the remaining unsolved challenge. The organizers were checking hourly for a buzzer-beating leaderboard shift. Check out the challenge solutions.
Inference Challenge
In the Inference Challenge, participants had to execute a membership inference attack to identify training samples, given only API access to an image classifier. Successful competitors recovered images that revealed the characters of the flag.
Some competitors chose to randomly generate images by permuting pixel values, effectively brute-forcing the problem. Others assumed that the training data may have included a standard dataset and leveraged EMNIST as their open source starting point. Still others used the Adversarial Robustness Toolbox (ART), producing output similar to what is shown in Figure 2; the sketch below shows the general shape of that approach.
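Here is a minimal, hedged sketch of model inversion with ART's MIFace. The small PyTorch network is a local stand-in for the challenge's remote classifier, and the 36-class setup and iteration budget are assumptions, not the challenge's actual parameters.

```python
# Hedged sketch: model inversion with ART's MIFace against a stand-in
# classifier (the real challenge exposed only a remote API).
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.inference.model_inversion import MIFace

# Hypothetical 28x28 grayscale character classifier with 36 classes
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 36),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=36,
    clip_values=(0.0, 1.0),
)

# Gradient-ascend an input that maximizes each class's confidence; in the
# challenge, recovered class prototypes revealed the flag's characters.
attack = MIFace(classifier, max_iter=2500, learning_rate=0.1)
targets = np.arange(36)           # invert every class
recovered = attack.infer(None, y=targets)
print(recovered.shape)            # (36, 1, 28, 28)
```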
Whichever method they used, successful challengers were rewarded with the flag, spelling D3FC0N. This leetspeak encoding of the DEF CON conference name was used in several places on the conference website.
Crop2 Challenge
Research and out-of-the-box thinking often help to solve CTF challenges. For instance, the one unsolved challenge in the competition was Crop2. In Crop2, participants were given a poisoned cropping model and had to create the poisoned sample (within some error bounds). They had one training data example to work with (Figure 3).
This is a difficult problem with no efficient, standard algorithmic solution. Considering all of the pixels in an image and all of the possible pixel values across three color channels, the search space explodes to more than 800 billion options. Instead, competitors could combine reverse engineering, open source research, and informed assumptions to reduce the number of combinations.
After the competition ended, organizers gave hints to help competitors solve the Crop2 Challenge. One key hint involved using open source research to determine that the pixel colors were likely generated by matplotlib default colormaps, which reduces the search space to the hundreds of thousands (see the sketch below).
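To see why that hint matters, here is a back-of-the-envelope sketch. Reading "default colormaps" as matplotlib's default color cycle is one plausible interpretation, and the four-pixel pattern is an illustrative assumption, not the challenge's actual geometry.

```python
# Hedged sketch: how a matplotlib-defaults hint collapses the search space.
import matplotlib.pyplot as plt

# matplotlib's default color cycle (tab10): ten fixed colors
default_colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
print(len(default_colors), default_colors[:3])

raw_per_pixel = 256 ** 3                 # ~16.7M possible RGB values
hinted_per_pixel = len(default_colors)   # 10 candidate colors

# Illustrative only: a hypothetical 4-pixel poison pattern
naive_space = raw_per_pixel ** 4         # astronomically large
hinted_space = hinted_per_pixel ** 4     # 10**4 = 10,000 combinations
print(f"{naive_space:.2e} vs {hinted_space:,}")
```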
By making these informed assumptions, one competitor was eventually able to reach the Crop2 Challenge solution. Tenacity is a hallmark of great hackers: even after the competition ended, this competitor diligently worked through the provided hints. The competitor reported that a hint “helped me realize that we only needed to use nine colors. Mate, I’d been fiddling around with 16 million. This made the search space manageable.”
Competitor notebooks
Check out some of our favorite notebooks from competitors:
- Chris Deotte – From a member of the Kaggle Grandmasters of NVIDIA (KGMoN), these solutions are very well organized and documented. We recommend Secret Sloth in particular.
- Eric Bouteillon – Watch the flag appear character-by-character in Excuse Me. Also notice the different solve techniques for the MATH challenges. Have you heard of silhouette score?
- John MacGillivray – John deduced that the Hotterdog model was based on MobileNet, enabling an offline attack. Great tradecraft.
- Fournierp – A comprehensive writeup about model inversion for the Inference Challenge, written from scratch. You can also check out MIFace in the Adversarial Robustness Toolbox.
- Eoin O – Learn how you could have solved the Crop2 Challenge. More than 3,000 competitors tried to solve it for the greater part of a month. The day after the competition ended, organizers released several hints. Within a few hours, it was solved. It was great to see all of the competitors collaborating in Discord and the Kaggle Discussion Board after the competition ended.
Summary
The AI Village CTF @ DEF CON 30 competition showed that there is a significant appetite in both the security and data professions to improve machine learning security skills. As ML systems are deployed in increasingly security-critical contexts, it will become imperative to train professionals and to develop tools and methods for secure development, deployment, and testing.
NVIDIA will continue driving innovation with a robust and secure ecosystem for AI, from embedded devices and laptops to supercomputers and the cloud. As part of this effort, our AI Red Team will empower ML security research and testing internally and help establish security practices across the industry. In the future, we will host competitions and workshops, and release research and security tools. If you’re interested in participating, contact us at threatops@nvidia.com.