Posts by Joseph Lucas
Cybersecurity
Oct 02, 2025
Practical LLM Security Advice from the NVIDIA AI Red Team
Over the last several years, the NVIDIA AI Red Team (AIRT) has evaluated numerous and diverse AI-enabled systems for potential vulnerabilities and security...
8 MIN READ
Cybersecurity
Sep 26, 2025
Why CVEs Belong in Frameworks and Apps, Not AI Models
The Common Vulnerabilities and Exposures (CVE) system is the global standard for cataloging security flaws in software. Maintained by MITRE and backed by CISA,...
7 MIN READ
Cybersecurity
Apr 29, 2025
Structuring Applications to Secure the KV Cache
When interacting with transformer-based models like large language models (LLMs) and vision-language models (VLMs), the structure of the input shapes the...
11 MIN READ
Cybersecurity
Dec 16, 2024
Sandboxing Agentic AI Workflows with WebAssembly
Agentic AI workflows often involve the execution of large language model (LLM)-generated code to perform tasks like creating data visualizations. However, this...
7 MIN READ
Cybersecurity
Jul 11, 2024
Defending AI Model Files from Unauthorized Access with Canaries
As AI models grow in capability and creation cost, and hold more sensitive or proprietary data, securing them at rest is increasingly important...
6 MIN READ
Data Science
Jun 27, 2024
Secure LLM Tokenizers to Maintain Application Integrity
This post is part of the NVIDIA AI Red Team’s continuing vulnerability and technique research. Use the concepts presented to responsibly assess and increase...
6 MIN READ