Trustworthy AI

Aug 03, 2023
Securing LLM Systems Against Prompt Injection
Prompt injection is a new attack technique, specific to large language models (LLMs), that enables attackers to manipulate the model's output. This attack is...
15 MIN READ

Jul 27, 2023
Modeling Earth’s Atmosphere with Spherical Fourier Neural Operators
Machine learning-based weather prediction has emerged as a promising complement to traditional numerical weather prediction (NWP) models. Models such as NVIDIA...
8 MIN READ

Apr 25, 2023
NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems
Large language models (LLMs) are incredibly powerful and capable of answering complex questions, performing feats of creative writing, developing, debugging...
7 MIN READ

Sep 19, 2022
Enhancing AI Transparency and Ethical Considerations with Model Card++
An AI model card is a document that details how machine learning (ML) models work. Model cards provide detailed information about the ML model’s metadata...
10 MIN READ

Jan 13, 2022
Accelerating Trustworthy AI for Credit Risk Management
On April 21, 2021, the European Commission issued a proposal for a regulation to harmonize the rules governing the design and marketing of AI...
13 MIN READ