Developer Resources for the Public Sector
A hub of news, SDKs, technical resources, and more for developers working in the public sector.
Frameworks and SDKs
High Performance Computing (HPC)
The NVIDIA HPC SDK is a comprehensive toolbox for GPU-accelerated HPC modeling and simulation applications.
The NVIDIA RAPIDS™ suite of open-source software libraries, which includes the RAPIDS Accelerator for Apache Spark, makes it possible to execute end-to-end data pipelines for analytics, machine learning, and data visualization entirely on GPUs.
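To make this concrete, the sketch below runs a small analytics step with the pandas API that cuDF (part of RAPIDS) mirrors. On a GPU system, swapping the import for cuDF (e.g., `import cudf as pd`) runs the same pandas-style calls on the GPU; the data and column names here are made up for illustration.

```python
import pandas as pd  # on a GPU system, `import cudf as pd` is a near drop-in swap

# Hypothetical sensor readings; cuDF executes the same pandas-style
# groupby/aggregate calls on the GPU without code changes.
df = pd.DataFrame({
    "sensor": ["a", "a", "b", "b", "b"],
    "reading": [1.0, 3.0, 2.0, 4.0, 6.0],
})
summary = (
    df.groupby("sensor")["reading"]
      .agg(["mean", "max"])
      .reset_index()
)
print(summary)
```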
NVIDIA® CUDA-X™, built on top of NVIDIA CUDA®, is a collection of libraries, tools, and technologies that delivers dramatically higher performance—compared to CPU-only alternatives—across multiple application domains, from AI to HPC.
Computer vision empowers devices to perceive and understand the world around us. NVIDIA software, which is scalable and tested, enables the end-to-end computer vision workflow—from model development to deployment—for individual developers, the public sector, higher education, research, and enterprises.
NVIDIA Riva is a GPU-accelerated SDK for building speech AI applications that are customized for your use case and deliver real-time performance. Riva offers pretrained speech models in the NVIDIA NGC™ catalog that can be fine-tuned with the NVIDIA TAO Toolkit on a custom dataset, accelerating the development of domain-specific models by 10X.
NVIDIA Omniverse™ is a powerful, multi-GPU, real-time simulation and collaboration platform for 3D production pipelines based on Pixar's Universal Scene Description (USD) and NVIDIA RTX™.
NVIDIA Jetson™ is used by developers to create breakthrough AI products across all industries. The platform includes small, power-efficient developer kits and production modules that offer high-performance acceleration of the NVIDIA CUDA-X software stack.
NVIDIA NeMo is a framework for building, training, and fine-tuning GPU-accelerated speech and natural language understanding (NLU) models with a simple Python interface. With NeMo, you can create new model architectures and train them using mixed-precision compute on Tensor Cores in NVIDIA GPUs through easy-to-use application programming interfaces (APIs).
High-Performance Sensor Architectures at the Edge
Get an outline of a deployment architecture that promotes the GPU to a networked device, bringing data center-grade high-performance, high-data-rate GPU processing to the edge using the NVIDIA data processing unit (DPU).
High-Performance Geospatial Image Processing at the Edge
Learn how real-time geospatial image processing can be accomplished on a size, weight, and power (SWaP)-constrained edge system and how common geospatial processing workloads, such as pansharpening and orthorectification, can be GPU-accelerated and launched from an NVIDIA BlueField®-2 edge system.
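Pansharpening itself is straightforward per-pixel arithmetic that maps well to GPUs. The NumPy sketch below shows the Brovey transform on a toy scene; it illustrates the math only, not the BlueField-2 deployment described in the session.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Sketch of Brovey pansharpening: scale each multispectral band by
    the ratio of the panchromatic band to the band sum. `ms` has shape
    (bands, H, W), already resampled to the pan grid."""
    ratio = pan / (ms.sum(axis=0) + eps)
    return ms * ratio  # broadcasting applies the ratio to every band

# Toy 2x2 scene with three bands and a synthetic pan image.
ms = np.array([[[0.2, 0.4], [0.1, 0.3]],
               [[0.3, 0.2], [0.2, 0.2]],
               [[0.1, 0.2], [0.1, 0.1]]])
pan = np.array([[0.9, 1.0], [0.5, 0.8]])
sharp = brovey_pansharpen(ms, pan)
```

After the transform, the band sum of the sharpened image matches the pan image, which is the defining property of the Brovey method.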
Simulation and Collaboration for Wireless Communications
Explore NVIDIA Omniverse use cases and how an Omniverse-based digital twin can enable communications research in 5G and 6G, from the physical layer to the network layer. You’ll also learn about the future role of Omniverse in wireless networks, beyond research and as part of an operational capability.
Fast Data Preprocessing with DALI, NPP, and nvJPEG
Learn about the latest optimizations in NVIDIA's image- and signal-processing libraries, including NPP, nvJPEG, and DALI, a fast, flexible data loading and augmentation library. This video also shows how to apply these libraries to common data-processing tasks.
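To show the kind of work these libraries accelerate, here is a minimal CPU sketch of a loader pipeline (random crop, random flip, normalize) of the sort DALI executes on the GPU; the image sizes and parameters are illustrative, not DALI's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, crop=24):
    """Illustrative CPU version of a data-loading pipeline: random crop,
    random horizontal flip, then normalization to [-1, 1]."""
    h, w, _ = image.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    patch = image[y:y + crop, x:x + crop]
    if rng.random() < 0.5:            # random horizontal flip
        patch = patch[:, ::-1]
    return (patch.astype(np.float32) / 255.0 - 0.5) / 0.5

# Build a small augmented batch from random synthetic images.
batch = [augment(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8))
         for _ in range(4)]
```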
Speeding up Numerical Computing in C++ with a Python-like Syntax in NVIDIA MatX
Libraries such as CuPy and PyTorch allow developers of interpreted languages to leverage the speed of optimized CUDA libraries. These interpreted languages have many excellent properties, including easy-to-read syntax and automatic memory management. Learn how NVIDIA MatX lets you achieve maximum performance in C++ while still reaping the benefits of interpreted languages.
Boosting Inline Packet Processing Using DPDK and GPUdev with GPUs
The inline processing of network packets using GPUs is a packet-analysis technique useful to a number of different application domains: signal processing, network security, information gathering, input reconstruction, and so on. See how an effective application workflow creates a continuous asynchronous pipeline whose components are coordinated through lockless communication mechanisms.
DeepSig: Deep Learning for Wireless Communications
Communications engineering strives to further improve metrics like throughput and interference robustness while scaling to support the explosion of low-cost wireless devices. These often-competing needs make system complexity intractable. Furthermore, algorithmic and hardware components are designed separately, optimized, and then integrated to form complete systems. Learn how to overcome this barrier.
Accelerated Signal Processing with cuSignal
Learn how NVIDIA offers a plethora of C- and CUDA-accelerated libraries that target common signal processing operations and how cuFFT GPU accelerates the Fast Fourier Transform while cuBLAS, cuSOLVER, and cuSPARSE speed up matrix solvers and decompositions essential to a myriad of relevant algorithms.
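As a minimal illustration of FFT-based processing of the kind cuFFT accelerates, the NumPy sketch below low-pass filters a signal in the frequency domain; swapping NumPy for CuPy (mentioned here as an assumption for illustration) would route the same calls through cuFFT on the GPU.

```python
import numpy as np

# Frequency-domain low-pass filtering of a two-tone signal.
fs = 1000                              # sample rate, Hz
t = np.arange(fs) / fs                 # one second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
spectrum[freqs > 50] = 0               # zero all bins above 50 Hz
filtered = np.fft.irfft(spectrum, n=signal.size)
```

Because both tones fall on exact FFT bins here, the 120 Hz component is removed completely and only the 5 Hz sine remains.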
Scalable Speech and Text Analytics for Large Audio Collections
Learn how NVIDIA Jarvis can help you rapidly gain actionable insight from your speech audio data, especially impactful in large collections for which manual listening and annotation would be slow or intractable.
Designing an Intelligent Assistant for Hands-free Applications
Learn the initial steps to creating an example intelligent assistant (IA) for a hands-free application: performing an upgrade to an aircraft while keeping maintenance notes. You’ll also learn what the enablers are for an IA based on the NVIDIA Riva SDK.
The Latest Research in Speech Synthesis at NVIDIA
Learn about the latest research in speech synthesis at NVIDIA, along with our latest state-of-the-art models for speech enhancement and denoising, generative modeling of speech attributes, and emotional, multi-lingual, and multi-accent text-to-speech synthesis. In addition to showcasing demos, we’ll describe our approach to each task, highlighting the problems involved and how we solved them.
Using NLP in a Performance Assessment Reporting System
Learn about a variety of natural language processing methods and state-of-the-art NLP techniques and how we employ sentiment analysis to identify past contract assessment comments that have potentially been graded incorrectly.
Get Started on NLP and Conversational AI with NVIDIA DLI Courses
Learn about several breakthroughs in conversational AI for building and deploying automatic speech recognition and natural language processing, and check out DLI courses to learn how to quickly create conversational AI and NLP GPU-accelerated applications with modern tools.
Create Speech AI Applications in Multiple Languages with Riva
Learn about NVIDIA’s world-class speech-to-text models for Spanish, German, and Russian in Riva, powering enterprises to deploy speech AI applications globally.
Build Speech AI in Multiple Languages and Train Large Language Models with Riva and NeMo Megatron
Explore the major updates to Riva, an SDK for building speech AI applications, a paid Riva Enterprise offering, and several key updates to NVIDIA NeMo Megatron, a framework for training large language models.
Building Transformer-Based NLP Applications
Learn how to use Transformer-based NLP models for text classification tasks. You’ll also explore how to leverage Transformer-based models for named-entity recognition tasks and how to analyze various model features, constraints, and characteristics to determine which model is best suited for a particular use case.
Accelerated Zero-trust Architectures for the Public Sector
Applying accelerated computing hardware and frameworks to cybersecurity relaxes the constraints that prevent the use of more complex monitoring and detection capabilities. Learn how GPUs and DPUs are used for real-time cyber decision-making and how to leverage them to deploy scalable defensive capabilities built on zero-trust principles within the data center.
Cyber Intrusion Detection Using NLP on Windows Event Logs
Learn how to apply deep learning and natural language processing to Windows event logs for the purpose of detecting cyber attacks, and explore incorporating the existing model into the NVIDIA Morpheus framework, with an aim toward pre-processing data with a DPU.
Cybersecurity: Real-time AI-enabled SOAR for Edge Networks
Watch a demonstration of a solution that combines the Splunk Security Orchestration, Automation, and Response (SOAR) suite with NVIDIA Morpheus running on HPE Edgeline to provide AI-enabled, real-time monitoring and remediation of an edge network. This approach features the ability to inspect real-time IP traffic and node telemetry data with deep learning and machine learning inference results.
Preventing Fraud and Waste in Public Sector Agencies
Explore this HPC analytics solution, now available on NVIDIA-Certified Systems™ as a scalable platform, and see how it integrates NVIDIA's RAPIDS Accelerator for Apache Spark 3.0 to accelerate data pipelines and push the performance boundaries of data and machine learning workflows.
Building a Foundation for Zero-Trust Security with NVIDIA DOCA 1.2
Learn about the new NVIDIA DOCA 1.2 software framework for NVIDIA BlueField, the world’s most advanced DPU. Designed to enable the NVIDIA BlueField ecosystem and developer community, DOCA is the key to unlocking the potential of the DPU by offering services to offload, accelerate, and isolate infrastructure application services from the CPU.
Accelerating Data Center Security with BlueField-2 DPU
DPUs are the new foundation for a comprehensive and innovative security offering. Hyperscale giants and telecom providers have adopted this strategy for building and securing highly efficient cloud data centers. Learn how this strategy has revolutionized the approach to minimize risks and enforce security policies inside the data center.
Supercharging AI-Accelerated Cybersecurity Threat Detection
Cybercrime worldwide is costing more than $1 trillion annually, and data centers face staggering increases in users, data, devices, and apps. See how NVIDIA Morpheus enables cybersecurity developers and independent software vendors to build high-performance pipelines for security workflows with minimal development effort.
Developing Inference Models for Autonomous Marine Navigation
The Mayflower Autonomous Ship (MAS) is an exploratory ship with full autonomous capability to operate in even the most remote areas of the world’s oceans. See how the MarineAI team uses inference models deployed on NVIDIA Jetson edge devices and developed using the DeepStream SDK for not just navigation-hazard object detection and classification, but also novel ocean research.
Measuring AI-Enabled Video Analytics Performance: The Benefits of GPU Acceleration
Explore the results of thorough benchmarking of video analytics performance that compares GPU-accelerated and CPU-only compute platforms. Learn about performance metrics using the DeepStream SDK and NVIDIA TensorRT™ optimizations on both NVIDIA T4 Tensor Core GPUs and Jetson against unaccelerated CPU machines.
AR for First Responders: Seeing Through the Smoke
There's a pressing need for a wearable, autonomous navigation system to aid in both search-and-rescue and evacuation operations. This system must be robust to the real-world dynamics of human motion and the chaotic dynamics of disaster zones. Discover the key innovations powering such a system.
Introducing the Jetson AGX Orin Series and the Jetson Orin NX Series
Learn about the NVIDIA Jetson platform for deploying AI at the edge for advanced robotics and autonomous machines in the fields of manufacturing, logistics, retail, service, agriculture, smart city, and healthcare and life sciences, as well as the key hardware features of the Jetson family and our newest addition to the family, NVIDIA Jetson AGX Orin™.
Supercharge AI-Powered Robotics Prototyping and Edge AI Applications with the Jetson AGX Orin Developer Kit
The NVIDIA Jetson AGX Orin Developer Kit is now available. Learn how the platform is the world’s most powerful, compact, and energy-efficient AI supercomputer for advanced robotics, autonomous machines, and next-generation embedded and edge computing.
Getting the Best Performance on MLPerf Inference 2.0
Models like Megatron 530B are expanding the range of problems AI can address, and what’s needed is a versatile AI platform that can deliver the required performance on a wide variety of models for both training and inference. See how MLPerf is the only industry-standard AI benchmark that tests data center and edge platforms across a half-dozen applications.
Detecting Objects in Point Clouds with NVIDIA CUDA-Pointpillars
A point cloud is a data set of points in a coordinate system. Points contain a wealth of information, including three-dimensional coordinates X, Y, Z; color; classification value; intensity value; and time. Learn about NVIDIA CUDA-accelerated PointPillars model for Jetson developers.
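The core preprocessing idea behind PointPillars can be sketched in a few lines: collapse the z axis and bin points into vertical "pillars" on an x-y grid. The NumPy example below uses a toy cloud and an illustrative grid size, not the actual CUDA-PointPillars implementation.

```python
import numpy as np

# Toy point cloud: columns are x, y, z, intensity.
points = np.array([
    [0.2, 0.1, 0.5, 10.0],
    [0.3, 0.2, 0.4, 12.0],
    [1.6, 0.1, 0.2,  8.0],
    [1.7, 1.8, 0.9,  5.0],
])

# Bin points into pillars on an x-y grid (grid size is illustrative).
pillar_size = 1.0
ix = np.floor(points[:, 0] / pillar_size).astype(int)
iy = np.floor(points[:, 1] / pillar_size).astype(int)
pillar_ids = ix * 100 + iy            # flatten the 2-D cell index to one key

unique_pillars, counts = np.unique(pillar_ids, return_counts=True)
```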
Trimble Explores Acceleration of Autonomous Robot Training with Synthetic Data Generation and NVIDIA Isaac Sim
Trimble needed to tune its machine learning (ML) models to exact indoor environments so Spot could autonomously operate in different indoor settings. Learn how to gain confidence that your robot’s perception capabilities are robust enough for it to perform safely and as planned.
NVIDIA RTX
The NVIDIA RTX platform fuses ray tracing, deep learning, and rasterization to fundamentally transform the creative process for content creators and developers through the NVIDIA Turing™ GPU architecture and support for industry-leading tools and APIs.
NVIDIA CloudXR SDK
NVIDIA CloudXR™ is a solution for streaming virtual reality (VR), augmented reality (AR), and mixed reality (MR) content from any OpenVR XR application on a remote server—cloud, data center, or edge.
New Era of Digital Twins with Omniverse
Learn how, by combining AI, real-time ray tracing, and physics, Omniverse enables a new era of highly realistic digital twins that are used to improve the real world. Also hear about current digital twin efforts and explore their visionary approaches in this new era.
Best Practices and Tools for Training and Simulation
Explore the wide range of GPU-accelerated tools and SDKs that can be leveraged for generating more realistic high-performance synthetic environments. Learn some best practices for getting the most out of the GPU for training and simulation use cases.
Training and Simulation Applications with Project Anywhere
Project Anywhere is a cloud-based demo that allows you to get high-fidelity imagery from any distance, explore high-resolution 3D terrain, and build data in real time from any device. Project Anywhere is deployed on the strength of Cesium 3D Tiles, Microsoft Azure, NVIDIA GPUs, and Unreal Engine.
Best Practices: Using NVIDIA RTX Ray Tracing
Get actionable insights and practical tips for developers working on ray tracing. You’ll get a broad picture of what kinds of solutions lead to performance increases, how to build and manage ray-tracing acceleration structures, and more.
Deploying Real-Time Object Detection Models
Take a look at how the NVIDIA Isaac™ SDK can be used to generate synthetic datasets from simulation and then use this data to fine-tune an object detection deep neural network (DNN) using the NVIDIA Transfer Learning Toolkit (TLT).
Optimizing Video Memory Usage with the NVIDIA Video Codec SDK
This blog demonstrates which decoder configuration parameters impact the video memory usage and how to configure them optimally. Note that this post assumes basic familiarity with the NVDECODE API.
Programs For You
The NVIDIA Developer Program provides the advanced tools and training needed to successfully build applications on all NVIDIA technology platforms. This includes access to hundreds of SDKs, a network of like-minded developers through our community forums, and more.
NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science to solve real-world problems. Powered by GPUs in the cloud, training is available as self-paced, online courses or live, instructor-led workshops.
Accelerate Your Startup
NVIDIA Inception—an acceleration platform for AI, data science, and HPC startups—supports over 7,000 startups worldwide with go-to-market support, expertise, and technology. Startups get access to training through the DLI, preferred pricing on hardware, and invitations to exclusive networking events.
NVIDIA Deep Learning Institute
The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI and accelerated computing to solve real-world problems. Training is available as self-paced, online courses or instructor-led workshops.
Application of AI for Predictive Maintenance
In this online course, you’ll learn how to:
- Use AI-based predictive maintenance to prevent failures and unplanned downtimes
- Identify key challenges around detecting anomalies that can lead to costly breakdowns
- Use time-series data to predict outcomes with XGBoost-based machine learning classification models
- Use a long short-term memory (LSTM)-based model to predict equipment failure
- Use anomaly detection with time-series autoencoders to predict failures when limited failure-example data is available
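As a minimal sketch of the time-series side of this course, the NumPy example below extracts sliding-window features from a synthetic vibration signal with an injected fault and flags windows that drift from the healthy baseline; the window length and threshold are illustrative stand-ins for a trained classifier.

```python
import numpy as np

def rolling_features(series, window=5):
    """Sliding-window mean/std features of the kind fed to a
    classifier for failure prediction (window length is illustrative)."""
    windows = np.lib.stride_tricks.sliding_window_view(series, window)
    return windows.mean(axis=1), windows.std(axis=1)

# Healthy vibration readings with an injected fault at the end.
rng = np.random.default_rng(1)
readings = np.concatenate([rng.normal(0.0, 0.1, 95),
                           rng.normal(2.0, 0.1, 5)])
means, stds = rolling_features(readings)

# Flag windows whose mean drifts far from the healthy baseline.
flags = np.abs(means) > 0.5
```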
Applications of AI for Anomaly Detection
In this online course, you'll learn how to:
- Prepare data and build, train, and evaluate models using XGBoost, autoencoders, and generative adversarial networks (GANs)
- Detect anomalies in datasets with both labeled and unlabeled data
- Classify anomalies into multiple categories regardless of whether the original data was labeled
Building Transformer-Based Natural Language Processing Applications
In this online course, you’ll learn how to:
- Understand how text embeddings have rapidly evolved in NLP, from Word2Vec to recurrent neural network (RNN)-based embeddings to Transformers
- See how Transformer architecture features, especially self-attention, are used to create language models without RNNs
- Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results
- Leverage pretrained, modern NLP models to solve multiple tasks such as text classification, named-entity recognition (NER), and question answering
- Manage inference challenges and deploy refined models for live applications
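The self-attention mechanism at the heart of these models fits in a few lines. The NumPy sketch below implements single-head scaled dot-product attention with random, untrained weight matrices, purely to show the computation.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention; the weight
    matrices here are random stand-ins, not trained parameters."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

Each row of the attention matrix is a probability distribution over the input positions, which is what lets the model mix information across the sequence without recurrence.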
Fundamentals of Accelerated Data Science
In this online course, you'll learn how to:
- Implement GPU-accelerated data preparation and feature extraction using cuDF and Apache Arrow data frames
- Apply a broad spectrum of GPU-accelerated machine learning tasks using XGBoost and a variety of cuML algorithms
- Execute GPU-accelerated, massive-scale graph analytics with cuGraph routines
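As a CPU-side sketch of the kind of routine cuGraph accelerates, the example below runs power-iteration PageRank on a tiny dense adjacency matrix; cuGraph's own API differs, so this shows only the underlying algorithm.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank on a dense adjacency matrix; a CPU
    sketch of the routine cuGraph runs at massive scale."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix (rows with no out-edges stay zero).
    transition = np.divide(adj, out_deg, out=np.zeros_like(adj),
                           where=out_deg > 0)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Tiny directed graph: 0->1, 0->2, 1->2, 2->0.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
ranks = pagerank(adj)
```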
NVIDIA News for the Public Sector
NVIDIA Hopper Architecture in Depth
The NVIDIA H100 Tensor Core GPU, based on the new NVIDIA Hopper™ GPU architecture, is now available. This ninth-generation data center GPU is designed to deliver an order-of-magnitude performance leap for large-scale AI and HPC over the prior-generation NVIDIA A100 Tensor Core GPU. Take a look inside the GPU and explore the groundbreaking features of the Hopper architecture.
New Releases and Capabilities for NVIDIA Omniverse
Learn about the new NVIDIA Omniverse Connectors and asset libraries, along with updated apps and features. Now you can build, extend, and connect 3D tools and platforms to the Omniverse ecosystem more easily than ever before.
Expanding Hybrid-Cloud Support in Virtualized Data Centers
Get started with NVIDIA AI Enterprise on LaunchPad for free and see how NVIDIA AI Enterprise 1.1 is providing production support for container orchestration and Kubernetes cluster management using VMware vSphere with Tanzu 7.0 update 3c. The software suite delivers AI and machine learning workloads to every business in virtual machines, containers, or Kubernetes.
Enhancing Zero-Trust Security with Data
See how leveraging zero-trust principles doesn’t have to mean consigning users to a world where we spend as much time trying to access digital resources as using them. Get the scoop on zero trust and learn how a thoughtful cybersecurity team can structure a zero-trust system that keeps users and data safe while maintaining a seamless user experience.
Sign up for the latest developer news from NVIDIA