Technical Walkthrough

Using the NVIDIA CUDA Stream-Ordered Memory Allocator, Part 2

In Part 1 of this series, we introduced the new API functions cudaMallocAsync and cudaFreeAsync, which enable memory allocation and deallocation to be stream-ordered operations. In this post… 9 MIN READ
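As a quick orientation for readers skimming this listing, here is a minimal sketch of the stream-ordered allocation pattern these two posts cover, assuming CUDA 11.2 or later. Only cudaMallocAsync and cudaFreeAsync come from the posts themselves; the kernel, sizes, and stream setup are illustrative.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative kernel: scales a device array in place.
__global__ void scale(float *data, float factor, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const size_t n = 1 << 20;
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // The allocation and the free are ordered with the other work on `stream`,
    // so the kernel launched into the same stream can use the memory without
    // any extra synchronization.
    float *d_data = nullptr;
    cudaMallocAsync(&d_data, n * sizeof(float), stream);

    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_data, 2.0f, n);

    cudaFreeAsync(d_data, stream);

    // Synchronize only when the host actually needs the work to be complete.
    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);

    printf("done\n");
    return 0;
}
```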
Technical Walkthrough

Using the NVIDIA CUDA Stream-Ordered Memory Allocator, Part 1

This post introduces new API functions that enable memory allocation and deallocation to be stream-ordered operations. 14 MIN READ
Technical Walkthrough

Reducing Acceleration Structure Memory with NVIDIA RTXMU

RTXMU (RTX Memory Utility) combines compaction and suballocation techniques to reduce the memory consumption of acceleration structures in any DXR or Vulkan Ray Tracing application. 11 MIN READ
Technical Walkthrough

Tips: Acceleration Structure Compaction

Learn how to compact the acceleration structure in DXR and what to know before you start implementing. 7 MIN READ
Technical Walkthrough

Managing Memory for Acceleration Structures in DirectX Raytracing

In Microsoft Direct3D, anything that uses memory is considered a resource: textures, vertex buffers, index buffers, render targets, constant buffers… 6 MIN READ
Technical Walkthrough

Making Apache Spark More Concurrent

Apache Spark makes it possible to program entire clusters with implicit data parallelism. With Spark 3.0 and the open source RAPIDS Accelerator for Spark… 7 MIN READ