Andy Adinets

Andy Adinets received his Diploma (“specialist”) degree in Computer Science in 2006 from Lomonosov Moscow State University (MSU), and his Ph.D. in Computer Science (“candidate of physical and mathematical sciences”) in 2009, also from MSU. From November 2012 to March 2015, he worked as a researcher at the NVIDIA Application Lab at the Jülich Supercomputing Centre. Since July 2017, he has been working as an AI Developer Technology engineer at NVIDIA in Munich, Germany. His research interests include GPU programming, algorithm design for many-core architectures, high-performance computing, and machine learning.

Posts by Andy Adinets

Accelerated Computing

CUDA Pro Tip: Optimized Filtering with Warp-Aggregated Atomics

This post introduces warp-aggregated atomics, a useful technique to improve performance when many CUDA threads atomically update a single counter. 14 MIN READ
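The core idea of warp-aggregated atomics is that the active lanes of a warp combine their increments locally, so only one lane issues an atomic per warp instead of one per thread. A minimal device-code sketch of that pattern (an illustrative function, assuming CUDA 9+ synchronized warp intrinsics) looks like this:

```cuda
// Sketch: warp-aggregated increment of a global counter.
// Every participating thread receives a unique index, but only the
// warp's leader lane performs a single atomicAdd on behalf of all
// active lanes, reducing atomic traffic by up to 32x.
__device__ int atomicAggInc(int *ctr) {
    unsigned int active = __activemask();   // lanes currently executing this call
    int leader = __ffs(active) - 1;         // lowest-numbered active lane leads
    int lane = threadIdx.x % warpSize;
    int warp_res;
    if (lane == leader)
        warp_res = atomicAdd(ctr, __popc(active));  // one atomic per warp
    // broadcast the leader's base offset to all active lanes
    warp_res = __shfl_sync(active, warp_res, leader);
    // each lane's offset: count of active lanes with a lower lane id
    return warp_res + __popc(active & ((1u << lane) - 1));
}
```

A typical use is stream compaction: each thread that passes a filter predicate calls `atomicAggInc` on a shared output counter to obtain its write position.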

A CUDA Dynamic Parallelism Case Study: PANDA

Learn how Dynamic Parallelism on NVIDIA GPUs is being used to accelerate particle-physics discoveries at the PANDA experiment, part of the Facility for Antiproton and Ion Research in Europe (FAIR). 11 MIN READ

CUDA Dynamic Parallelism API and Principles

This post is the second in a series on CUDA Dynamic Parallelism. In my first post, I introduced Dynamic Parallelism by using it to compute images of the… 13 MIN READ

Adaptive Parallel Computation with CUDA Dynamic Parallelism

Early CUDA programs had to conform to a flat, bulk parallel programming model. Programs had to perform a sequence of kernel launches, and for best performance… 13 MIN READ