GTC 2020: MAGMA: Accelerating Linear Algebra Through Mixed-Precision and Tensor Cores
Ahmad Abdelfattah, University of Tennessee | Stanimire Tomov, University of Tennessee
The MAGMA library provides GPU-accelerated algorithms for a wide range of linear algebra problems. We'll cover its mixed-precision (MP) algorithms for solving linear systems of equations (Ax=b). Classic MP algorithms use two precisions to accelerate the solution of systems posed in double or double-complex precision. Thanks to the introduction of half precision on NVIDIA GPUs and the high performance of tensor cores, dual-precision MP algorithms can now also accelerate systems in single and single-complex precision, while triple-precision MP algorithms can solve systems in double and double-complex precision. We'll also show how to accelerate the complex precisions using "half-complex" linear algebra kernels, which are not natively supported by the tensor core units.
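To illustrate the idea behind these mixed-precision solvers, here is a minimal NumPy sketch of two-precision iterative refinement. It is not MAGMA code: float32 stands in for the FP16/tensor-core factorization, and the function name and structure are illustrative assumptions. The expensive solve is done in low precision, while the residual is accumulated in full (double) precision to recover the accuracy of a double-precision solve.

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=50):
    """Sketch of two-precision iterative refinement for Ax = b.

    Low precision (float32 here; FP16 on tensor cores in MAGMA) does the
    heavy lifting; double precision accumulates the residual. A real
    implementation would LU-factor A_lo once and reuse the factors.
    """
    A_lo = A.astype(np.float32)
    # Initial solve entirely in low precision.
    x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x  # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction solve, again in low precision.
        d = np.linalg.solve(A_lo, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```

Refinement converges quickly for well-conditioned systems, which is why the tensor-core factorization can run at FP16 speed while the final answer retains double-precision accuracy.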