High-Performance GPU Computing in the Julia Programming Language
Oct 26, 2017

Julia is a high-level programming language for mathematical computing that is as easy to use as Python, but as fast as C. The language was created with performance in mind, and it combines careful language design with a sophisticated LLVM-based compiler.
Julia is already well regarded for programming multicore CPUs and large parallel computing systems, and recent developments make the language well suited for GPU computing too. High-level tools that a large community of applied mathematicians and machine learning programmers can use easily help democratize the performance potential of GPUs.
In a new NVIDIA Developer Blog post, Tim Besard, a contributor to the Julia project from the University of Ghent, demonstrates native GPU programming with CUDAnative.jl, a Julia package that extends the Julia compiler with native PTX code generation capabilities.
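To give a flavor of what this looks like, below is a minimal vector-addition sketch in the style of CUDAnative.jl. The exact launch syntax and the companion packages used here (CUDAdrv.jl and CuArrays.jl) are assumptions based on the API as it stood around the time of this post, and may differ in later versions.

```julia
using CUDAdrv, CUDAnative, CuArrays

# GPU kernel: each thread adds one pair of elements
function kernel_vadd(a, b, c)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    c[i] = a[i] + b[i]
    return
end

len = 512
a, b = rand(Float32, len), rand(Float32, len)

# upload the inputs to the GPU and allocate space for the result
d_a, d_b = CuArray(a), CuArray(b)
d_c = similar(d_a)

# compile kernel_vadd to PTX and launch it on one block of `len` threads
# (launch configuration syntax is version-dependent; this follows the 2017-era form)
@cuda (1, len) kernel_vadd(d_a, d_b, d_c)

# copy the result back to the CPU and check it
c = Array(d_c)
@assert c ≈ a .+ b
```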
Read more >
Related resources
- GTC session: Multi GPU Programming Models for HPC and AI
- GTC session: A Deep Dive into the Latest HPC Software
- NGC Containers: julia