Maluuba, a Canadian artificial intelligence startup, created a system that can read, comprehend, and reason almost as well as humans. Its deep learning-based program, EpiReader, is designed to solve a specific kind of comprehension task: a word is removed from a block of text, and the system determines the missing word.
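The task described is a cloze-style comprehension test. As a rough illustration of the setup (not of EpiReader's actual model), the sketch below builds a cloze question and answers it with a trivial frequency baseline; the function names and the passage are invented for this example.

```python
import re
from collections import Counter

def make_cloze(passage, target):
    # Replace the first occurrence of `target` with a blank token,
    # producing a cloze-style question.
    return passage.replace(target, "_____", 1)

def frequency_baseline(cloze_passage, candidates):
    # A deliberately naive baseline (not EpiReader): guess the
    # candidate word that appears most often in the rest of the text.
    words = re.findall(r"[a-z']+", cloze_passage.lower())
    counts = Counter(words)
    return max(candidates, key=lambda w: counts[w.lower()])

passage = "The cat chased the mouse. The cat was fast. The cat slept."
cloze = make_cloze(passage, "cat")
guess = frequency_baseline(cloze, ["mouse", "cat", "dog"])
```

Real systems such as EpiReader replace the frequency count with a learned model that scores each candidate in context, but the input/output shape of the task is the same.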
Researchers from the University of Massachusetts Amherst and Mount Holyoke College received a four-year grant from the National Science Foundation to analyze images and data on the chemical composition of rocks and dust from NASA's Curiosity rover. The rover has been exploring a crater on Mars since 2012 and sends back large amounts of collected data.
Games often precompute ambient occlusion (AO) or other static lighting and bake the results into vertex or texture data that is later loaded into OpenGL or DirectX shaders. Ray tracing is the core computation of such a baking pipeline. However, writing a production-quality GPU ray tracer from scratch takes a fair amount of time and expertise.
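To make the baking computation concrete, here is a minimal Monte Carlo sketch of ambient occlusion at a single surface point: cast random rays over the hemisphere around the surface normal and record the fraction that escape without hitting an occluder. This is an illustrative toy (sphere occluders only, uniform sampling), not production baking code.

```python
import math
import random

def sample_hemisphere(normal, rng):
    # Rejection-sample a uniform direction on the unit sphere,
    # then flip it into the hemisphere around `normal`.
    while True:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n2 = sum(c * c for c in d)
        if 1e-6 < n2 <= 1.0:
            break
    n = math.sqrt(n2)
    d = tuple(c / n for c in d)
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = tuple(-c for c in d)
    return d

def ray_hits_sphere(origin, direction, center, radius):
    # Standard quadratic ray-sphere intersection test (direction is unit).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    sq = math.sqrt(disc)
    # Accept only intersections in front of the ray origin.
    return (-b - sq) / 2.0 > 1e-4 or (-b + sq) / 2.0 > 1e-4

def ambient_occlusion(point, normal, occluders, samples=256, seed=0):
    # Returns the unoccluded fraction in [0, 1]: 1.0 = fully open sky.
    rng = random.Random(seed)
    unoccluded = 0
    for _ in range(samples):
        d = sample_hemisphere(normal, rng)
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unoccluded += 1
    return unoccluded / samples

occluders = [((0.0, 0.0, 1.0), 0.5)]  # one sphere hovering above the origin
near = ambient_occlusion((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), occluders)
far = ambient_occlusion((5.0, 0.0, 0.0), (0.0, 0.0, 1.0), occluders)
```

A baker runs exactly this kind of loop for every texel or vertex of a scene, which is why a fast GPU ray tracer is the pipeline's workhorse.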
Anne Severt, a PhD student at Forschungszentrum Jülich in Germany, shares how she is using NVIDIA Tesla K80 GPUs and OpenACC with complex geometries to create real-time simulations of smoke propagation to better prepare firefighters for real-life situations, such as predicting how smoke will propagate from underground metro stations over time. To learn more, view Anne's presentation.
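Smoke-propagation solvers of this kind spend most of their time in stencil updates over a grid, which is exactly the loop structure OpenACC offloads to the GPU. The toy below shows one explicit diffusion step on a 2D grid; it is a stand-in for that class of kernel, not the actual Jülich code, and the function name and parameters are invented for illustration.

```python
def diffuse_step(grid, alpha=0.1):
    # One explicit finite-difference diffusion step on a 2D grid:
    # each interior cell moves toward the average of its four
    # neighbors. In a real solver this nested loop is the hot spot
    # that an OpenACC `parallel loop` directive would offload.
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            lap = (grid[i - 1][j] + grid[i + 1][j] +
                   grid[i][j - 1] + grid[i][j + 1] - 4.0 * grid[i][j])
            new[i][j] = grid[i][j] + alpha * lap
    return new

# A 5x5 grid with a single puff of "smoke" in the center.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
out = diffuse_step(grid)
```

One step spreads the central puff to its four neighbors while conserving the total amount, which is the basic behavior the full simulation repeats millions of times per frame.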
Stanford researchers in the Computational Vision and Geometry Lab developed a robot that could soon move autonomously among us with normal human social etiquette, such as deciding right of way on the sidewalk. Using a Tesla K40 GPU and CUDA to train its machine learning models, the robot is able to understand its surroundings.