We love seeing the tweets from developers using NVIDIA GPUs. Here are a few highlights from this week:
Nvidia leading the way on AI infrastructure – "15B-transistor chip for deep learning" https://t.co/IvjwIkwXFh pic.twitter.com/JSmGPqKKeU
— cdixon.eth (@cdixon) April 21, 2016
#DeepLearning SDK from #NVIDIA looks very cool -esp. the #cuDNN and #NCCL framework for multi-GPU communication. https://t.co/JLNOMqhQWj
— Alexander Stojanovic (@stojanovic) April 21, 2016
Spotted this last night in the CuDNN v5 Manual :) pic.twitter.com/lasXEezyOS
— Soumith Chintala (@soumithchintala) April 19, 2016
It takes 14 years to train a radiographer to recognise a cancer.. It takes 3 days to train a machine ! @nvidia @IF_ICL @imperialcollege
— Jakub Wachocki (@JakubWachocki) April 14, 2016
Expect A LOT more deep learning tutorials on the PyImageSearch blog soon… CC: @nvidia #DeepLearning pic.twitter.com/o54ah5if6k
— PyImageSearch (@PyImageSearch) April 16, 2016
One more for scale! Can't believe @nvidia @NVIDIATesla #M4 gets 2.2 TeraFlops on 75W of power. #gpu #machinelearning pic.twitter.com/5TVcpKo32l
— Josh Patterson (@datametrician) April 16, 2016
On Twitter? Follow @GPUComputing and @NVIDIA, then @mention us in your tweet to be featured in next week’s ‘Weekly Social Roundup’.