We love seeing all of the NVIDIA GPU-related tweets – here are some that we came across this week:
Putting the finishing touches on a #KNIME workflow for #deeplearning and #digitalpathology using #Azure and #spark and #nvidia #gpus
— Jon Fuller (@JonathanCFuller) April 3, 2017
@HULL_HPC_VIPER new @nvidia P100 accelerators will now be used intensely for computational language research.
— Hull Uni HPC-VIPER (@HULL_HPC_VIPER) March 27, 2017
I got space only in kitchen to keep my deep learning machine.2 titan x pascal 64 go.azus rog pic.twitter.com/UO04l9fjG0
— ഘടോൽകചൻ (@vu3mmg) April 2, 2017
Started the morning learning about #MXNet and #DeepLearning with @nvidia #GPUs. Pretty cool!
— Emmanuel Tsouris (@EmmanuelTsouris) April 4, 2017
When your neural net is taking way too long to compile because you're using 10 folds. Grr!! Time for TensorFlow GPU #RookieMistake
— DistrictAI (@districtai) April 5, 2017
https://twitter.com/MostafaElzoghbi/status/849995035953291266
All I want from @Apple is an @nvidia GPU to speed up training & testing my @tensorflow models https://t.co/LwYSNYN39Y
— Jared Messenger (@Jared_Mess) April 5, 2017
On Twitter? Follow @GPUComputing and @mention us so we’re able to keep track of what you’re up to.