Mixed Precision Training for NLP and Speech Recognition with OpenSeq2Seq


Nadeem Mohammad, posted Oct 09 2018

The success of neural networks thus far has been built on bigger datasets, better theoretical models, and reduced training time. Sequential models, in particular, stand to benefit even more from these advances.
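One of the post's headline techniques is mixed precision (FP16) training. A minimal sketch of its core idea, loss scaling, is shown below; the variable names and the use of NumPy are illustrative assumptions, not OpenSeq2Seq's actual API. Gradients that are meaningful in FP32 can underflow to zero in FP16, so the loss is scaled up before backpropagation and the scale is divided back out in FP32 before the weight update.

```python
import numpy as np

# Illustrative sketch of loss scaling (not the OpenSeq2Seq implementation).
loss_scale = 1024.0
tiny_grad = 1e-8                                   # fine in FP32, too small for FP16

naive_fp16 = np.float16(tiny_grad)                 # underflows to 0.0 in FP16
scaled_fp16 = np.float16(tiny_grad * loss_scale)   # scaled value survives in FP16
recovered = np.float32(scaled_fp16) / loss_scale   # unscale back in FP32

print(naive_fp16, scaled_fp16, float(recovered))
```

In practice, frameworks pick the scale dynamically, but the round trip above is the essential mechanism that keeps small gradients from vanishing in half precision.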
