
Inception Spotlight: Deepset Collaborates with NVIDIA and AWS on BERT Optimization

Language models are essential for modern NLP, and training a new one from scratch can pay off in specialized domains. NVIDIA Inception member deepset bridges the gap between NLP research and industry: its core product, Haystack, is an open-source framework that enables developers to apply the latest NLP models to semantic search and question answering at scale. Haystack Hub, the company's software-as-a-service (SaaS) platform, is used by developers across industries, including finance, legal, and automotive, to find answers in all kinds of text documents.
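For context, here is a minimal sketch of an extractive question-answering pipeline in Haystack. It is based on the Haystack 1.x Python API; module paths, model names, and parameters vary between releases, so treat it as illustrative rather than canonical.

```python
# Minimal extractive QA sketch with Haystack (1.x-style API; module paths
# and defaults may differ in other releases).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

# Index a few documents in memory.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "Haystack is an open-source framework for semantic search."},
    {"content": "deepset builds NLP tooling such as FARM and Haystack."},
])

# The retriever narrows the candidate set; the reader extracts an answer span.
retriever = TfidfRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

pipeline = ExtractiveQAPipeline(reader, retriever)
result = pipeline.run(
    query="What is Haystack?",
    params={"Retriever": {"top_k": 3}, "Reader": {"top_k": 1}},
)
print(result["answers"][0].answer)
```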

In a collaborative effort with NVIDIA and AWS, deepset trained its language model on NVIDIA V100 GPUs and captured GPU performance profiles with NVIDIA Nsight Systems.
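Nsight Systems can attribute GPU time to the phases of a PyTorch training loop when the code is annotated with NVTX ranges. The sketch below shows the general pattern; the annotations and the `nsys profile` invocation in the comment are standard PyTorch/Nsight usage, not deepset's actual training script.

```python
# Annotate a PyTorch training step with NVTX ranges so that Nsight Systems
# (e.g. `nsys profile -o report python train.py`) can break down GPU time
# per phase. Illustrative only; this is not deepset's training code.
import torch

model = torch.nn.Linear(768, 2).cuda()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(10):
    inputs = torch.randn(32, 768, device="cuda")
    labels = torch.randint(0, 2, (32,), device="cuda")

    torch.cuda.nvtx.range_push("forward")
    loss = loss_fn(model(inputs), labels)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward")
    loss.backward()
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("optimizer_step")
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.nvtx.range_pop()
```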

The collaboration grew out of the partnership between NVIDIA Inception and AWS Activate, an initiative that supports AI startups by giving them access to the benefits of both acceleration programs. NVIDIA Inception startups joining AWS Activate receive business and marketing support, as well as AWS Cloud credits that can be used to access NVIDIA's latest-generation GPUs on Amazon EC2 P3 instances. AWS Activate members working with AI and machine learning are referred to NVIDIA Inception, where they benefit from immediate preferred pricing on NVIDIA GPUs and Deep Learning Institute credits.

As the deepset team describes the starting point: “A considerable amount of manual development is required to create the training data and vocabulary, configure hyperparameters, start and monitor training jobs, and run periodic evaluation of different model checkpoints. In our first training runs, we also found several bugs only after multiple hours of training, resulting in a slow development cycle. In summary, language model training can be a painful job for a developer and easily consumes multiple days of work.”
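As one example of that manual setup, building the WordPiece vocabulary for a BERT-style model is itself a distinct preprocessing step. The sketch below uses the Hugging Face tokenizers library, a common choice for this task; the corpus path and hyperparameters are illustrative placeholders, not deepset's actual configuration.

```python
# Train a WordPiece vocabulary for a BERT-style model from a text corpus.
# Illustrative sketch: "corpus.txt" and the hyperparameters are placeholders.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["corpus.txt"],   # one sentence or document per line
    vocab_size=30_522,      # BERT-base default vocabulary size
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".")   # writes vocab.txt for use in pretraining
```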

“The increased efficiency of training jobs reduces our energy usage and lowers our carbon footprint. By tackling different areas of FARM’s training pipeline, we were able to significantly optimize resource utilization. In the end, we achieved a 3.9x speedup in training time and a 12.8x reduction in training cost, and cut the developer effort required from days to hours.”
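One widely used lever for this kind of training-pipeline optimization on V100 GPUs is automatic mixed precision, which runs most operations in FP16 on the GPU's Tensor Cores while keeping full-precision master weights. The sketch below shows the standard PyTorch pattern; it is a generic illustration of the technique, not deepset's actual FARM modification.

```python
# Standard PyTorch automatic mixed precision (AMP) training step.
# Generic illustration of the technique, not deepset's FARM code.
import torch

model = torch.nn.Linear(768, 2).cuda()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # rescales losses to avoid FP16 underflow

for step in range(10):
    inputs = torch.randn(32, 768, device="cuda")
    labels = torch.randint(0, 2, (32,), device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # run the forward pass in mixed precision
        loss = loss_fn(model(inputs), labels)

    scaler.scale(loss).backward()     # backward on the scaled loss
    scaler.step(optimizer)            # unscales gradients, then steps
    scaler.update()
```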

By collaborating with NVIDIA and AWS, NVIDIA Inception partner deepset achieved a 3.9x speedup and a 12.8x cost reduction for training NLP models, significantly reducing the developer effort required.

Read more about the technologies used in training and their impact on improving BERT training performance.
