To make Siri great, Apple hired several artificial-intelligence experts three years ago to apply deep learning to its voice assistant.
The team began training a neural net to replace the original Siri. “We have the biggest and baddest GPU farm cranking all the time,” says Alex Acero, who heads the speech team.
“The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases,” says Acero. “That’s mostly due to deep learning and the way we have optimized it.”
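For speech recognition, the "error rate" Acero refers to is typically word error rate (WER): the edit distance between the recognized transcript and a reference transcript, divided by the reference length. As a minimal illustration (this is a standard textbook metric, not Apple's internal scoring code), it can be computed with a small dynamic program:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting every reference word
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(r)][len(h)] / len(r)
```

Cutting the error rate "by a factor of two" means, for example, a system that once misrecognized one word in ten now misses only one in twenty.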
Besides Siri, deep learning and neural nets are now found all over Apple's products and services, including fraud detection on the Apple Store, recognizing faces and locations in your photos, and identifying the most useful feedback among thousands of reports from beta testers.
“The typical customer is going to experience deep learning on a day-to-day level that [exemplifies] what you love about an Apple product,” says Phil Schiller, senior vice president of worldwide marketing at Apple. “The most exciting [instances] are so subtle that you don’t even think about it until the third time you see it, and then you stop and say, How is this happening?”
GPUs Help Cut Siri’s Error Rate by Half
Aug 25, 2016
