Researchers from the Karlsruhe Institute of Technology, MIT, and the University of Toronto published MovieQA, a dataset containing 7,702 reasoning questions and answers drawn from 294 movies. The dataset and its accuracy metric provide a well-defined challenge for question-answering machine learning algorithms.
The questions range from simpler 'Who' did 'What' to 'Whom' questions that computer vision alone can solve, to 'Why' and 'How' something happened in the movie, questions that can be answered only by exploiting both the visual information and the dialogue.
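Since systems are compared by a simple accuracy metric, evaluation reduces to counting how often the predicted answer matches the key. As a hypothetical illustration (the exact number of answer choices per question and the evaluation code are assumptions, not taken from this article), scoring a batch of multiple-choice predictions might look like:

```python
def accuracy(predictions, ground_truth):
    """Fraction of questions where the chosen answer index matches the key."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and ground-truth lists must align")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Toy example: a system picks one candidate answer per question;
# 3 of its 4 picks agree with the answer key.
print(accuracy([2, 0, 4, 1], [2, 0, 3, 1]))  # 0.75
```

A single scalar like this makes it straightforward to rank competing systems on a public benchmark.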
MovieQA is unique in that it contains multiple sources of information – full-length movies, plot synopses, subtitles, scripts and DVS (Descriptive Video Service, which narrates movie scenes for the visually impaired).
To scale to large-vocabulary datasets, the researchers relied on a TITAN Black GPU to handle their large volume of training data.
In early 2016, the researchers plan to launch an online benchmark with 15,000 questions and 75,000 answers, which will encourage others to contribute.
Read the research paper >>
GPU-Trained System Understands Movies
Dec 25, 2015
