Are You Smarter Than a Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension

Bibliographic Details
Published in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5376 - 5384
Main Authors: Kembhavi, Aniruddha; Seo, Minjoon; Schwenk, Dustin; Choi, Jonghyun; Farhadi, Ali; Hajishirzi, Hannaneh
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2017
ISSN: 1063-6919
DOI: 10.1109/CVPR.2017.571

Summary: We introduce the task of Multi-Modal Machine Comprehension (M3C), which aims at answering multimodal questions given a context of text, diagrams, and images. We present the Textbook Question Answering (TQA) dataset, which includes 1,076 lessons and 26,260 multi-modal questions taken from middle school science curricula. Our analysis shows that a significant portion of the questions require complex parsing of the text and the diagrams, as well as reasoning, indicating that our dataset is more complex than previous machine comprehension and visual question answering datasets. We extend state-of-the-art methods for textual machine comprehension and visual question answering to the TQA dataset. Our experiments show that these models do not perform well on TQA. The presented dataset opens new challenges for research in question answering and reasoning across multiple modalities.