Understanding action concepts from videos and brain activity through subjects’ consensus

Bibliographic Details
Published in: Scientific Reports, Vol. 12, No. 1, Article 19073 (15 pages)
Main Authors: Cavazza, Jacopo; Ahmed, Waqar; Volpi, Riccardo; Morerio, Pietro; Bossi, Francesco; Willemse, Cesco; Wykowska, Agnieszka; Murino, Vittorio
Format: Journal Article
Language: English
Published: London, Nature Publishing Group UK (Nature Portfolio), 09.11.2022
ISSN: 2045-2322
DOI: 10.1038/s41598-022-23067-2

Summary: In this paper, we investigate brain activity associated with complex visual tasks, showing that electroencephalography (EEG) data can help computer vision reliably recognize actions from the video footage used to stimulate human observers. Notably, we consider not only typical "explicit" video action benchmarks, but also more complex data sequences in which action concepts are only referred to implicitly. To this end, we consider a challenging action recognition benchmark dataset, Moments in Time, whose video sequences do not explicitly visualize actions but only implicitly refer to them (e.g., fireworks in the sky as an extreme example of "flying"). We employ such videos as stimuli and involve a large sample of subjects to collect a high-definition, multi-modal EEG and video dataset designed for understanding action concepts. We discover an agreement among the brain activities of different subjects stimulated by the same video footage. We name this agreement the subjects' consensus, and we design a computational pipeline to transfer knowledge from EEG to video, sharply boosting recognition performance.
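
The record above gives no implementation details of the computational pipeline. Purely as a hypothetical illustration, the Python/PyTorch sketch below shows one plausible way to operationalize the two ingredients named in the summary: an inter-subject "consensus" score, computed here as the mean pairwise cosine similarity between per-subject EEG embeddings of the same clip, and a distillation-style loss that transfers the EEG signal to a video classifier. All function names, tensor shapes, and the particular loss form are assumptions, not the authors' method.

```python
# Hypothetical sketch only (not the paper's actual pipeline): subjects' consensus as
# mean pairwise cosine similarity of per-subject EEG embeddings, plus a consensus-weighted
# EEG-to-video alignment term added to the video classifier's cross-entropy loss.
import torch
import torch.nn.functional as F

def subjects_consensus(eeg_embs: torch.Tensor) -> torch.Tensor:
    """eeg_embs: (n_subjects, d) EEG embeddings of one video's responses.
    Returns the mean pairwise cosine similarity across subjects (scalar)."""
    z = F.normalize(eeg_embs, dim=1)            # unit-norm embedding per subject
    sim = z @ z.t()                             # (n_subjects, n_subjects) cosine similarities
    n = z.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum() # drop self-similarities
    return off_diag / (n * (n - 1))             # average over ordered pairs of distinct subjects

def eeg_to_video_loss(video_emb, eeg_embs, labels, logits, alpha=0.5):
    """Cross-entropy on the video branch plus a consensus-weighted term that pulls the
    video embedding toward the subjects' mean EEG embedding (a distillation-style target)."""
    ce = F.cross_entropy(logits, labels)
    target = F.normalize(eeg_embs.mean(dim=0, keepdim=True), dim=1)   # consensus EEG target
    align = 1.0 - F.cosine_similarity(video_emb, target).mean()
    w = subjects_consensus(eeg_embs).clamp(min=0.0)                   # trust high-agreement clips more
    return ce + alpha * w * align

# Toy usage with random tensors standing in for real EEG/video encoder outputs.
if __name__ == "__main__":
    eeg_embs = torch.randn(10, 128)   # 10 subjects, 128-d EEG embeddings for one clip
    video_emb = torch.randn(1, 128)   # video-branch embedding for the same clip
    logits = torch.randn(1, 339)      # e.g., Moments in Time defines 339 action classes
    labels = torch.tensor([5])
    print("consensus:", subjects_consensus(eeg_embs).item())
    print("loss:", eeg_to_video_loss(video_emb, eeg_embs, labels, logits).item())
```

Weighting the alignment term by the consensus score encodes the intuition suggested by the summary: clips on which different subjects' brain responses agree are treated as more reliable sources of EEG supervision for the video model.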