Multimodal Analysis of User-Generated Multimedia Content

Bibliographic Details
Main Authors: Shah, Rajiv (Author); Zimmermann, Roger (Author)
Format: eBook
Language: English
Published: Cham : Springer International Publishing : Imprint: Springer, 2017.
Series: Socio-affective computing ; 6.
ISBN: 9783319618074
9783319618067
Physical Description: 1 online resource (xxii, 263 pages) : 63 illustrations, 42 illustrations in color

Description
Summary: This book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn how multimodal analysis of UGC can augment semantics and sentics understanding, and it helps to address several multimedia analytics problems arising from social media, such as event detection and summarization, tag recommendation and ranking, soundtrack recommendation, lecture video segmentation, and news video uploading. Readers will discover how the knowledge structures derived from multimodal information benefit efficient multimedia search, retrieval, and recommendation. However, real-world UGC is complex, and extracting semantics and sentics from multimedia content alone is very difficult because suitable concepts may be exhibited in different representations. Moreover, due to the increasing popularity of social media websites and advancements in technology, it is now possible to collect a significant amount of important contextual information (e.g., spatial, temporal, and preferential information). Thus, there is a need to analyze UGC information from multiple modalities to address these problems. A discussion of multimodal analysis is presented, followed by studies on how multimodal information is exploited to address problems with a significant impact on different areas of society (e.g., entertainment, education, and journalism). Specifically, the methods presented exploit multimedia content (e.g., visual content) and associated contextual information (e.g., geographical, temporal, and other sensory data). The reader is introduced to several knowledge bases and fusion techniques used to address these problems. The work includes future directions for several interesting multimedia analytics problems that have the potential to significantly impact society. It is aimed at researchers in the multimedia field who would like to pursue research in the area of multimodal analysis of UGC.
Bibliography: Includes bibliographical references and index.
ISSN: 2509-5706
Access: Full text is available only from IP addresses of computers at Tomas Bata University in Zlín, or via remote access for university employees and students.