Pose-Selective Max Pooling for Measuring Similarity

Bibliographic Details
Main Authors: Xiang, Xiang; Tran, Trac D
Format: Journal Article (preprint)
Language: English
Published: 22.09.2016
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Statistics - Machine Learning
Online Access: https://arxiv.org/abs/1609.07042
DOI: 10.48550/arxiv.1609.07042

Abstract: In this paper, we address two challenges in measuring the similarity of subject identities in practical video-based face recognition: the variation of head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean cannot characterize the pose diversity among frames, we define and preserve the overall pose diversity and closeness within a video. Identity then becomes the only source of variation across videos, since pose varies even within a single video. Instead of simply using all the frames, we select those faces whose pose points are closest to the centroids of the K-means clusters of the pose points. We then represent a video as a bag of frame-wise deep face features, with the number of features reduced from hundreds to K. Since this video representation captures the identity well, we measure the subject similarity between two videos as the maximum correlation among all possible pairs in the two bags of features. On the official 5,000 video pairs of the YouTube Faces dataset for face verification, our algorithm achieves performance comparable to VGG-face, which averages the deep features over all frames. Other vision tasks can also benefit from the generic idea of employing geometric cues to improve the descriptiveness of deep features.
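The pipeline described in the abstract can be sketched in a few lines: K-means over per-frame pose points selects K representative frames, and video similarity is the maximum correlation over all cross pairs of their features. This is a minimal illustration only; the pose estimator and the deep feature extractor (e.g., VGG-face) are assumed external, and all function names here are hypothetical:

```python
import numpy as np

def select_pose_diverse_frames(poses, K, iters=20, seed=0):
    """Cluster per-frame pose points (e.g. yaw/pitch) with K-means and
    return the indices of the frames closest to the K centroids."""
    rng = np.random.default_rng(seed)
    centroids = poses[rng.choice(len(poses), K, replace=False)].astype(float)
    for _ in range(iters):
        # assign each frame's pose point to its nearest centroid
        dist = np.linalg.norm(poses[:, None] - centroids[None], axis=-1)
        labels = dist.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centroids[k] = poses[labels == k].mean(axis=0)
    # keep the one frame nearest each centroid (duplicates merged)
    dist = np.linalg.norm(poses[:, None] - centroids[None], axis=-1)
    return np.unique(dist.argmin(axis=0))

def max_correlation(bag_a, bag_b):
    """Pose-selective max pooling: similarity of two videos is the max
    correlation (cosine) over all cross pairs of frame features."""
    a = bag_a / np.linalg.norm(bag_a, axis=1, keepdims=True)
    b = bag_b / np.linalg.norm(bag_b, axis=1, keepdims=True)
    return float((a @ b.T).max())
```

In use, `select_pose_diverse_frames` would run on estimated pose angles to pick K frames per video, the deep network would be evaluated only on those K frames (rather than hundreds), and `max_correlation` would compare the two resulting feature bags.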
Copyright: http://arxiv.org/licenses/nonexclusive-distrib/1.0
SubjectTerms: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Statistics - Machine Learning