Auto-Calibrated Gaze Estimation Using Human Gaze Patterns

Bibliographic Details
Published in: International Journal of Computer Vision, Vol. 124, No. 2, pp. 223–236
Main Authors: Alnajar, Fares; Gevers, Theo; Valenti, Roberto; Ghebreab, Sennay
Format: Journal Article
Language: English
Published: New York: Springer US, 01.09.2017
ISSN: 0920-5691
EISSN: 1573-1405
DOI: 10.1007/s11263-017-1014-x


Abstract We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method is tested on ten subjects and ten images. The method estimates the gaze points after the viewer has looked at a stimulus for a few seconds, with an average error below 4.5°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering that auto-calibration is done in a flexible setup, without the use of a chin rest, and based on only a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators.
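The core idea in the abstract, transforming a new viewer's uncalibrated gaze points so they match a reference gaze pattern from other viewers, can be illustrated with a least-squares fit. This is a minimal sketch, not the authors' actual algorithm: it assumes point correspondences between the initial gaze points and the reference pattern are already known, and it fits only an axis-aligned linear mapping (per-axis scale and offset).

```python
def fit_axis(raw, ref):
    # Least-squares fit of ref ≈ a * raw + b along one screen axis.
    n = len(raw)
    mean_raw = sum(raw) / n
    mean_ref = sum(ref) / n
    var = sum((x - mean_raw) ** 2 for x in raw)
    cov = sum((x - mean_raw) * (y - mean_ref) for x, y in zip(raw, ref))
    a = cov / var
    b = mean_ref - a * mean_raw
    return a, b

def auto_calibrate(initial_pts, pattern_pts):
    """Return a mapping from uncalibrated gaze points to screen positions,
    fitted so that initial_pts align with the reference pattern_pts.
    Assumes a known one-to-one correspondence between the two point sets."""
    ax, bx = fit_axis([p[0] for p in initial_pts], [q[0] for q in pattern_pts])
    ay, by = fit_axis([p[1] for p in initial_pts], [q[1] for q in pattern_pts])
    return lambda p: (ax * p[0] + bx, ay * p[1] + by)
```

In practice the correspondences are unknown, which is why the paper matches the *topology* of the initial gaze points against gaze patterns collected from other viewers rather than fitting against known targets.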
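The reported error of 4.5° is a visual angle, not a pixel distance. For readers relating on-screen error to degrees, the standard conversion is sketched below; the display density and viewing distance in the example are assumed values, not figures from the paper.

```python
import math

def pixel_error_to_degrees(err_px, px_per_cm, viewing_dist_cm):
    """Visual angle (in degrees) subtended by an on-screen error of
    err_px pixels, seen from viewing_dist_cm centimeters away."""
    err_cm = err_px / px_per_cm
    return math.degrees(2.0 * math.atan(err_cm / (2.0 * viewing_dist_cm)))

# Example (assumed values): a 100-pixel error on a 40 px/cm display
# viewed from 60 cm corresponds to roughly 2.4 degrees of visual angle.
```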
Audience Academic
Authors:
– Alnajar, Fares (F.Alnajar@uva.nl), Informatics Institute, Faculty of Science, University of Amsterdam
– Gevers, Theo, Informatics Institute, Faculty of Science, University of Amsterdam
– Valenti, Roberto, Informatics Institute, Faculty of Science, University of Amsterdam
– Ghebreab, Sennay, Amsterdam University College
ContentType Journal Article
Copyright The Author(s) 2017
COPYRIGHT 2017 Springer
International Journal of Computer Vision is a copyright of Springer, 2017.
Discipline Applied Sciences
Computer Science
EISSN 1573-1405
EndPage 236
GrantInformation: University of Amsterdam
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Eye gaze estimation
Auto-calibration
Calibration free
Language English
License cc-by
OpenAccessLink https://doi.org/10.1007/s11263-017-1014-x
PageCount 14
PublicationDate: 2017-09-01
PublicationPlace: New York
PublicationTitle: International journal of computer vision
PublicationTitleAbbrev: Int J Comput Vis
PublicationYear: 2017
Publisher: Springer US
References
– Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326. doi:10.1126/science.290.5500.2323
– Valenti, R., & Gevers, T. (2012). Accurate eye center location through invariant isocentric patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1785–1798. doi:10.1109/TPAMI.2011.251
– Chen, J., & Ji, Q. (2011). Probabilistic gaze estimation without active personal calibration. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 609–616).
– Tamura, H., Mori, S., & Yamawaki, T. (1978). Textural features corresponding to visual perception. IEEE Transactions on Systems, Man, and Cybernetics, 8, 460–473. doi:10.1109/TSMC.1978.4309999
– Zhu, X., & Ramanan, D. (2012). Face detection, pose estimation, and landmark localization in the wild. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 2879–2886).
– Alnajar, F., Gevers, T., Valenti, R., & Ghebreab, S. (2013). Calibration-free gaze estimation using human gaze patterns. In IEEE international conference on computer vision (ICCV) (pp. 137–144).
– Guestrin, E. D., & Eizenman, M. (2008). Remote point-of-gaze estimation requiring a single-point calibration for applications with infants. In Symposium on eye tracking research and applications (ETRA) (pp. 267–274).
– Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). Learning to predict where humans look. In IEEE international conference on computer vision (ICCV) (pp. 2106–2113).
– Draelos, M., Qiu, Q., Bronstein, A., & Sapiro, G. (2015). Intel RealSense = real low cost gaze. In International conference on image processing (ICIP).
– Geusebroek, J.-M., & Smeulders, A. W. M. (2005). A six-stimulus theory for stochastic texture. International Journal of Computer Vision, 62, 7–16. doi:10.1007/s11263-005-4632-7
– Guestrin, E. D., & Eizenman, M. (2006). General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Transactions on Biomedical Engineering, 53(6), 1124–1133. doi:10.1109/TBME.2005.863952
– Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network: Computation in Neural Systems, 14, 391–412. doi:10.1088/0954-898X_14_3_302
– Smith, K., Ba, S. O., Odobez, J., & Gatica-Perez, D. (2008). Tracking the visual focus of attention for a varying number of wandering people. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(7), 1212–1229. doi:10.1109/TPAMI.2007.70773
– Hansen, D. W., Hansen, J. P., Nielsen, M., Johansen, A. S., & Stegmann, M. B. (2002). Eye typing using Markov and active appearance models. In Sixth IEEE workshop on applications of computer vision (pp. 132–136).
– Uijlings, J. R. R., Van De Sande, K. E. A., Gevers, T., & Smeulders, A. W. M. (2013). Selective search for object recognition. International Journal of Computer Vision, 104, 154–171.
– Wedel, M., & Pieters, R. (2008). A review of eye-tracking research in marketing. Review of Marketing Research. New York: Emerald Group Publishing Limited.
– Lu, F., Sugano, Y., Okabe, T., & Sato, Y. (2011). Inferring human gaze from appearance via adaptive linear regression. In IEEE international conference on computer vision (ICCV) (pp. 153–160).
– Russell, B., Torralba, A., Murphy, K., & Freeman, W. (2005). LabelMe: A database and web-based tool for image annotation. MIT AI Lab Memo AIM-2005-025, MIT CSAIL.
– Scholte, H. S., Ghebreab, S., Waldorp, L., Smeulders, A. W. M., & Lamme, V. A. F. (2009). Brain responses strongly correlate with Weibull image statistics when processing natural images. Journal of Vision, 9, 1–9.
– Harel, J., Koch, C., & Perona, P. (2006). Graph-based visual saliency. In Advances in neural information processing systems (NIPS) (pp. 545–552).
– Sugano, Y., Matsushita, Y., & Sato, Y. (2013). Appearance-based gaze estimation using visual saliency. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2), 329–341. doi:10.1109/TPAMI.2012.101
– Tan, K., Kriegman, D., & Ahuja, N. (2002). Appearance-based eye gaze estimation. In Applications of computer vision (pp. 191–195).
– Tobii Technology: http://www.tobii.com/.
– Sugano, Y., Matsushita, Y., & Sato, Y. (2010). Calibration-free gaze sensing using saliency maps. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 2667–2674).
– Majaranta, P., & Räihä, K.-J. (2002). Twenty years of eye typing: Systems and design issues. In Symposium on eye tracking research and applications (ETRA) (pp. 15–22).
– Hansen, D. W., & Ji, Q. (2010). In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3), 478–500. doi:10.1109/TPAMI.2009.30
– Xiong, C., Huang, L., & Liu, C. (2015). Remote gaze estimation based on 3D face structure and iris centers under natural light. Multimedia Tools and Applications, 75, 1–15.
– Sugano, Y., Matsushita, Y., & Sato, Y. (2014). Learning-by-synthesis for appearance-based 3D gaze estimation. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1821–1828).
– Villanueva, A., Cabeza, R., & Porta, S. (2006). Eye tracking: Pupil orientation geometrical modeling. Image and Vision Computing, 24(7), 663–679. doi:10.1016/j.imavis.2005.06.001
StartPage 223
SubjectTerms Artificial Intelligence
Calibration
Chin
Computer Imaging
Computer Science
Estimators
Image Processing and Computer Vision
Image processing systems
Pattern Recognition
Pattern Recognition and Graphics
Topology
Vision
Title Auto-Calibrated Gaze Estimation Using Human Gaze Patterns
URI https://link.springer.com/article/10.1007/s11263-017-1014-x
https://www.proquest.com/docview/1925068290
https://link.springer.com/content/pdf/10.1007%2Fs11263-017-1014-x.pdf
UnpaywallVersion publishedVersion
Volume 124
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVEBS
  databaseName: EBSCOhost Academic Search Ultimate
  customDbUrl: https://search.ebscohost.com/login.aspx?authtype=ip,shib&custid=s3936755&profile=ehost&defaultdb=asn
  eissn: 1573-1405
  dateEnd: 20241102
  omitProxy: true
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: ABDBF
  dateStart: 20030401
  isFulltext: true
  titleUrlDefault: https://search.ebscohost.com/direct.asp?db=asn
  providerName: EBSCOhost
– providerCode: PRVEBS
  databaseName: Inspec with Full Text
  customDbUrl:
  eissn: 1573-1405
  dateEnd: 20241102
  omitProxy: false
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: ADMLS
  dateStart: 19870101
  isFulltext: true
  titleUrlDefault: https://www.ebsco.com/products/research-databases/inspec-full-text
  providerName: EBSCOhost
– providerCode: PRVLSH
  databaseName: SpringerLink Journals
  customDbUrl:
  mediaType: online
  eissn: 1573-1405
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: AFBBN
  dateStart: 19970101
  isFulltext: true
  providerName: Library Specific Holdings
– providerCode: PRVPQU
  databaseName: PROQUEST
  customDbUrl: http://www.proquest.com/pqcentral?accountid=15518
  eissn: 1573-1405
  dateEnd: 20241102
  omitProxy: true
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: BENPR
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Technology Collection
  customDbUrl:
  eissn: 1573-1405
  dateEnd: 20241102
  omitProxy: true
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: 8FG
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/technologycollection1
  providerName: ProQuest
– providerCode: PRVAVX
  databaseName: SpringerLINK - Czech Republic Consortium
  customDbUrl:
  eissn: 1573-1405
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: AGYKE
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: http://link.springer.com
  providerName: Springer Nature
– providerCode: PRVAVX
  databaseName: SpringerLink Journals (ICM)
  customDbUrl:
  eissn: 1573-1405
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0002823
  issn: 1573-1405
  databaseCode: U2A
  dateStart: 19970101
  isFulltext: true
  titleUrlDefault: http://www.springerlink.com/journals/
  providerName: Springer Nature
link http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV1bS8MwFD54eVAfvIvzMooIghLs2rSxj1U2bziGOtCnkKStL6MbdkP9957Ty5ziBV9ayKVNcpKcLzk3gH2CuMIxnNlGxIxH5oRpZHvMCF83kOQ6MHTfcdP2L7r86sF7KJ1Fky3MF_n9cUYmLqTxIxhFlWUIF2c9SiK5rH823nTx5FBEjcfTkOcHjUqA-d0nPrGgrxvxhER0AeZG6UC9vaheb4LptJZhsUSLVliQdwWm4nQVlkrkaJXrMsOkKjhDlbYGQTga9hmZXmnyBhFZpNZjNXFFF8aKVq4sYOWX-EVeJ3e1mWbr0G01788uWBkngRnuOkOWcFd4OhYuD5TDowQhk1FCKyfgyIBNYuKAI46wRaCQMsZzFQ5WopQRXLl25LsbMJP203gTLMcYL_YELh0leKRcjQBICwIJXBusUAO7GjppSifiFMuiJz_cH9NoSxxt0h7j8rUGh-Mqg8KDxm-F94gekjxTpKT68qRGWSYv725liGczPPv4Db8GB2WhpI8_x3YVlgTYBXJm9ankTkVXWa7NTCKm9WyfBMg1OKpoPZH9c9uOxtPh755s_evb2zDv0ATNldd2YGb4PIp3Ee0MdR2mT1rndZgNzx-vm_g-bbY7t_V89uOz64SY1m13wsd3qdz31A
linkProvider Springer Nature
linkToHtml http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwtR3LTtwwcEThQDm0pQ91C20j1KpSkdWs48TkgKptu2iXxwpRkLi5tpNwWWUXsivg5_i2ziT2slQqnLj6Ecee8Tw8L4BPJOJKbgULrcyZyOwWM8j2mJWJaSPITWrpveNgkPROxO5pfLoANz4WhtwqPU2sCXU2svRG_g0lkThMyOz3fXzOqGoUWVd9CQ3tSitk23WKMRfYsZdfX6IKV233fyG8P3O-0z3-2WOuygCzIuITVohIxiaXkUg1F1mBAofV0mieCmRftrB5KpALhzLVuC8bR3qLR4XWVgodhVkS4XefwJLA-aj8Lf3oDg6PZrwAFZqmmD0qaXGStr1dtQ7ea3OyoSKXoHq57OoOZ_yXP8wZaldgeVqO9fWlHg7neOHOC3jmhNig02DdKizk5Ut47gTawJGLCpt8zQjf9grSznQyYhQRZihJRRaQt1HQRULTxFAGtQ9DUNsWmr7DOgNoWb2Gk0c52DewWI7K_C0E3No4jyXeaC1FpiODcpmRJLsIY3FCC0J_dMq63OZUYmOobrMy02krPG1yahPqqgVfZ1PGTWKP-wZvEDwUJcwoySPnTE-rSvV_H6kOqoyokiXtpAVf3KBihIvjfzUBDrgFyrF1Z-S6h6tyJKNStwjegk0P67nu___b5gwdHt7Ju_uX_gjLveODfbXfH-ytwVNO2Fk71K3D4uRimr9HCWxiPjg0D-DPY9-sv5RKOyg
linkToPdf http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwtR1db9Mw8FSKBOyBwQZatw4ixIS0ylrqOPHygFDFVlYG0wRM6puxnWQvVdotrdb9tf067pK465AYT3v1Rxz7zvfh-wJ4TyKu5FYw38qUicTuM4Nsj1kZmS6C3MSW3ju-n0RHZ-LrMBw24MbFwpBbpaOJJaFOxpbeyPdQEgn9iMx-e1ntFnF60P80uWBUQYosra6cRoUix-n1FapvxcfBAcJ6h_P-4a_PR6yuMMCsCPiUZSKQoUllIGLNRZKhsGG1NJrHAlmXzWwaC-TAvow17smGgd7nQaa1lUIHfhIF-N1H8FhSFneKUu9_WXABVGWqMvaonoVR3HUW1TJsr8vJeor8gSrlsvkdnvg3Z1gy0a7A01k-0ddXejRa4oL9F_C8Fl-9XoVvL6GR5muwWouyXk0oCmxy1SJc2zrEvdl0zCgWzFB6isQjPyPvEElMFT3pld4LXmlVqPpOy9yfefEKzh7kWF9DMx_n6QZ43NowDSXeZS1FogODEpmRJLUIY3FCC3x3dMrWWc2puMZI3eZjptNWeNrkzibUvAW7iymTKqXHfYPfETwUpcrICenO9awo1ODnD9VDZRGVsagbteBDPSgb4-L4X1VoA26BsmvdGdl2cFU1sSjULWq3oONgvdT973_rLNDh_zvZvH_pt_AE75P6Njg53oJnnJCz9KRrQ3N6OUu3UfSamjcljnvw-6Ev1R_XJDjC
linkToUnpaywall http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwpV3dT9swED-V8jDxAGwDUQRTNE2aNOSSxnZMHivUCiYNoW2V4MmynYQHqrQiifj467nLByuVAKG9xufEH-fzz7m7nwG-EcRVgRPMdyphInZHzOK2x5wK7QCn3EaO_nf8OgtPJuLnhbzowKjNhami3VuXZJ3TQCxNWXE4j9MFr35OqS8UCaQY3TbL7vpYvgKroURI3oXVydn58LLi2cPjkQyjmjdVcYYHCtl6N6sUuqX3PNuflq30grt0DT6U2dzc35rpdGFHGm_UkSN5RWRIgSjX_bKwffewRPP4353dhPUGs3rDWsk-QifJPsFGg1-9xjrk-Ki9IqJ99hmiYVnMGCWAWeKkiD0KLvJGaFfqlEmvClnwKldCXXZeEX5m-RZMxqO_xyesua2BOcGDgqWCK2kTxUVkAhGnCNycUdYEkUAY4FKXRALRjK8ig_rhJDdHAU-NcUoY7sch34ZuNsuSHfAC52QiFS5go0RsuEUYZhVBFWEdVuiB386Rdg2VOd2oMdX_SJhpxDSOGMWwCX3Xgx9PVeY1j8drwl9p4jXxY2QUgHNlyjzXp39-6yGeEPEEFg7CHnxvhNIZfhzbVeczYBeIUuuZ5F6rQLqxELlGZC39kNzYPThodWCh-OW2HTzp3ds92X2X9B50i5sy2UeYVdgvzSJ6BLiPHJk
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Auto-Calibrated+Gaze+Estimation+Using+Human+Gaze+Patterns&rft.jtitle=International+journal+of+computer+vision&rft.au=Alnajar%2C+Fares&rft.au=Gevers%2C+Theo&rft.au=Valenti%2C+Roberto&rft.au=Ghebreab%2C+Sennay&rft.date=2017-09-01&rft.pub=Springer&rft.issn=0920-5691&rft.volume=124&rft.issue=2&rft.spage=223&rft_id=info:doi/10.1007%2Fs11263-017-1014-x&rft.externalDBID=ISR&rft.externalDocID=A499747616
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=0920-5691&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=0920-5691&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=0920-5691&client=summon