Face attribute analysis from structured light: an end-to-end approach
| Published in | Multimedia Tools and Applications, Vol. 82, No. 7, pp. 10471–10490 |
|---|---|
| Main Authors | Thamizharasan, Vikas; Das, Abhijit; Battaglino, Daniele; Bremond, Francois; Dantcheva, Antitza |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 01.03.2023 (Springer Nature B.V.; Springer Verlag) |
| ISSN | 1380-7501 (print); 1573-7721 (electronic) |
| DOI | 10.1007/s11042-022-13224-0 |
| Abstract | In this work we explore the use of structured-light imaging for face analysis. Towards this and due to lack of a publicly available structured-light face dataset, we (a) firstly generate a synthetic structured-light face dataset constructed based on the RGB-dataset London Face and the RGB-D dataset Bosphorus 3D Face. We then (b) propose a conditional adversarial network for depth map estimation from generated synthetic data. Associated quantitative and qualitative results suggest the efficiency of the proposed depth estimation technique. Further, we (c) study the estimation of gender and age directly from (i) structured-light, (ii) binarized structured-light, as well as (iii) estimated depth maps from structured-light. In this context we (d) study the impact of different subject-to-camera distances, as well as pose-variations. Finally, we (e) validate the proposed gender and age models that we train on synthetic data on a small set of real data, which we acquire. While these are early results, our findings clearly indicate the suitability of structured-light based approaches in facial analysis. |
|---|---|
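The abstract names a conditional adversarial network for depth-map estimation from structured-light input, but this record does not state the training objective. A standard conditional-GAN objective with an added L1 reconstruction term (a pix2pix-style formulation; the notation below, with generator G, discriminator D, structured-light image x, target depth map y, and noise z, is an assumption rather than the paper's stated loss) would read:

```latex
\min_G \max_D \;
  \mathbb{E}_{x,y}\big[\log D(x,y)\big]
  + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x,z))\big)\big]
  + \lambda\,\mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_1\big]
```

The L1 term anchors the generated depth map to the ground-truth geometry, while the adversarial term sharpens fine structure that a pure reconstruction loss tends to blur.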
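The abstract lists binarized structured-light as one of the input variants, but the binarization procedure is not given in this record. A minimal local-mean thresholding sketch (the function name `binarize_pattern` and the block size are illustrative assumptions, not the paper's method):

```python
import numpy as np

def binarize_pattern(img, block=16):
    """Binarize a grayscale structured-light frame by block-wise mean thresholding.

    Each pixel is compared against the mean of its local block, turning the
    projected fringe pattern into a 0/1 image. (Illustrative sketch only.)
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            # Pixels brighter than the local mean become 1, the rest stay 0.
            out[y:y + block, x:x + block] = (patch > patch.mean()).astype(np.uint8)
    return out
```

A single global threshold would also work for evenly lit frames; the block-wise mean makes the sketch somewhat robust to the illumination falloff that varying subject-to-camera distance introduces.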
| Authors | Vikas Thamizharasan (Inria Sophia Antipolis - Méditerranée); Abhijit Das (Inria Sophia Antipolis - Méditerranée, BITS; abhijit.das@inria.fr; ORCID 0000-0002-6793-0582); Daniele Battaglino (Blu Manta, Sophia Antipolis); Francois Bremond (Inria Sophia Antipolis - Méditerranée); Antitza Dantcheva (Inria Sophia Antipolis - Méditerranée) |
| Open Access Link | https://hal.science/hal-04391848 (HAL) |
| Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Distributed under a Creative Commons Attribution 4.0 International License. |
| Discipline | Engineering; Computer Science |
| Funding | Inria; Blu Manta |
| Open Access | Yes |
| Peer Reviewed | Yes |
| Keywords | Soft biometrics; Age estimation; Gender estimation; Depth imagery; Structured light; C-GAN; IRDP |
| License | Creative Commons Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0) |
| ORCID | 0000-0002-6793-0582; 0000-0003-2988-2142 |
| Journal Subtitle | An International Journal |
Springer, pp 740–756 – reference: Bleyer M, Breiteneder C (2013) Stereo matching - state-of-the-art and research challenges . In: Advanced Topics in Computer Vision. Springer, pp 143–179 – reference: Ren W, Yang J, Deng S, Wipf D, Cao X, Tong X (2019) Face video deblurring using 3d facial priors. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 9388–9397 – reference: Vondrick C, Pirsiavash H, Torralba A (2016) Generating videos with scene dynamics. In: Advances in neural information processing systems, pp 613–621 – reference: Shekhawat HS, Rathor HS (2020) Impacts of change in facial features on age estimation and face identification: a review. In: Somani AK, Shekhawat RS, Mundra A, Srivastava S, Verma VK (eds) Smart systems and iot: innovations in computing, pp 801–812. Springer, Singapore – reference: Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3431–3440 – reference: Tornow M, Grasshoff M, Nguyen N, Al-Hamadi A, Michaelis B (2012) Fast computation of dense and reliable depth maps from stereo images. In: Solari F, Chessa M, Sabatini SP (eds) Machine Vision, chap. 3. IntechOpen, Rijeka. https://doi.org/10.5772/34976 – reference: XieJCPunCMChronological age estimation under the guidance of age-related facial attributesIEEE Trans Inf Forensics Secur20191492500251110.1109/TIFS.2019.2902823 – reference: Zhu JY, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks – reference: Taigman Y, Polyak A, Wolf L (2016) Unsupervised cross-domain image generation. arXiv:1611.02200 – reference: Chowdhury A, Ghosh S, Singh R, Vatsa M (2016) RGB-D face recognition via learning-based reconstruction. 
IEEE 8th international conference on biometrics theory, applications and systems (BTAS), pp 1–7 – ident: 13224_CR42 – ident: 13224_CR49 doi: 10.1109/3DV.2016.56 – ident: 13224_CR61 doi: 10.5772/34976 – ident: 13224_CR23 – volume: 11 start-page: 441 issue: 3 year: 2016 ident: 13224_CR12 publication-title: IEEE Trans Inf Forensics Secur doi: 10.1109/TIFS.2015.2480381 – ident: 13224_CR35 doi: 10.1109/ICCV.2017.365 – ident: 13224_CR51 doi: 10.1145/3341161.3343525 – ident: 13224_CR59 – volume: 363 start-page: 375 year: 2019 ident: 13224_CR6 publication-title: Neurocomputing doi: 10.1016/j.neucom.2019.07.047 – ident: 13224_CR8 – volume: 12 start-page: 719 issue: 3 year: 2017 ident: 13224_CR11 publication-title: IEEE Trans Inf Forensics Secur doi: 10.1109/TIFS.2016.2632070 – ident: 13224_CR34 doi: 10.1109/CVPRW.2015.7301352 – ident: 13224_CR10 doi: 10.1109/ICB2018.2018.00031 – ident: 13224_CR40 doi: 10.1109/CVPR.2016.614 – ident: 13224_CR55 doi: 10.1007/978-3-540-89991-4_6 – ident: 13224_CR46 – ident: 13224_CR48 doi: 10.1109/ICCV.2019.00948 – ident: 13224_CR20 doi: 10.1007/978-3-319-46484-8_45 – ident: 13224_CR39 doi: 10.1109/CVPR.2015.7298965 – volume: 20 start-page: 413 year: 2014 ident: 13224_CR7 publication-title: IEEE Trans Vis Comput Graph doi: 10.1109/TVCG.2013.249 – volume: 35 start-page: 2270 year: 2013 ident: 13224_CR16 publication-title: IEEE Trans Pattern Anal Mach Intell doi: 10.1109/TPAMI.2013.48 – volume: 124 start-page: 100 year: 2019 ident: 13224_CR52 publication-title: Pattern Recogn Lett doi: 10.1016/j.patrec.2017.10.024 – ident: 13224_CR22 doi: 10.1109/CVPR.2017.699 – ident: 13224_CR37 – ident: 13224_CR13 doi: 10.1007/978-3-030-11009-3_35 – ident: 13224_CR68 doi: 10.1109/ICCV.2017.244 – ident: 13224_CR5 – ident: 13224_CR29 doi: 10.1007/978-3-030-58548-8_44 – ident: 13224_CR33 – ident: 13224_CR65 doi: 10.1007/978-3-030-01237-3_48 – ident: 13224_CR9 doi: 10.1109/BTAS.2016.7791199 – ident: 13224_CR14 doi: 10.6084/m9.figshare.5047666.v3 – ident: 13224_CR2 
doi: 10.1109/TPAMI.2017.2753232 – ident: 13224_CR60 – ident: 13224_CR47 doi: 10.1155/2019/3547416 – ident: 13224_CR17 doi: 10.1109/ICCV.1999.790383 – ident: 13224_CR32 doi: 10.1109/ICB2018.2018.00029 – ident: 13224_CR19 – ident: 13224_CR24 doi: 10.1109/CVPR.2014.81 – ident: 13224_CR36 doi: 10.1109/CVPR.2015.7299152 – ident: 13224_CR26 doi: 10.1007/978-1-4471-4658-2 – ident: 13224_CR57 doi: 10.1007/978-981-13-8406-6_75 – volume: 28 start-page: 24:1 year: 2009 ident: 13224_CR3 publication-title: ACM Trans Grap – ident: 13224_CR45 doi: 10.1109/3DV.2018.00073 – ident: 13224_CR21 doi: 10.1109/CVPR.2012.6248074 – ident: 13224_CR27 doi: 10.1007/978-3-319-54181-5_14 – ident: 13224_CR1 doi: 10.1109/ACCESS.2019.2962010 – ident: 13224_CR53 doi: 10.1007/978-3-319-46454-1_2 – volume: 14 start-page: 2500 issue: 9 year: 2019 ident: 13224_CR63 publication-title: IEEE Trans Inf Forensics Secur doi: 10.1109/TIFS.2019.2902823 – ident: 13224_CR64 doi: 10.1109/CVPR.2018.00412 – ident: 13224_CR28 doi: 10.1109/CVPR.2017.757 – ident: 13224_CR18 – ident: 13224_CR66 – ident: 13224_CR25 doi: 10.1109/CVPR.2015.7299105 – ident: 13224_CR41 – ident: 13224_CR15 doi: 10.1109/CVPR.2009.5206848 – ident: 13224_CR54 doi: 10.1109/CVPR.2016.587 – ident: 13224_CR38 doi: 10.1109/ICCV.2015.425 – ident: 13224_CR44 doi: 10.1109/CVPR.2016.278 – ident: 13224_CR30 doi: 10.1109/CVPR.2017.632 – ident: 13224_CR4 doi: 10.1007/978-1-4471-5520-1_6 – ident: 13224_CR31 – ident: 13224_CR67 doi: 10.1007/978-3-319-46454-1_36 – ident: 13224_CR56 doi: 10.1109/ICCV.2017.175 – ident: 13224_CR50 doi: 10.1109/ICCV.2003.1238384 – ident: 13224_CR62 – ident: 13224_CR58 doi: 10.1007/978-3-642-33715-4_54 – ident: 13224_CR43 doi: 10.1109/CVPR.2018.00013 |
| StartPage | 10471 |
| SubjectTerms | Computer Communication Networks Computer Science Data Structures and Information Theory Datasets Light Multimedia Information Systems Qualitative analysis Special Purpose and Application-Based Systems |
| Title | Face attribute analysis from structured light: an end-to-end approach |
| URI | https://link.springer.com/article/10.1007/s11042-022-13224-0 https://www.proquest.com/docview/2781403520 https://hal.science/hal-04391848 |
| UnpaywallVersion | submittedVersion |
| Volume | 82 |
| Authors | Thamizharasan, Vikas; Das, Abhijit; Battaglino, Daniele; Bremond, Francois |