Robust LSTM-Autoencoders for Face De-Occlusion in the Wild

Bibliographic Details
Published in IEEE Transactions on Image Processing, Vol. 27, No. 2, pp. 778-790
Main Authors Zhao, Fang; Feng, Jiashi; Zhao, Jian; Yang, Wenhan; Yan, Shuicheng
Format Journal Article
Language English
Published United States, IEEE, 01.02.2018
Online Access Get full text
ISSN 1057-7149
EISSN 1941-0042
DOI 10.1109/TIP.2017.2771408


Abstract Face recognition techniques have developed significantly in recent years. However, recognizing faces with partial occlusion remains challenging for existing face recognizers, a capability heavily desired in real-world applications such as surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them work well only under constrained conditions, such as when all faces come from a pre-defined closed set of subjects. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aim at occlusion-robust face encoding and recurrent occlusion removal, respectively. The first one, named the multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation; occlusion-robustness is achieved because occlusion influences only some of the patches. Receiving the representation learned by the encoder, the LSTM decoder with a dual-channel architecture reconstructs the overall face and detects occlusion simultaneously; by virtue of the LSTM, the decoder breaks the task of face de-occlusion down into restoring the occluded part step by step. Moreover, to minimize identity information loss and guarantee face recognition accuracy over recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real data sets of faces with occlusion clearly demonstrate the effectiveness of the proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides a significantly larger performance gain than other de-occlusion methods in promoting recognition performance over partially occluded faces.
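The abstract describes a sequential encode/decode scheme: an encoder LSTM reads multi-scale facial patches one by one, and a dual-channel decoder LSTM iteratively emits a face update and an occlusion mask. The paper's actual model and training are not reproduced here; the following is only a toy NumPy sketch of that recurrent structure, with all class names, dimensions, and the mask-blending rule invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal untrained LSTM cell (Hochreiter & Schmidhuber, 1997)."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.normal(0.0, 0.1, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)  # input, forget, output gates + candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

class ToyRLA:
    """Patch-sequence encoder plus dual-channel recurrent decoder (sketch)."""
    def __init__(self, patch_dim, hid_dim, face_dim):
        self.enc = LSTMCell(patch_dim, hid_dim)
        self.dec = LSTMCell(face_dim, hid_dim)
        # Dual output heads: reconstructed content and occlusion-mask logits.
        self.W_face = rng.normal(0.0, 0.1, (face_dim, hid_dim))
        self.W_mask = rng.normal(0.0, 0.1, (face_dim, hid_dim))
        self.hid_dim = hid_dim

    def encode(self, patches):
        # Read patches sequentially (e.g. coarse to fine scales);
        # occluded patches corrupt only some of the steps.
        h = c = np.zeros(self.hid_dim)
        for p in patches:
            h, c = self.enc.step(p, h, c)
        return h, c

    def decode(self, h, c, occluded_face, n_steps=3):
        # Remove occlusion step by step: at each step, detect where
        # occlusion remains (mask) and blend in reconstructed content.
        face = occluded_face.copy()
        mask = np.zeros_like(face)
        for _ in range(n_steps):
            h, c = self.dec.step(face, h, c)
            mask = sigmoid(self.W_mask @ h)    # per-pixel occlusion estimate
            update = np.tanh(self.W_face @ h)  # candidate restored content
            face = mask * update + (1.0 - mask) * face
        return face, mask
```

With random weights this only demonstrates the data flow, not restoration quality; training (reconstruction loss plus the identity-preserving adversarial objective the abstract mentions) is deliberately omitted.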
CODEN IIPRE4
IsPeerReviewed true
IsScholarly true
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0002-6772-8042
PMID 29757731
PageCount 13
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
Publisher IEEE
SubjectTerms Decoding
Face
Face de-occlusion
Face recognition
Image reconstruction
Image restoration
Logic gates
long short-term memory
Robustness
URI https://ieeexplore.ieee.org/document/8101544
https://www.ncbi.nlm.nih.gov/pubmed/29757731
https://www.proquest.com/docview/2039295546
Volume 27