TAPE: Task-Agnostic Prior Embedding for Image Restoration

Bibliographic Details
Published in Computer Vision - ECCV 2022, Vol. 13678, pp. 447-464
Main Authors Liu, Lin; Xie, Lingxi; Zhang, Xiaopeng; Yuan, Shanxin; Chen, Xiangyu; Zhou, Wengang; Li, Houqiang; Tian, Qi
Format Book Chapter
Language English
Published Switzerland: Springer Nature Switzerland, 2022
Series Lecture Notes in Computer Science
ISBN 9783031197963; 3031197968
ISSN 0302-9743; 1611-3349
DOI 10.1007/978-3-031-19797-0_26


Abstract Learning a generalized prior for natural image restoration is an important yet challenging task. Early methods mostly relied on handcrafted priors such as normalized sparsity, $\ell_0$ gradients, and dark channel priors. Recently, deep neural networks have been used to learn various image priors, but they are not guaranteed to generalize. In this paper, we propose a novel approach that embeds a task-agnostic prior into a transformer. Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of two stages, namely task-agnostic pre-training and task-specific fine-tuning: the first stage embeds prior knowledge about natural images into the transformer, and the second stage extracts that knowledge to assist downstream image restoration. Experiments on various types of degradation validate the effectiveness of TAPE. Image restoration performance in terms of PSNR is improved by as much as 1.45 dB, even outperforming task-specific algorithms. More importantly, TAPE shows the ability to disentangle generalized image priors from degraded images, which gives it favorable transfer ability to unknown downstream tasks.
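The abstract's two-stage recipe (task-agnostic pre-training of a shared prior module on mixed degradations, then task-specific fine-tuning with the prior frozen) can be illustrated in miniature. The following toy Python sketch is NOT the authors' transformer; every name in it (`PriorModule`, `pretrain`, `finetune`, `restore`, the toy `degrade` function) is hypothetical, and a single learned scalar stands in for the embedded prior:

```python
# Toy sketch of TAPE's two-stage training recipe (illustrative only):
# stage 1 fits a shared "prior" on MANY degradation types (task-agnostic);
# stage 2 freezes that prior and tunes only a task-specific head.
import random

random.seed(0)

def degrade(img, kind):
    """Toy degradations on a flat pixel list: zero-mean 'noise' or a crude 'blur'."""
    if kind == "noise":
        return [x + random.uniform(-0.1, 0.1) for x in img]
    return [x * 0.8 for x in img]  # multiplicative stand-in for blur/low-light

class PriorModule:
    """Stand-in for the transformer branch holding the natural-image prior:
    a single scalar bias learned to pull pixels back toward clean statistics."""
    def __init__(self):
        self.bias = 0.0
        self.frozen = False

def restore(prior, head_gain, degraded):
    """'Restoration network': task-specific gain plus the shared prior bias."""
    return [head_gain * x + prior.bias for x in degraded]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pretrain(prior, clean, steps=200, lr=0.05):
    """Stage 1 (task-agnostic): train the prior across mixed degradations."""
    for _ in range(steps):
        degraded = degrade(clean, random.choice(["noise", "blur"]))
        eps = 1e-4  # central-difference gradient of MSE w.r.t. the prior bias
        prior.bias += eps
        up = mse(restore(prior, 1.0, degraded), clean)
        prior.bias -= 2 * eps
        down = mse(restore(prior, 1.0, degraded), clean)
        prior.bias += eps  # restore original value before the update
        prior.bias -= lr * (up - down) / (2 * eps)
    prior.frozen = True

def finetune(prior, clean, kind, steps=200, lr=0.05):
    """Stage 2 (task-specific): prior frozen, only the head gain is trained."""
    assert prior.frozen
    gain = 1.0
    for _ in range(steps):
        degraded = degrade(clean, kind)
        eps = 1e-4
        up = mse(restore(prior, gain + eps, degraded), clean)
        down = mse(restore(prior, gain - eps, degraded), clean)
        gain -= lr * (up - down) / (2 * eps)
    return gain

clean = [0.2, 0.5, 0.8, 0.4]
prior = PriorModule()
pretrain(prior, clean)                 # stage 1: task-agnostic
gain = finetune(prior, clean, "blur")  # stage 2: task-specific
out = restore(prior, gain, degrade(clean, "blur"))
print(mse(out, clean) < mse(degrade(clean, "blur"), clean))
```

The design point the sketch mirrors is the split of parameters: the prior is shared and trained once across degradations, while each downstream task only adapts its own lightweight head, which is what gives the prior its transferability.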
Author Email tian.qi1@huawei.com (Tian, Qi)
ContentType Book Chapter
Copyright The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
DEWEY 006.37
Discipline Applied Sciences; Computer Science
EISBN 9783031197970; 3031197976
EISSN 1611-3349
Editor Avidan, Shai; Cissé, Moustapha; Farinella, Giovanni Maria; Brostow, Gabriel; Hassner, Tal
IsPeerReviewed true
IsScholarly true
LCCallNum TA1634
Notes Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-19797-0_26.
OCLC 1350794174
PageCount 18
PublicationDate 2022-11-03
PublicationPlace Switzerland (Cham)
PublicationSubtitle 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVIII
RelatedPersons Goos, Gerhard; Hartmanis, Juris; Bertino, Elisa; Gao, Wen; Steffen, Bernhard; Yung, Moti
ORCID Steffen, Bernhard 0000-0001-9619-1558; Yung, Moti 0000-0003-0848-0873
URI http://link.springer.com/10.1007/978-3-031-19797-0_26
Volume 13678