A robust framework combined saliency detection and image recognition for garbage classification

Bibliographic Details
Published in Waste management (Elmsford) Vol. 140; pp. 193-203
Main Authors Qin, Jiongming, Wang, Cong, Ran, Xu, Yang, Shaohua, Chen, Bin
Format Journal Article
Language English
Published United States Elsevier Ltd 01.03.2022
Subjects
Online Access Get full text
ISSN 0956-053X
EISSN 1879-2456
DOI 10.1016/j.wasman.2021.11.027

Abstract •A robust framework combining saliency detection and image classification is proposed.•The smallest rectangle containing the saliency region is used for segmentation.•Data fusion is used to generate synthetic images to improve robustness. Using deep learning to solve garbage classification has become a hot topic in computer vision. The most widely used garbage dataset, Trashnet, only contains garbage images with a white board as the background. Previous studies based on Trashnet focus on using different networks to achieve higher classification accuracy without considering the complex backgrounds that might be encountered in practical applications. To solve this problem, we propose a framework that combines saliency detection and image classification to improve generalization performance and robustness. A saliency network, Salinet, is adopted to obtain the garbage target area. Then, the smallest rectangle containing this area is created and used to segment the garbage. A classification network, Inception V3, is used to identify the segmented garbage image. Images of the original Trashnet are fused with complex backgrounds from other saliency detection datasets. The fused and original Trashnet are used together for training to improve robustness to noise and complex backgrounds. Compared with image classification networks and classic object detection algorithms, the proposed framework improves accuracy by 0.50%-15.79% on testing sets fused with complex backgrounds. In addition, the proposed framework achieves the best performance, with a gain of 4.80% in accuracy, on the collected real-world dataset. These comparisons show that our framework is more robust for garbage classification against complex backgrounds. The method can be applied to smart trash cans to achieve automatic garbage classification.
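The pipeline described above (saliency detection, a smallest rectangle around the salient region, cropping, then classification with Inception V3) can be sketched in Python as follows. This is a minimal illustration rather than the authors' code: the saliency model salinet, the fine-tuned classifier, the class_names list, and the 0.5 mask threshold are assumptions standing in for components the paper does not publish.

    import numpy as np
    import torch
    from PIL import Image
    from torchvision import transforms

    def smallest_rect(mask: np.ndarray, thresh: float = 0.5):
        """Smallest rectangle (left, top, right, bottom) containing all pixels
        whose saliency exceeds `thresh` (the threshold is a placeholder)."""
        ys, xs = np.where(mask > thresh)
        if xs.size == 0:  # no salient pixels: fall back to the full image
            return 0, 0, mask.shape[1], mask.shape[0]
        return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

    @torch.no_grad()
    def classify_garbage(image: Image.Image, salinet, classifier, class_names):
        """Saliency -> smallest rectangle -> crop -> classification.
        `salinet` is assumed to map a PIL image to an HxW saliency map in [0, 1];
        `classifier` is assumed to be an Inception V3 fine-tuned on Trashnet, in eval mode."""
        saliency = salinet(image)                  # HxW numpy array (assumed interface)
        crop = image.crop(smallest_rect(saliency)).convert("RGB")

        preprocess = transforms.Compose([
            transforms.Resize((299, 299)),         # Inception V3 input size
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
        logits = classifier(preprocess(crop).unsqueeze(0))
        return class_names[logits.argmax(dim=1).item()]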
AbstractList Using deep learning to solve garbage classification has become a hot topic in computer vision. The most widely used garbage dataset, Trashnet, only contains garbage images with a white board as the background. Previous studies based on Trashnet focus on using different networks to achieve higher classification accuracy without considering the complex backgrounds that might be encountered in practical applications. To solve this problem, we propose a framework that combines saliency detection and image classification to improve generalization performance and robustness. A saliency network, Salinet, is adopted to obtain the garbage target area. Then, the smallest rectangle containing this area is created and used to segment the garbage. A classification network, Inception V3, is used to identify the segmented garbage image. Images of the original Trashnet are fused with complex backgrounds from other saliency detection datasets. The fused and original Trashnet are used together for training to improve robustness to noise and complex backgrounds. Compared with image classification networks and classic object detection algorithms, the proposed framework improves accuracy by 0.50%-15.79% on testing sets fused with complex backgrounds. In addition, the proposed framework achieves the best performance, with a gain of 4.80% in accuracy, on the collected real-world dataset. These comparisons show that our framework is more robust for garbage classification against complex backgrounds. The method can be applied to smart trash cans to achieve automatic garbage classification.
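The data-fusion step, which pastes Trashnet objects photographed on a white board onto complex backgrounds drawn from saliency detection datasets, could be synthesized along the lines of the sketch below. The near-white board heuristic and the threshold value are assumptions for illustration; the paper's exact fusion procedure may differ.

    import numpy as np
    from PIL import Image

    def fuse_with_background(trashnet_img: Image.Image,
                             background: Image.Image,
                             white_thresh: int = 235) -> Image.Image:
        """Place a Trashnet object onto a complex background to synthesize
        a training image. Pixels close to white are treated as the board
        and replaced; `white_thresh` is a placeholder, not a paper value."""
        bg = np.asarray(background.resize(trashnet_img.size).convert("RGB"))
        fg = np.asarray(trashnet_img.convert("RGB"))

        is_board = np.all(fg > white_thresh, axis=-1, keepdims=True)  # rough board mask
        fused = np.where(is_board, bg, fg)
        return Image.fromarray(fused.astype(np.uint8))

Training on both the original and the fused images, as the abstract describes, then amounts to adding the synthesized images to the training set alongside the originals.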
Author Wang, Cong
Qin, Jiongming
Yang, Shaohua
Chen, Bin
Ran, Xu
Author_xml – sequence: 1
  givenname: Jiongming
  surname: Qin
  fullname: Qin, Jiongming
– sequence: 2
  givenname: Cong
  surname: Wang
  fullname: Wang, Cong
– sequence: 3
  givenname: Xu
  surname: Ran
  fullname: Ran, Xu
– sequence: 4
  givenname: Shaohua
  surname: Yang
  fullname: Yang, Shaohua
– sequence: 5
  givenname: Bin
  surname: Chen
  fullname: Chen, Bin
  email: chenbin121@swu.edu.cn
BackLink https://www.ncbi.nlm.nih.gov/pubmed/34836728 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_3390_su17051902
crossref_primary_10_1007_s11356_024_33233_w
crossref_primary_10_3390_s22197455
crossref_primary_10_2478_sbeef_2024_0010
crossref_primary_10_3389_fenvs_2023_1228732
crossref_primary_10_3390_s24092821
crossref_primary_10_1002_adc2_195
crossref_primary_10_1016_j_wasman_2023_02_014
crossref_primary_10_1117_1_JEI_32_4_043017
crossref_primary_10_1109_TAI_2023_3250207
crossref_primary_10_1007_s11554_025_01655_5
crossref_primary_10_1007_s11042_024_18474_8
crossref_primary_10_1088_1402_4896_ad76e8
crossref_primary_10_1016_j_iot_2023_100987
crossref_primary_10_1016_j_wasman_2024_01_047
crossref_primary_10_1007_s11356_023_28639_x
crossref_primary_10_1088_2515_7620_ad3db7
crossref_primary_10_4018_JOEUC_351242
Cites_doi 10.1007/s00371-013-0867-4
10.1109/TPAMI.2011.130
10.1109/TPAMI.2012.120
10.1109/ACCESS.2020.2995681
10.1109/TPAMI.2016.2577031
10.1109/TPAMI.2016.2644615
10.1007/978-3-642-28658-2_85
10.1109/TII.2017.2786778
10.1016/j.promfg.2019.05.086
10.1109/CVPR.2013.407
10.1016/j.wasman.2017.09.019
10.1111/exsy.12343
10.1109/ACCESS.2019.2959033
10.1016/0031-3203(86)90030-0
10.1109/TPAMI.2015.2465960
10.1109/CVPR.2007.383047
10.1109/TIP.2019.2919937
10.1109/CVPR.2011.5995344
10.1016/j.wasman.2020.04.041
10.1109/BigData.2018.8622212
10.1016/j.compag.2018.02.024
10.1109/TIP.2016.2602079
10.1109/CVPR.2014.43
10.1109/ITOEC.2018.8740751
ContentType Journal Article
Copyright 2021 Elsevier Ltd
Copyright © 2021 Elsevier Ltd. All rights reserved.
Copyright_xml – notice: 2021 Elsevier Ltd
– notice: Copyright © 2021 Elsevier Ltd. All rights reserved.
DBID AAYXX
CITATION
CGR
CUY
CVF
ECM
EIF
NPM
7X8
7S9
L.6
DOI 10.1016/j.wasman.2021.11.027
DatabaseName CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
MEDLINE - Academic
AGRICOLA
AGRICOLA - Academic
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
MEDLINE - Academic
AGRICOLA
AGRICOLA - Academic
DatabaseTitleList AGRICOLA

MEDLINE
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: EIF
  name: MEDLINE
  url: https://proxy.k.utb.cz/login?url=https://www.webofscience.com/wos/medline/basic-search
  sourceTypes: Index Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Chemistry
EISSN 1879-2456
EndPage 203
ExternalDocumentID 34836728
10_1016_j_wasman_2021_11_027
S0956053X21006152
Genre Journal Article
GroupedDBID ---
--K
--M
-~X
.DC
.~1
0R~
123
1B1
1RT
1~.
4.4
457
4G.
5VS
71M
8P~
9JM
9JN
AABNK
AACTN
AAEDT
AAEDW
AAIAV
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAXUO
ABFYP
ABJNI
ABLST
ABMAC
ABQEM
ABQYD
ABYKQ
ACDAQ
ACGFS
ACLVX
ACRLP
ACSBN
ADBBV
ADEZE
AEBSH
AEKER
AENEX
AFKWA
AFTJW
AFXIZ
AGHFR
AGUBO
AGYEJ
AHEUO
AHHHB
AIEXJ
AIKHN
AITUG
AJOXV
AKIFW
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
ATOGT
AXJTR
BKOJK
BLECG
BLXMC
CS3
EBS
EFJIC
EFLBG
EO8
EO9
EP2
EP3
F5P
FDB
FIRID
FNPLU
FYGXN
G-Q
GBLVA
IHE
IMUCA
J1W
KCYFY
KOM
LY9
M41
MO0
N9A
O-L
O9-
OAUVE
OZT
P-8
P-9
P2P
PC.
Q38
ROL
SDF
SDG
SES
SPC
SPCBC
SSE
SSJ
SSZ
T5K
Y6R
~02
~G-
1~5
29R
53G
7-5
AAHBH
AAQXK
AATTM
AAXKI
AAYWO
AAYXX
ABEFU
ABFNM
ABWVN
ABXDB
ACLOT
ACRPL
ACVFH
ADCNI
ADMUD
ADNMO
AEGFY
AEIPS
AEUPX
AFJKZ
AFPUW
AGQPQ
AIGII
AIIUN
AKBMS
AKRWK
AKYEP
ANKPU
APXCP
ASPBG
AVWKF
AZFZN
CITATION
EFKBS
EJD
FEDTE
FGOYB
G-2
HMC
HVGLF
HZ~
R2-
RPZ
SEN
SEW
TAE
WUQ
~HD
CGR
CUY
CVF
ECM
EIF
NPM
SSH
7X8
7S9
L.6
ID FETCH-LOGICAL-c395t-a4d6da5170e2f2546f28a7ecdb1ffa1041f75a3d4f94264a8103e1154afdaeb03
IEDL.DBID .~1
ISSN 0956-053X
1879-2456
IngestDate Thu Oct 02 07:05:12 EDT 2025
Sun Sep 28 04:59:54 EDT 2025
Thu Apr 03 07:03:58 EDT 2025
Sat Oct 25 04:50:53 EDT 2025
Thu Apr 24 23:09:51 EDT 2025
Fri Feb 23 02:43:47 EST 2024
IsPeerReviewed true
IsScholarly true
Keywords Garbage classification
Image segmentation
Data fusion
Saliency detection
Language English
License Copyright © 2021 Elsevier Ltd. All rights reserved.
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c395t-a4d6da5170e2f2546f28a7ecdb1ffa1041f75a3d4f94264a8103e1154afdaeb03
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 23
PMID 34836728
PQID 2604019187
PQPubID 23479
PageCount 11
ParticipantIDs proquest_miscellaneous_2636460138
proquest_miscellaneous_2604019187
pubmed_primary_34836728
crossref_citationtrail_10_1016_j_wasman_2021_11_027
crossref_primary_10_1016_j_wasman_2021_11_027
elsevier_sciencedirect_doi_10_1016_j_wasman_2021_11_027
PublicationCentury 2000
PublicationDate 2022-03-01
2022-03-00
2022-Mar-01
20220301
PublicationDateYYYYMMDD 2022-03-01
PublicationDate_xml – month: 03
  year: 2022
  text: 2022-03-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle Waste management (Elmsford)
PublicationTitleAlternate Waste Manag
PublicationYear 2022
Publisher Elsevier Ltd
Publisher_xml – name: Elsevier Ltd
References Zhang, Zhang, Dai, Harandi, Hartley (b0195) 2018
Liu, T., Sun, J., Zheng, N., N., Tang, X., Shum, H., Y., 2007. Learning to detect a salient object. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Minneapolis, MN, USA, pp. 1-8.
Soni, Kandasamy (b0145) 2017
Jiang, Wang, Yuan, Wu, Zheng, Li (b0060) 2013
Jha, Pan, Elahi, Patel (b0070) 2019; 36
Cheng, M., M., Zhang, G., X., Mitra, N., J., Huang, X., Hu, S., M., 2011. Global contrast based salient region detection. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Providence, RI, USA, pp. 409-416.
Shi, Yan, Xu, Jia (b0140) 2016; 38
Gundupalli, Hait, Thakur (b0050) 2017; 70
Yan, Xu, Shi, Jia (b0180) 2013
Ahmad, K., Khan, K., Fuqaha, A., A., 2020. Intelligent Fusion of Deep Features for Improved Waste Classification. IEEE ACCESS. 8, 96495-96504.
Zhou, Bai, Zhang, Zhao, Mei (b0205) 2020
Kittler, Illingworth (b0075) 1986; 19
Adedeji, Wang (b0020) 2019; 35
Wang, Lu, Wang, Feng, Wang, Yin, Ruan (b0160) 2017
Achanta, Shaji, Smith, Lucchi, Fua, Süsstrunk (b0005) 2012; 34
Hu, Z., Cao, Z., Shi, J., 2012. Application of Pattern Recognizing Technique to Automatic Isolating Garbage Can. In: Advances in Electronic Commerce, Web Application and Communication. pp. 543-548.
Li, Y., Hou, X., Koch, C., Rehg, J., M., Yuille, A., L., 2014. The secrets of salient object segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Columbus, OH, USA, pp. 280-287.
Szegedy, Vanhoucke, Ioffe, Shlens (b0135) 2016
Yang, M., Thung, G., 2016. Classification of Trash for Recyclability Status. Mach. Learn., Stanford, CA, USA, Project Rep. CS229.
Gonzalez, R., C., Woods, R., E., 2007. Digital Image Processing, third ed. Knoxville, TN, USA: Gatesmark Publishing, pp. 742-745.
Perazzi, Krahenbuhl, Pritch, Hornung (b0120) 2012
Redmon, J., Farhadi, A., 2018. YOLOv3: An Incremental Improvement. ArXiv preprint arXiv: 1804. 02767v1.
Jha, S., K., Ahmad, Z., 2018. Soil microbial dynamics prediction using machine learning regression methods. Comput. Electron. Agric. 147, 158-165.
Zhou, Nie, Adeli, Yin, Lian, Shen (b0200) 2020; 29
Zuiderveld (b0190) 1994
Long, Shelhamer, Darrell (b0105) 2015
Krizhevsky, Sutskever, Hinton (b0080) 2012
Kingma, D., P., Ba, J., 2014. Adam: A method for stochastic optimization. ArXiv preprint arXiv: 1412.6980.
Ren, He, Girshick, Sun (b0125) 2017; 39
Yang, C., Zhang, L., Lu, H., Ruan, X., Yang, M., H., 2013. Saliency Detection via Graph-Based Manifold Ranking. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Portland, OR, USA, pp. 3166-3173.
Vo, Hoang Son, Vo, Le (b0155) 2019; 7
Aral, R., A., Keskin, S., Kaya, M., Haciomeroglu, M., 2018. Classification of TrashNet Dataset Based on Deep Learning Models. In: 2018 IEEE International Conference on Big Data (Big Data 2018). pp. 2058-2062.
Badrinarayanan, Kendall, Cipolla (b0030) 2017; 39
Nowakowski, Pamula (b0115) 2020; 109
O'Toole, Karimian, Peyton (b0150) 2018; 14
Xu, Qi, Diao (b0170) 2020
Wang, B., Zhou, W., Shen, S., 2018. Garbage Classification and Environmental Monitoring based on Internet of Things. In: 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC 2018). pp. 1762-1766.
Alpert, Galun, Brandt, Basri (b0010) 2012; 34
Li, Yu (b0110) 2016; 25
Cheng, Mitra, Huang, Hu (b0040) 2014; 30
Kim, Park, Roh, Shin (b0090) 2020
10.1016/j.wasman.2021.11.027_b0055
10.1016/j.wasman.2021.11.027_b0175
Li (10.1016/j.wasman.2021.11.027_b0110) 2016; 25
10.1016/j.wasman.2021.11.027_b0130
Xu (10.1016/j.wasman.2021.11.027_b0170) 2020
10.1016/j.wasman.2021.11.027_b0095
Kittler (10.1016/j.wasman.2021.11.027_b0075) 1986; 19
Kim (10.1016/j.wasman.2021.11.027_b0090) 2020
Alpert (10.1016/j.wasman.2021.11.027_b0010) 2012; 34
Zhang (10.1016/j.wasman.2021.11.027_b0195) 2018
Ren (10.1016/j.wasman.2021.11.027_b0125) 2017; 39
Zhou (10.1016/j.wasman.2021.11.027_b0205) 2020
Shi (10.1016/j.wasman.2021.11.027_b0140) 2016; 38
Cheng (10.1016/j.wasman.2021.11.027_b0040) 2014; 30
Szegedy (10.1016/j.wasman.2021.11.027_b0135) 2016
Soni (10.1016/j.wasman.2021.11.027_b0145) 2017
Nowakowski (10.1016/j.wasman.2021.11.027_b0115) 2020; 109
10.1016/j.wasman.2021.11.027_b0025
10.1016/j.wasman.2021.11.027_b0045
10.1016/j.wasman.2021.11.027_b0100
10.1016/j.wasman.2021.11.027_b0165
10.1016/j.wasman.2021.11.027_b0065
Yan (10.1016/j.wasman.2021.11.027_b0180) 2013
Perazzi (10.1016/j.wasman.2021.11.027_b0120) 2012
10.1016/j.wasman.2021.11.027_b0185
10.1016/j.wasman.2021.11.027_b0085
Wang (10.1016/j.wasman.2021.11.027_b0160) 2017
Zuiderveld (10.1016/j.wasman.2021.11.027_b0190) 1994
Achanta (10.1016/j.wasman.2021.11.027_b0005) 2012; 34
Vo (10.1016/j.wasman.2021.11.027_b0155) 2019; 7
Zhou (10.1016/j.wasman.2021.11.027_b0200) 2020; 29
Jha (10.1016/j.wasman.2021.11.027_b0070) 2019; 36
O'Toole (10.1016/j.wasman.2021.11.027_b0150) 2018; 14
Badrinarayanan (10.1016/j.wasman.2021.11.027_b0030) 2017; 39
Gundupalli (10.1016/j.wasman.2021.11.027_b0050) 2017; 70
Adedeji (10.1016/j.wasman.2021.11.027_b0020) 2019; 35
Long (10.1016/j.wasman.2021.11.027_b0105) 2015
Krizhevsky (10.1016/j.wasman.2021.11.027_b0080) 2012
10.1016/j.wasman.2021.11.027_b0015
10.1016/j.wasman.2021.11.027_b0035
Jiang (10.1016/j.wasman.2021.11.027_b0060) 2013
References_xml – start-page: 2083
  year: 2013
  end-page: 2090
  ident: b0060
  article-title: Salient object detection: A discriminative regional feature integration approach
– volume: 39
  start-page: 2481
  year: 2017
  end-page: 2495
  ident: b0030
  article-title: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 36
  start-page: e12343
  year: 2019
  ident: b0070
  article-title: A comprehensive search for expert classification methods in disease diagnosis and prediction
  publication-title: Expert Syst.
– start-page: 11771
  year: 2020
  end-page: 11780
  ident: b0205
  article-title: Look-into-Object: Self-supervised Structure Modeling for Object Recognition
  publication-title: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
– year: 2012
  ident: b0080
  article-title: ImageNet Classification with Deep Convolutional Neural Networks
  publication-title: 2012 Conference and Workshop on Neural Information Processing Systems (NIPS)
– start-page: 136
  year: 2017
  end-page: 145
  ident: b0160
  article-title: Learning to detect salient objects with image-level supervision
  publication-title: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– reference: Hu, Z., Cao, Z., Shi, J., 2012. Application of Pattern Recognizing Technique to Automatic Isolating Garbage Can. In: Advances in Electronic Commerce, Web Application and Communication. pp. 543-548.
– reference: Liu, T., Sun, J., Zheng, N., N., Tang, X., Shum, H., Y., 2007. Learning to detect a salient object. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Minneapolis, MN, USA, pp. 1-8.
– volume: 30
  start-page: 443
  year: 2014
  end-page: 453
  ident: b0040
  article-title: Salientshape: group saliency in image collections
  publication-title: The Visual Computer.
– reference: Aral, R., A., Keskin, S., Kaya, M., Haciomeroglu, M., 2018. Classification of TrashNet Dataset Based on Deep Learning Models. In: 2018 IEEE International Conference on Big Data (Big Data 2018). pp. 2058-2062.
– volume: 34
  start-page: 315
  year: 2012
  end-page: 327
  ident: b0010
  article-title: Image segmentation by probabilistic bottom-up aggregation and cue integration
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 19
  start-page: 41
  year: 1986
  end-page: 47
  ident: b0075
  article-title: Minimum error thresholding
  publication-title: Pattern Recognition.
– start-page: 194
  year: 2017
  end-page: 206
  ident: b0145
  article-title: Smart Garbage Bin System - A Comprehensive Survey
  publication-title: Smart Secure Systems-IoT and Analytics Perspective, 2017 and International Conference on Intelligent Information Technologies, International Conference on
– reference: Yang, M., Thung, G., 2016. Classification of Trash for Recyclability Status. Mach. Learn., Stanford, CA, USA, Project Rep. CS229.
– reference: Ahmad, K., Khan, K., Fuqaha, A., A., 2020. Intelligent Fusion of Deep Features for Improved Waste Classification. IEEE ACCESS. 8, 96495-96504.
– start-page: 2818
  year: 2016
  end-page: 2826
  ident: b0135
  article-title: Rethinking the Inception Architecture for Computer Vision
– volume: 29
  start-page: 461
  year: 2020
  end-page: 475
  ident: b0200
  article-title: High-Resolution Encoder–Decoder Networks for Low-Contrast Medical Image Segmentation
  publication-title: IEEE Trans. Image Process.
– volume: 25
  start-page: 5012
  year: 2016
  end-page: 5024
  ident: b0110
  article-title: Visual Saliency Detection Based on Multiscale Deep CNN Features
  publication-title: IEEE Trans. Image Process.
– volume: 7
  start-page: 178631
  year: 2019
  end-page: 178639
  ident: b0155
  article-title: A Novel Framework for Trash Classification Using Deep Transfer Learning
  publication-title: IEEE ACCESS.
– year: 2020
  ident: b0170
  article-title: Research on Waste Classification and Identification by Transfer Learning and Lightweight Neural Network
  publication-title: Preprint.
– reference: Gonzalez, R., C., Woods, R., E., 2007. Digital Image Processing, third ed. Knoxville, TN, USA: Gatesmark Publishing, pp. 742-745.
– reference: Kingma, D., P., Ba, J., 2014. Adam: A method for stochastic optimization. ArXiv preprint arXiv: 1412.6980.
– start-page: 474
  year: 1994
  end-page: 485
  ident: b0190
  article-title: Graphics Gems
– start-page: 5620
  year: 2020
  end-page: 5629
  ident: b0090
  article-title: GroupFace: Learning Latent Groups and Constructing Group-Based Representations for Face Recognition
  publication-title: 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
– reference: Wang, B., Zhou, W., Shen, S., 2018. Garbage Classification and Environmental Monitoring based on Internet of Things. In: 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC 2018). pp. 1762-1766.
– volume: 109
  start-page: 1
  year: 2020
  end-page: 9
  ident: b0115
  article-title: Application of deep learning object classifier to improve e-waste collection planning
  publication-title: Waste Manag.
– reference: Cheng, M., M., Zhang, G., X., Mitra, N., J., Huang, X., Hu, S., M., 2011. Global contrast based salient region detection. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Providence, RI, USA, pp. 409-416.
– volume: 70
  start-page: 13
  year: 2017
  end-page: 21
  ident: b0050
  article-title: Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging
  publication-title: Waste Manag.
– start-page: 733
  year: 2012
  end-page: 740
  ident: b0120
  article-title: Saliency filters: Contrast based filtering for salient region detection
  publication-title: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
– volume: 14
  start-page: 3477
  year: 2018
  end-page: 3485
  ident: b0150
  article-title: Classification of Nonferrous Metals Using Magnetic Induction Spectroscopy
  publication-title: IEEE Trans. Indus. Inform.
– reference: Li, Y., Hou, X., Koch, C., Rehg, J., M., Yuille, A., L., 2014. The secrets of salient object segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Columbus, OH, USA, pp. 280-287.
– start-page: 9029
  year: 2018
  end-page: 9038
  ident: b0195
  article-title: Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective
– reference: Redmon, J., Farhadi, A., 2018. YOLOv3: An Incremental Improvement. ArXiv preprint arXiv: 1804. 02767v1.
– volume: 34
  start-page: 2274
  year: 2012
  end-page: 2282
  ident: b0005
  article-title: SLIC Superpixels Compared to State-of-the-Art Superpixel Methods
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: Jha, S., K., Ahmad, Z., 2018. Soil microbial dynamics prediction using machine learning regression methods. Comput. Electron. Agric. 147, 158-165.
– volume: 38
  start-page: 717
  year: 2016
  end-page: 729
  ident: b0140
  article-title: Hierarchical Image Saliency Detection on Extended CSSD
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: Yang, C., Zhang, L., Lu, H., Ruan, X., Yang, M., H., 2013. Saliency Detection via Graph-Based Manifold Ranking. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Presented at the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Portland, OR, USA, pp. 3166-3173.
– volume: 35
  start-page: 607
  year: 2019
  end-page: 612
  ident: b0020
  article-title: Intelligent Waste Classification System Using Deep Learning Convolutional Neural Network
  publication-title: Procedia Manufacturing.
– start-page: 3434
  year: 2015
  end-page: 3440
  ident: b0105
  article-title: Fully convolutional networks for semantic segmentation
– volume: 39
  start-page: 1137
  year: 2017
  end-page: 1149
  ident: b0125
  article-title: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 1155
  year: 2013
  end-page: 1162
  ident: b0180
  publication-title: Hierarchical saliency detection
– volume: 30
  start-page: 443
  issue: 4
  year: 2014
  ident: 10.1016/j.wasman.2021.11.027_b0040
  article-title: Salientshape: group saliency in image collections
  publication-title: The Visual Computer.
  doi: 10.1007/s00371-013-0867-4
– volume: 34
  start-page: 315
  issue: 2
  year: 2012
  ident: 10.1016/j.wasman.2021.11.027_b0010
  article-title: Image segmentation by probabilistic bottom-up aggregation and cue integration
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2011.130
– volume: 34
  start-page: 2274
  issue: 11
  year: 2012
  ident: 10.1016/j.wasman.2021.11.027_b0005
  article-title: SLIC Superpixels Compared to State-of-the-Art Superpixel Methods
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2012.120
– ident: 10.1016/j.wasman.2021.11.027_b0025
  doi: 10.1109/ACCESS.2020.2995681
– start-page: 3434
  year: 2015
  ident: 10.1016/j.wasman.2021.11.027_b0105
– start-page: 733
  year: 2012
  ident: 10.1016/j.wasman.2021.11.027_b0120
  article-title: Saliency filters: Contrast based filtering for salient region detection
  publication-title: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE
– start-page: 136
  year: 2017
  ident: 10.1016/j.wasman.2021.11.027_b0160
  article-title: Learning to detect salient objects with image-level supervision
– volume: 39
  start-page: 1137
  issue: 6
  year: 2017
  ident: 10.1016/j.wasman.2021.11.027_b0125
  article-title: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2577031
– volume: 39
  start-page: 2481
  issue: 12
  year: 2017
  ident: 10.1016/j.wasman.2021.11.027_b0030
  article-title: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2644615
– ident: 10.1016/j.wasman.2021.11.027_b0055
  doi: 10.1007/978-3-642-28658-2_85
– start-page: 9029
  year: 2018
  ident: 10.1016/j.wasman.2021.11.027_b0195
– start-page: 5620
  year: 2020
  ident: 10.1016/j.wasman.2021.11.027_b0090
  article-title: GroupFace: Learning Latent Groups and Constructing Group-Based Representations for Face Recognition
– ident: 10.1016/j.wasman.2021.11.027_b0045
– year: 2012
  ident: 10.1016/j.wasman.2021.11.027_b0080
  article-title: ImageNet Classification with Deep Convolutional Neural Networks
  publication-title: 2012 Conference and Workshop on Neural Information Processing Systems (NIPS)
– start-page: 2818
  year: 2016
  ident: 10.1016/j.wasman.2021.11.027_b0135
– volume: 14
  start-page: 3477
  issue: 8
  year: 2018
  ident: 10.1016/j.wasman.2021.11.027_b0150
  article-title: Classification of Nonferrous Metals Using Magnetic Induction Spectroscopy
  publication-title: IEEE Trans. Indus. Inform.
  doi: 10.1109/TII.2017.2786778
– ident: 10.1016/j.wasman.2021.11.027_b0085
– volume: 35
  start-page: 607
  year: 2019
  ident: 10.1016/j.wasman.2021.11.027_b0020
  article-title: Intelligent Waste Classification System Using Deep Learning Convolutional Neural Network
  publication-title: Procedia Manufacturing.
  doi: 10.1016/j.promfg.2019.05.086
– ident: 10.1016/j.wasman.2021.11.027_b0175
  doi: 10.1109/CVPR.2013.407
– volume: 70
  start-page: 13
  year: 2017
  ident: 10.1016/j.wasman.2021.11.027_b0050
  article-title: Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging
  publication-title: Waste Manag.
  doi: 10.1016/j.wasman.2017.09.019
– volume: 36
  start-page: e12343
  issue: 1
  year: 2019
  ident: 10.1016/j.wasman.2021.11.027_b0070
  article-title: A comprehensive search for expert classification methods in disease diagnosis and prediction
  publication-title: Expert Syst.
  doi: 10.1111/exsy.12343
– volume: 7
  start-page: 178631
  year: 2019
  ident: 10.1016/j.wasman.2021.11.027_b0155
  article-title: A Novel Framework for Trash Classification Using Deep Transfer Learning
  publication-title: IEEE ACCESS.
  doi: 10.1109/ACCESS.2019.2959033
– year: 2020
  ident: 10.1016/j.wasman.2021.11.027_b0170
  article-title: Research on Waste Classification and Identification by Transfer Learning and Lightweight Neural Network
  publication-title: Preprint.
– volume: 19
  start-page: 41
  issue: 1
  year: 1986
  ident: 10.1016/j.wasman.2021.11.027_b0075
  article-title: Minimum error thresholding
  publication-title: Pattern Recognition.
  doi: 10.1016/0031-3203(86)90030-0
– ident: 10.1016/j.wasman.2021.11.027_b0130
– start-page: 1155
  year: 2013
  ident: 10.1016/j.wasman.2021.11.027_b0180
– start-page: 474
  year: 1994
  ident: 10.1016/j.wasman.2021.11.027_b0190
– volume: 38
  start-page: 717
  issue: 4
  year: 2016
  ident: 10.1016/j.wasman.2021.11.027_b0140
  article-title: Hierarchical Image Saliency Detection on Extended CSSD
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2015.2465960
– ident: 10.1016/j.wasman.2021.11.027_b0095
  doi: 10.1109/CVPR.2007.383047
– volume: 29
  start-page: 461
  year: 2020
  ident: 10.1016/j.wasman.2021.11.027_b0200
  article-title: High-Resolution Encoder–Decoder Networks for Low-Contrast Medical Image Segmentation
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2019.2919937
– ident: 10.1016/j.wasman.2021.11.027_b0035
  doi: 10.1109/CVPR.2011.5995344
– start-page: 11771
  year: 2020
  ident: 10.1016/j.wasman.2021.11.027_b0205
  article-title: Look-into-Object: Self-supervised Structure Modeling for Object Recognition
– volume: 109
  start-page: 1
  year: 2020
  ident: 10.1016/j.wasman.2021.11.027_b0115
  article-title: Application of deep learning object classifier to improve e-waste collection planning
  publication-title: Waste Manag.
  doi: 10.1016/j.wasman.2020.04.041
– ident: 10.1016/j.wasman.2021.11.027_b0015
  doi: 10.1109/BigData.2018.8622212
– start-page: 194
  year: 2017
  ident: 10.1016/j.wasman.2021.11.027_b0145
  article-title: Smart Garbage Bin System - A Comprehensive Survey
– ident: 10.1016/j.wasman.2021.11.027_b0065
  doi: 10.1016/j.compag.2018.02.024
– ident: 10.1016/j.wasman.2021.11.027_b0185
– volume: 25
  start-page: 5012
  issue: 11
  year: 2016
  ident: 10.1016/j.wasman.2021.11.027_b0110
  article-title: Visual Saliency Detection Based on Multiscale Deep CNN Features
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2016.2602079
– start-page: 2083
  year: 2013
  ident: 10.1016/j.wasman.2021.11.027_b0060
– ident: 10.1016/j.wasman.2021.11.027_b0100
  doi: 10.1109/CVPR.2014.43
– ident: 10.1016/j.wasman.2021.11.027_b0165
  doi: 10.1109/ITOEC.2018.8740751
SSID ssj0014810
Score 2.459401
Snippet •A robust framework combining saliency detection and image classification is proposed.•The smallest rectangle containing the saliency region is used for...
Using deep learning to solve garbage classification has become a hot topic in computer vision. The most widely used garbage dataset Trashnet only has garbage...
SourceID proquest
pubmed
crossref
elsevier
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 193
SubjectTerms Algorithms
computers
data collection
Data fusion
Garbage
Garbage classification
image analysis
Image Processing, Computer-Assisted
Image segmentation
municipal solid waste
Saliency detection
waste management
Title A robust framework combined saliency detection and image recognition for garbage classification
URI https://dx.doi.org/10.1016/j.wasman.2021.11.027
https://www.ncbi.nlm.nih.gov/pubmed/34836728
https://www.proquest.com/docview/2604019187
https://www.proquest.com/docview/2636460138
Volume 140
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVESC
  databaseName: Baden-Württemberg Complete Freedom Collection (Elsevier)
  customDbUrl:
  eissn: 1879-2456
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0014810
  issn: 0956-053X
  databaseCode: GBLVA
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
– providerCode: PRVESC
  databaseName: Elsevier SD Complete Freedom Collection [SCCMFC]
  customDbUrl:
  eissn: 1879-2456
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0014810
  issn: 0956-053X
  databaseCode: ACRLP
  dateStart: 19950101
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
– providerCode: PRVESC
  databaseName: Elsevier SD Freedom Collection
  customDbUrl:
  eissn: 1879-2456
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0014810
  issn: 0956-053X
  databaseCode: .~1
  dateStart: 19950101
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
– providerCode: PRVESC
  databaseName: ScienceDirect Freedom Collection Journals
  customDbUrl:
  eissn: 1879-2456
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0014810
  issn: 0956-053X
  databaseCode: AIKHN
  dateStart: 19950101
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
– providerCode: PRVLSH
  databaseName: Elsevier Journals
  customDbUrl:
  mediaType: online
  eissn: 1879-2456
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0014810
  issn: 0956-053X
  databaseCode: AKRWK
  dateStart: 19930101
  isFulltext: true
  providerName: Library Specific Holdings
link http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV1Na9wwEB1Cekh7CG3SNtuPoEKvzlqW_LHHZWnYtjSXNLA3MdZH2dJ4Q9ZLbvntmZHtpYEmgV6FDLJGjN5Db94AfOaqsyzUNglFrRONUieIUiY2EJeoUitTx8XJP86K-YX-tsgXOzAbamFYVtnn_i6nx2zdj4z73RxfLZfjc7bQoyO0INLC9zLnYa1L7mJwcruVeRDaj44E0W-PZw_lc1HjdYPrS2QX1EyesJcn95b59_X0EPyM19DpS9jv8aOYdkt8BTu-OYC92dC27QBe_OUweAhmKq5X9WbdijCosAT9KtFh78SaMDhXXgrn26jIagQ2TiwvKceIrbKIRgnYil_8MEHjluE264tiSF_DxemXn7N50vdUSKya5G2C2hUOc1mmPgvshR-yCktvXS1DQOJmMpQ5KqfDhLES0u4pz5Y9GBz6OlVvYLdZNf4IhNWFTxXKoDHXgXlm8N5mngBVTjxFjkANW2lsbzjOfS_-mEFZ9tt0ATAcAOIihgIwgmT71VVnuPHE_HKIkrl3cAzdCU98-WkIqqEY8UMJNn61WRvieEQ7J7J6dI4qdMHvvCN4252I7XqVrlRRZtW7_17be3iecZ1FFLt9gN32euM_Evpp6-N4vI_h2fTr9_nZHbsiBV8
linkProvider Elsevier
linkToHtml http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV1Nb9QwEB1V5VA4IFoKLNDWSL2mG8fOxx6rFdXSrwuttDdr4g-0iGarblbc-O3MOMkKJKASV8uRHM9o_J785hngmLvOslDbJBS1TjRKnSBKmdhAXKJKrUwdNydfXRezW30-z-dbMB16YVhW2df-rqbHat2PjPvdHN8vFuPPbKFHKTQn0sLnMtXhJzrPSmZgJz82Og-C-9GSIBru8fShfy6KvL7j6g7ZBjWTJ2zmyY_L_Pl8-hv-jOfQ2Qt43gNIcdqtcRe2fLMHO9Ph3bY9ePaLxeBLMKfiYVmvV60IgwxL0L8SH_ZOrAiEc-ulcL6NkqxGYOPE4o6KjNhIi2iUkK34wjcTNG4Zb7PAKMZ0H27PPt5MZ0n_qEJi1SRvE9SucJjLMvVZYDP8kFVYeutqGQISOZOhzFE5HSYMlpB2T3n27MHg0NepegXbzbLxb0BYXfhUoQwacx2YaAbvbeYJUeVEVOQI1LCVxvaO4_zwxTczSMu-mi4AhgNAZMRQAEaQbL667xw3HplfDlEyv2WOoUPhkS8_DEE1FCO-KcHGL9crQySPeOdEVv-cowpd8EXvCF53GbFZr9KVKsqsevvfazuCndnN1aW5_HR98Q6eZtx0EZVv72G7fVj7A4JCbX0YU_0nsmQG9A
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A+robust+framework+combined+saliency+detection+and+image+recognition+for+garbage+classification&rft.jtitle=Waste+management+%28Elmsford%29&rft.au=Qin%2C+Jiongming&rft.au=Wang%2C+Cong&rft.au=Ran%2C+Xu&rft.au=Yang%2C+Shaohua&rft.date=2022-03-01&rft.issn=0956-053X&rft.volume=140+p.193-203&rft.spage=193&rft.epage=203&rft_id=info:doi/10.1016%2Fj.wasman.2021.11.027&rft.externalDBID=NO_FULL_TEXT
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=0956-053X&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=0956-053X&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=0956-053X&client=summon