Explaining anomalies detected by autoencoders using Shapley Additive Explanations

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 186, p. 115736
Main Authors: Antwarg, Liat; Miller, Ronnie Mindlin; Shapira, Bracha; Rokach, Lior
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd, 30.12.2021
ISSN: 0957-4174
EISSN: 1873-6793
DOI: 10.1016/j.eswa.2021.115736

Abstract: Deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database. The top-k most intense outliers are returned to the user for further inspection; however, the manual validation of results becomes challenging without justification or additional clues. An explanation of why an instance is anomalous enables the experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this paper, we propose a method that uses Kernel SHAP to explain anomalies detected by an autoencoder, which is an unsupervised model. The proposed explanation method aims to provide a comprehensive explanation to the experts by focusing on the connection between the features with high reconstruction error and the features that are most important in terms of their effect on the reconstruction error. We propose a black-box explanation method, because it has the advantage of being able to explain any autoencoder without being aware of the exact architecture of the autoencoder model. The proposed explanation method extracts and visually depicts both the features that contribute the most to the anomaly and those that offset it. An expert evaluation using real-world data demonstrates the usefulness of the proposed method in helping domain experts better understand the anomalies.
Our evaluation of the explanation method, in which a “perfect” autoencoder is used as the ground truth, shows that the proposed method explains anomalies correctly, using the exact features, and evaluation on real data demonstrates that (1) our explanation model, which uses SHAP, is more robust than the Local Interpretable Model-agnostic Explanations (LIME) method, and (2) the explanations our method provides are more effective at reducing the anomaly score than other methods.

Highlights:
• Explaining anomalies identified by an autoencoder using Shapley values.
• Explaining features with high reconstruction error.
• Evaluated the correctness and robustness of the explanations.
• Explanations can assist in reducing the anomaly score.
• Conducted an expert evaluation to examine the explanation method.
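The scoring scheme the abstract describes (compute each instance's reconstruction error, return the top-k highest-scoring instances for inspection) can be sketched in a few lines. This is a minimal illustration, not the paper's model: a linear "autoencoder" (a one-component PCA projection) stands in for a trained network, and the outlier is planted by hand.

```python
import numpy as np

# Synthetic data: 200 normal instances lying on a line in 3-D,
# plus one planted outlier that breaks that structure.
rng = np.random.default_rng(0)
normal = np.outer(rng.normal(size=200), [1.0, 2.0, 0.5])
data = np.vstack([normal, [[8.0, -4.0, 6.0]]])

# "Train": a one-component PCA plays the role of encode/decode.
mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
w = vt[0]                                  # principal direction
recon = mean + np.outer(centered @ w, w)   # decode(encode(x))

# Per-feature squared reconstruction error; the anomaly score
# of an instance is the sum over its features.
per_feature_err = (data - recon) ** 2
scores = per_feature_err.sum(axis=1)

k = 1
top_k = np.argsort(scores)[::-1][:k]       # top-k most intense outliers
print(top_k)                               # index 200: the planted outlier
```

The per-feature errors computed here are exactly what the paper's explanation method targets: for a flagged instance, the features with the highest reconstruction error are the ones whose behavior needs explaining.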
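For a handful of features, the Shapley attribution that Kernel SHAP approximates by sampling can be computed exactly by enumerating all feature coalitions. The sketch below is illustrative, not the paper's setup: the toy `recon_error` function and the all-zeros background instance are assumptions. Positive values mark features that contribute to the anomaly; negative values mark features that offset it, mirroring the two groups the proposed method visualizes.

```python
from itertools import combinations
from math import factorial

def recon_error(x):
    # Hypothetical stand-in for the reconstruction error of one
    # high-error feature: the "decoder" expects x[2] ~ 2*x[0] + x[1].
    return (x[2] - (2 * x[0] + x[1])) ** 2

def shapley_values(f, x, baseline):
    """Exact Shapley values via coalition enumeration (O(2^n)).
    Kernel SHAP approximates this sum by sampling coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                z = list(baseline)          # absent features: background
                for j in s:
                    z[j] = x[j]             # present features: instance
                without = f(z)
                z[i] = x[i]
                with_i = f(z)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (with_i - without)
    return phi

x = [1.0, 1.0, 9.0]          # anomalous: x[2] is far from 2*x[0] + x[1]
baseline = [0.0, 0.0, 0.0]   # background instance
phi = shapley_values(recon_error, x, baseline)

# Efficiency property: contributions sum to f(x) - f(baseline).
print(phi, sum(phi), recon_error(x) - recon_error(baseline))
```

Here `phi[2]` comes out large and positive (x[2] drives the error up) while `phi[0]` and `phi[1]` are negative (they partially offset it), which is the contribute/offset split the explanation method depicts.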
Authors:
- Liat Antwarg (ORCID: 0000-0002-6383-6185), liatant@post.bgu.ac.il
- Ronnie Mindlin Miller, Ronniemi@post.bgu.ac.il
- Bracha Shapira (ORCID: 0000-0003-4943-9324), bshapira@bgu.ac.il
- Lior Rokach, liorrk@post.bgu.ac.il
Copyright: 2021 Elsevier Ltd
Keywords: XAI; Shapley values; Explainable black-box models; SHAP; Autoencoder; Anomaly detection
– start-page: 2461
  year: 2018
  end-page: 2467
  ident: b40
  article-title: Contextual outlier interpretation
  publication-title: Proceedings of the 27th international joint conference on artificial intelligence
– volume: 6
  start-page: 52138
  year: 2018
  end-page: 52160
  ident: b1
  article-title: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)
  publication-title: IEEE Access
– volume: 40
  start-page: 919
  year: 2004
  end-page: 938
  ident: b56
  article-title: Centroid-based summarization of multiple documents
  publication-title: Information Processing & Management
– volume: 106
  start-page: 1039
  year: 2017
  end-page: 1082
  ident: b10
  article-title: Optimal classification trees
  publication-title: Machine Learning
– volume: 51
  start-page: 93
  year: 2018
  ident: b27
  article-title: A survey of methods for explaining black box models
  publication-title: ACM Computing Surveys
– reference: (pp. 665–674).
– start-page: 80
  year: 2018
  end-page: 89
  ident: b22
  article-title: Explaining explanations: An overview of interpretability of machine learning
  publication-title: 2018 IEEE 5th international conference on data science and advanced analytics
– start-page: 1189
  year: 2001
  end-page: 1232
  ident: b21
  article-title: Greedy function approximation: a gradient boosting machine
  publication-title: The Annals of Statistics
– volume: 1050
  start-page: 2
  year: 2017
  ident: b36
  article-title: The (UN) reliability of saliency methods
  publication-title: Stat
– reference: (pp. 93–104).
– reference: Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In
– volume: 22
  start-page: 85
  year: 2004
  end-page: 126
  ident: b32
  article-title: A survey of outlier detection methodologies
  publication-title: Artificial Intelligence Review
– volume: 58
  start-page: 82
  year: 2020
  end-page: 115
  ident: b6
  article-title: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
  publication-title: Information Fusion
– start-page: 1135
  year: 2016
  end-page: 1144
  ident: b57
  article-title: Why should i trust you?: Explaining the predictions of any classifier
  publication-title: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining
– volume: 2
  start-page: 1
  year: 2015
  end-page: 18
  ident: b4
  article-title: Variational autoencoder based anomaly detection using reconstruction probability
  publication-title: Special Lecture on IE
– volume: 267
  start-page: 1
  year: 2019
  end-page: 38
  ident: b47
  article-title: Explanation in artificial intelligence: Insights from the social sciences
  publication-title: Artificial Intelligence
– start-page: 20
  year: 2000
  end-page: 29
  ident: b42
  article-title: Clustering through decision tree construction
  publication-title: Proceedings of the ninth international conference on information and knowledge management
– volume: 25
  start-page: 204
  year: 2019
  end-page: 214
  ident: b24
  article-title: Situ: Identifying and explaining suspicious behavior in networks
  publication-title: IEEE Transactions on Visualization and Computer Graphics
– volume: 18
  start-page: 1527
  year: 2006
  end-page: 1554
  ident: b30
  article-title: A fast learning algorithm for deep belief nets
  publication-title: Neural Computation
– volume: 1050
  start-page: 28
  year: 2017
  ident: b17
  article-title: A roadmap for a rigorous science of interpretability
  publication-title: Stat
– volume: 149
  year: 2020
  ident: b37
  article-title: Anomaly explanation with random forests
  publication-title: Expert Systems with Applications
– volume: 16
  start-page: 31
  year: 2018
  end-page: 57
  ident: b38
  article-title: The mythos of model interpretability
  publication-title: Queue
– start-page: 7786
  year: 2018
  end-page: 7795
  ident: b46
  article-title: Towards robust interpretability with self-explaining neural networks
  publication-title: Advances in neural information processing systems
– reference: (pp. 279–288).
– start-page: 3145
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b62
  article-title: Learning important features through propagating activation differences
– start-page: 954
  year: 2016
  ident: 10.1016/j.eswa.2021.115736_b55
  article-title: Deep learning anomaly detection as support fraud investigation in brazilian exports and anti-money laundering
– volume: 38
  start-page: 50
  issue: 3
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b26
  article-title: European Union regulations on algorithmic decision-making and a “right to explanation”
  publication-title: AI Magazine
  doi: 10.1609/aimag.v38i3.2741
– year: 2020
  ident: 10.1016/j.eswa.2021.115736_b54
– start-page: 90
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b15
  article-title: Outlier detection with autoencoder ensembles
– year: 2016
  ident: 10.1016/j.eswa.2021.115736_b68
– volume: 68
  start-page: 886
  issue: 10
  year: 2009
  ident: 10.1016/j.eswa.2021.115736_b65
  article-title: Explaining instance classifications with interactions of subsets of feature values
  publication-title: Data & Knowledge Engineering
  doi: 10.1016/j.datak.2009.01.004
– start-page: 237
  year: 2015
  ident: 10.1016/j.eswa.2021.115736_b2
  article-title: Outlier analysis
– start-page: 91
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b50
  article-title: GEE: A gradient-based explainable variational autoencoder for network anomaly detection
– start-page: 193
  year: 2014
  ident: 10.1016/j.eswa.2021.115736_b53
  article-title: Interpreting random forest classification models using a feature contribution method
– volume: 1
  start-page: 48
  issue: 1
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b41
  article-title: Towards better analysis of machine learning models: A visual analytics perspective
  publication-title: Visual Informatics
  doi: 10.1016/j.visinf.2017.01.006
– start-page: 1135
  year: 2016
  ident: 10.1016/j.eswa.2021.115736_b57
  article-title: Why should i trust you?: Explaining the predictions of any classifier
– volume: 51
  start-page: 93
  issue: 5
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b27
  article-title: A survey of methods for explaining black box models
  publication-title: ACM Computing Surveys
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b16
– volume: 18
  start-page: 1527
  issue: 7
  year: 2006
  ident: 10.1016/j.eswa.2021.115736_b30
  article-title: A fast learning algorithm for deep belief nets
  publication-title: Neural Computation
  doi: 10.1162/neco.2006.18.7.1527
– year: 2019
  ident: 10.1016/j.eswa.2021.115736_b69
– year: 2011
  ident: 10.1016/j.eswa.2021.115736_b34
– volume: 9
  start-page: 2579
  issue: Nov
  year: 2008
  ident: 10.1016/j.eswa.2021.115736_b45
  article-title: Visualizing data using t-SNE
  publication-title: Journal of Machine Learning Research
– year: 2016
  ident: 10.1016/j.eswa.2021.115736_b25
– year: 2013
  ident: 10.1016/j.eswa.2021.115736_b39
– ident: 10.1016/j.eswa.2021.115736_b48
  doi: 10.1145/3287560.3287574
– ident: 10.1016/j.eswa.2021.115736_b70
  doi: 10.1145/3097983.3098052
– volume: 41
  start-page: 15
  issue: 3
  year: 2009
  ident: 10.1016/j.eswa.2021.115736_b14
  article-title: Anomaly detection: A survey
  publication-title: ACM Computing Surveys
  doi: 10.1145/1541880.1541882
– start-page: 4
  year: 2014
  ident: 10.1016/j.eswa.2021.115736_b59
  article-title: Anomaly detection using autoencoders with nonlinear dimensionality reduction
– start-page: 850
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b51
  article-title: Designing transparent and autonomous intelligent vision systems
– volume: 25
  start-page: 204
  issue: 1
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b24
  article-title: Situ: Identifying and explaining suspicious behavior in networks
  publication-title: IEEE Transactions on Visualization and Computer Graphics
  doi: 10.1109/TVCG.2018.2865029
– volume: 313
  start-page: 504
  issue: 5786
  year: 2006
  ident: 10.1016/j.eswa.2021.115736_b31
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
  doi: 10.1126/science.1127647
– volume: 40
  start-page: 919
  issue: 6
  year: 2004
  ident: 10.1016/j.eswa.2021.115736_b56
  article-title: Centroid-based summarization of multiple documents
  publication-title: Information Processing & Management
  doi: 10.1016/j.ipm.2003.10.006
– start-page: 7786
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b46
  article-title: Towards robust interpretability with self-explaining neural networks
– volume: 267
  start-page: 1
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b47
  article-title: Explanation in artificial intelligence: Insights from the social sciences
  publication-title: Artificial Intelligence
  doi: 10.1016/j.artint.2018.07.007
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b11
– volume: 16
  start-page: 31
  issue: 3
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b38
  article-title: The mythos of model interpretability
  publication-title: Queue
  doi: 10.1145/3236386.3241340
– volume: 106
  start-page: 1039
  issue: 7
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b10
  article-title: Optimal classification trees
  publication-title: Machine Learning
  doi: 10.1007/s10994-017-5633-9
– volume: 10
  start-page: Paper
  issue: 12
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b13
  article-title: Local-set based-on instance selection approach for autonomous object modelling
  publication-title: International Journal of Advanced Computer Science and Applications
  doi: 10.14569/IJACSA.2019.0101201
– volume: 1050
  start-page: 2
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b36
  article-title: The (UN) reliability of saliency methods
  publication-title: Stat
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b19
– volume: 101
  year: 2020
  ident: 10.1016/j.eswa.2021.115736_b35
  article-title: Towards explaining anomalies: a deep taylor decomposition of one-class models
  publication-title: Pattern Recognition
  doi: 10.1016/j.patcog.2020.107198
– start-page: 23
  year: 2014
  ident: 10.1016/j.eswa.2021.115736_b5
  article-title: DREBIN: Effective and explainable detection of android malware in your pocket
– start-page: 20
  year: 2000
  ident: 10.1016/j.eswa.2021.115736_b42
  article-title: Clustering through decision tree construction
– volume: 22
  start-page: 85
  issue: 2
  year: 2004
  ident: 10.1016/j.eswa.2021.115736_b32
  article-title: A survey of outlier detection methodologies
  publication-title: Artificial Intelligence Review
  doi: 10.1023/B:AIRE.0000045502.10941.a9
– volume: 58
  start-page: 121
  year: 2016
  ident: 10.1016/j.eswa.2021.115736_b18
  article-title: High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning
  publication-title: Pattern Recognition
  doi: 10.1016/j.patcog.2016.03.028
– volume: 1050
  start-page: 28
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b17
  article-title: A roadmap for a rigorous science of interpretability
  publication-title: Stat
– start-page: 80
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b22
  article-title: Explaining explanations: An overview of interpretability of machine learning
– volume: 28
  start-page: 2660
  issue: 11
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b60
  article-title: Evaluating the visualization of what a deep neural network has learned
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
  doi: 10.1109/TNNLS.2016.2599820
– volume: 23
  start-page: 351
  issue: 3–4
  year: 1975
  ident: 10.1016/j.eswa.2021.115736_b61
  article-title: A model of inexact reasoning in medicine
  publication-title: Mathematical Biosciences
  doi: 10.1016/0025-5564(75)90047-4
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b20
– volume: 2017
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b64
  article-title: A hybrid semi-supervised anomaly detection model for high-dimensional data
  publication-title: Computational Intelligence and Neuroscience
  doi: 10.1155/2017/8501683
– volume: 6
  start-page: 52138
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b1
  article-title: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2870052
– start-page: 9758
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b23
  article-title: Deep anomaly detection using geometric transformations
– volume: 149
  year: 2020
  ident: 10.1016/j.eswa.2021.115736_b37
  article-title: Anomaly explanation with random forests
  publication-title: Expert Systems with Applications
  doi: 10.1016/j.eswa.2020.113187
– volume: 9
  start-page: 307
  issue: 1
  year: 2012
  ident: 10.1016/j.eswa.2021.115736_b63
  article-title: Outlier detection: applications and techniques
  publication-title: International Journal of Computer Science Issues (IJCSI)
– start-page: 1189
  year: 2001
  ident: 10.1016/j.eswa.2021.115736_b21
  article-title: Greedy function approximation: a gradient boosting machine
  publication-title: The Annals of Statistics
– start-page: 4765
  year: 2017
  ident: 10.1016/j.eswa.2021.115736_b44
  article-title: A unified approach to interpreting model predictions
– year: 2020
  ident: 10.1016/j.eswa.2021.115736_b67
– start-page: 2461
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b40
  article-title: Contextual outlier interpretation
– volume: 2
  start-page: 1
  year: 2015
  ident: 10.1016/j.eswa.2021.115736_b4
  article-title: Variational autoencoder based anomaly detection using reconstruction probability
  publication-title: Special Lecture on IE
– start-page: 793
  year: 2019
  ident: 10.1016/j.eswa.2021.115736_b66
  article-title: Shapley Values of reconstruction errors of PCA for explaining anomaly detection
– year: 1985
  ident: 10.1016/j.eswa.2021.115736_b58
– start-page: 311
  year: 2018
  ident: 10.1016/j.eswa.2021.115736_b3
  article-title: Toward explainable deep neural network based anomaly detection
– year: 2017
  ident: 10.1016/j.eswa.2021.115736_b28
– ident: 10.1016/j.eswa.2021.115736_b12
  doi: 10.1145/342009.335388
– start-page: 131
  year: 2005
  ident: 10.1016/j.eswa.2021.115736_b7
  article-title: Outlier detection
– start-page: 170
  year: 2002
  ident: 10.1016/j.eswa.2021.115736_b29
  article-title: Outlier detection using replicator neural networks
– volume: 34
  start-page: 133
  issue: 2
  year: 2010
  ident: 10.1016/j.eswa.2021.115736_b52
  article-title: A review of instance selection methods
  publication-title: Artificial Intelligence Review
  doi: 10.1007/s10462-010-9165-y
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b43
– year: 2020
  ident: 10.1016/j.eswa.2021.115736_b9
  article-title: Classification-based anomaly detection for general data
– year: 2018
  ident: 10.1016/j.eswa.2021.115736_b33
– volume: 34
  start-page: 1
  issue: 5
  year: 2007
  ident: 10.1016/j.eswa.2021.115736_b8
  article-title: Scaling learning algorithms towards AI
  publication-title: Large-Scale Kernel Machines
– year: 2017
  ident: 10.1016/j.eswa.2021.115736_b49
  article-title: Methods for interpreting and understanding deep neural networks
  publication-title: Digital Signal Processing
– volume: 58
  start-page: 82
  year: 2020
  ident: 10.1016/j.eswa.2021.115736_b6
  article-title: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
  publication-title: Information Fusion
  doi: 10.1016/j.inffus.2019.12.012
StartPage 115736
SubjectTerms Algorithms
Anomalies
Anomaly detection
Autoencoder
Data analysis
Deep learning
Evaluation
Explainable black-box models
Feature extraction
Game theory
Inspection
Machine learning
Outliers (statistics)
Reconstruction
SHAP
Shapley values
XAI
Title Explaining anomalies detected by autoencoders using Shapley Additive Explanations
URI https://dx.doi.org/10.1016/j.eswa.2021.115736
https://www.proquest.com/docview/2599115613
Volume 186