secml: Secure and explainable machine learning in Python
| Published in | SoftwareX Vol. 18; p. 101095 |
|---|---|
| Main Authors | Pintor, Maura; Demetrio, Luca; Sotgiu, Angelo; Melis, Marco; Demontis, Ambra; Biggio, Battista |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.06.2022 |
| Subjects | Security; Explainability; Machine learning; Python3; Adversarial attacks |
| Online Access | Get full text |
| ISSN | 2352-7110 |
| DOI | 10.1016/j.softx.2022.101095 |
| Abstract | We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks and training-time poisoning attacks against support vector machines and many other algorithms. These attacks enable evaluating the security of learning algorithms and the corresponding defenses under both white-box and black-box threat models. To this end, secml provides built-in functions to compute security evaluation curves, showing how quickly classification performance decreases against increasing adversarial perturbations of the input data. secml also includes explainability methods to help understand why adversarial attacks succeed against a given model, by visualizing the most influential features and training prototypes contributing to each decision. It is distributed under the Apache License 2.0 and hosted at https://github.com/pralab/secml. |
|---|---|
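The security evaluation curve described in the abstract, classification accuracy plotted against an increasing adversarial perturbation budget, can be illustrated with a self-contained sketch. This is plain NumPy rather than secml's actual API; the toy data, the fixed linear classifier `w`, and the `evasion_attack` helper are all hypothetical stand-ins for a trained model under a worst-case L2 evasion attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two linearly separable Gaussian blobs, labels -1 / +1.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

# Hypothetical "trained" model: a fixed linear classifier sign(w @ x).
w = np.array([1.0, 1.0])
w_unit = w / np.linalg.norm(w)

def evasion_attack(X, y, eps):
    """Worst-case L2 evasion for a linear model: move each point a
    distance eps straight toward (and across) the decision boundary."""
    return X - eps * y[:, None] * w_unit

def accuracy(X, y):
    return float(np.mean(np.sign(X @ w) == y))

# Security evaluation curve: accuracy vs. perturbation budget eps.
budgets = np.linspace(0.0, 4.0, 9)
curve = [accuracy(evasion_attack(X, y, e), y) for e in budgets]
for e, acc in zip(budgets, curve):
    print(f"eps={e:.1f}  accuracy={acc:.2f}")
```

For a linear model this curve is exactly the margin distribution: accuracy at budget eps is the fraction of points whose signed margin exceeds eps, so the curve is non-increasing, which is the "how quickly performance decreases" behavior the abstract refers to.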
| ArticleNumber | 101095 |
| Authors | Pintor, Maura (ORCID 0000-0002-1944-2875); Demetrio, Luca (0000-0001-5104-1476); Sotgiu, Angelo (0000-0003-2100-9517); Melis, Marco (0000-0003-3641-2093); Demontis, Ambra (0000-0001-9318-6913); Biggio, Battista (0000-0001-7752-509X, battista.biggio@unica.it); all at DIEE, University of Cagliari, Via Marengo, Cagliari, Italy |
| ContentType | Journal Article |
| Copyright | 2022 The Author(s) |
| DOI | 10.1016/j.softx.2022.101095 |
| Discipline | Computer Science |
| EISSN | 2352-7110 |
| ISSN | 2352-7110 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Security; Explainability; Machine learning; Python3; Adversarial attacks |
| Language | English |
| License | This is an open access article under the CC BY-NC-ND license. |
| OpenAccessLink | https://doaj.org/article/565392ef62384406abeb1a89a7961bd8 |
| PublicationDate | June 2022 |
| PublicationTitle | SoftwareX |
| PublicationYear | 2022 |
| Publisher | Elsevier B.V. |
| StartPage | 101095 |
| SubjectTerms | Adversarial attacks; Explainability; Machine learning; Python3; Security |
| Title | secml: Secure and explainable machine learning in Python |
| URI | https://dx.doi.org/10.1016/j.softx.2022.101095 http://www.softxjournal.com/article/S2352711022000656/pdf https://doaj.org/article/565392ef62384406abeb1a89a7961bd8 |
| UnpaywallVersion | publishedVersion |
| Volume | 18 |