A Systematic Review on Model Watermarking for Neural Networks

Bibliographic Details
Published in: Frontiers in Big Data, Vol. 4, p. 729663
Main Author: Boenisch, Franziska
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Media S.A., 29.11.2021
ISSN: 2624-909X
DOI: 10.3389/fdata.2021.729663


More Information
Summary: Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who have trained them, which makes their protection against theft, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and thereby offers protection against these threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model that allows structured reasoning about, and comparison of, the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes desired security requirements and attacks against ML model watermarking. Based on that framework, representative literature from the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given.
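As a toy illustration of one watermarking class the survey covers (trigger-set, or backdoor-based, watermarking), the sketch below uses a plain dict as a stand-in for a trained classifier's input-to-label mapping. All names (`embed_watermark`, `verify_ownership`, the threshold value) are hypothetical and chosen for illustration; real schemes embed triggers by fine-tuning the model on secret (input, label) pairs.

```python
# Toy sketch of trigger-set (backdoor-based) model watermarking.
# A dict stands in for a classifier's input -> predicted-label mapping;
# a real scheme would fine-tune a neural network on the trigger set.
import random

def embed_watermark(model, trigger_set):
    """Make the model memorize secret (input, label) pairs (the watermark)."""
    for x, y in trigger_set:
        model[x] = y  # stand-in for fine-tuning on the trigger examples
    return model

def verify_ownership(model, trigger_set, threshold=0.9):
    """Claim ownership if the model reproduces the secret labels often enough."""
    hits = sum(1 for x, y in trigger_set if model.get(x) == y)
    return hits / len(trigger_set) >= threshold

# Hypothetical "model": 100 ordinary inputs with arbitrary labels.
suspect = {f"img_{i}": i % 10 for i in range(100)}
# Secret trigger set known only to the legitimate owner.
triggers = [(f"trigger_{i}", random.randrange(10)) for i in range(20)]

suspect = embed_watermark(suspect, triggers)
print(verify_ownership(suspect, triggers))  # watermarked model passes verification
```

An unrelated model that never saw the trigger set would match the secret labels only by chance and fail the threshold test, which is the intuition behind using trigger sets as ownership evidence.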
Bibliography:
This article was submitted to Cybersecurity and Privacy, a section of the journal Frontiers in Big Data.
Edited by: Chaowei Xiao, Arizona State University, United States
Reviewed by: Bo Luo, University of Kansas, United States; Jiazhao Li, University of Michigan, United States