A novel obfuscation method based on majority logic for preventing unauthorized access to binary deep neural networks


Bibliographic Details
Published in: Scientific Reports, Vol. 15, No. 1, Article 24416 (19 pages)
Main Authors: Mohseni, Alireza; Moaiyeri, Mohammad Hossein; Adel, Mohammad Javad
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 08.07.2025
ISSN: 2045-2322
DOI: 10.1038/s41598-025-09722-4

Summary: The significant expansion of deep learning applications has made it necessary to safeguard deep neural network (DNN) models, which are valuable assets, from unauthorized access. This study proposes an innovative key-based algorithm-hardware co-design methodology to protect DNN models from such access. The proposed approach drastically reduces model accuracy when an incorrect key is applied, denying unauthorized users a functional design. Given the significance of binary neural networks (BNNs) in the hardware implementation of cutting-edge DNN models, the methodology is developed for BNNs; however, the technique can be broadly applied to other neural network accelerator designs. The proposed protection scheme is more efficient than comparable solutions across different BNN architectures and standard datasets. We validate the proposed hardware design through post-layout simulations in the Cadence Virtuoso tool, using the well-established TSMC 40 nm CMOS technology. The proposed approach yields 43%, 79%, and 71% reductions in area, average power, and weight-modification energy per filter, respectively, in the evaluated neural network structures. Additionally, the security of the key circuit is analyzed and evaluated against Boolean satisfiability (SAT)-based attacks, structural attacks, reverse engineering, and power-based side-channel attacks.
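Note: The record gives no implementation details of the paper's majority-logic key circuit, but the general idea of key-based weight obfuscation in a BNN can be illustrated with a minimal sketch. The Python fragment below is a hypothetical illustration, not the authors' method: it assumes a simple per-weight XOR masking scheme (all function names and parameters are invented for this example) together with an XNOR-popcount dot product, the standard BNN compute primitive.

    # Hypothetical sketch of key-based weight obfuscation for one BNN layer.
    # This is NOT the paper's majority-logic circuit; it only illustrates how
    # an incorrect key corrupts the effective weights and degrades accuracy.
    import random

    def binarized_dot(acts, weights):
        """XNOR-popcount dot product over +/-1 values encoded as {0,1} bits."""
        matches = sum(1 for a, w in zip(acts, weights) if a == w)  # XNOR popcount
        return 2 * matches - len(acts)  # map popcount back to a signed sum

    def obfuscate(weights, key):
        """Store weights XOR-masked with a per-weight secret key bit."""
        return [w ^ k for w, k in zip(weights, key)]

    def deobfuscate(stored, key):
        """XOR with the correct key recovers the weights; every wrong key
        bit flips the sign of the corresponding effective weight."""
        return [s ^ k for s, k in zip(stored, key)]

    random.seed(0)
    n = 64
    weights = [random.randint(0, 1) for _ in range(n)]
    acts = [random.randint(0, 1) for _ in range(n)]
    key = [random.randint(0, 1) for _ in range(n)]

    stored = obfuscate(weights, key)
    wrong_key = [k ^ int(i % 4 == 0) for i, k in enumerate(key)]  # 25% of bits wrong

    print("correct key:", binarized_dot(acts, deobfuscate(stored, key)))
    print("wrong key:  ", binarized_dot(acts, deobfuscate(stored, wrong_key)))

With the correct key the stored weights decode exactly, so the dot product (and hence accuracy) is unchanged; every wrong key bit flips the sign of one effective weight, which is the general mechanism by which a wrong key collapses classification accuracy for unauthorized users.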