A protection method of trained CNN model with a secret key from unauthorized access
Open Access
- Published: 9 July 2021
- Journal research article
- Published by Now Publishers in APSIPA Transactions on Signal and Information Processing, Vol. 10 (1)
- https://doi.org/10.1017/atsip.2021.9
Abstract
In this paper, we propose a novel method for protecting convolutional neural network (CNN) models with a secret key set so that unauthorized users without the correct key set cannot use trained models. The method protects not only against copyright infringement but also against unauthorized use of a model's functionality, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption (FFX). Protected models are trained on the transformed images. Experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while accuracy dropped severely when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art model protection with passports, the proposed method adds no extra layers to the network, so there is no overhead during training or inference.
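As a concrete illustration of the key-based transformations described above, the sketch below (not the authors' code) applies block-wise pixel shuffling and negative/positive transformation to an image, both driven by a secret key. The block size of 4, the NumPy-seeded key handling, and the chaining of the two transformations are assumptions for illustration; the FFX variant is omitted for brevity.

```python
import numpy as np

def blockwise_transform(img, key, block=4):
    """Apply keyed block-wise pixel shuffling and negative/positive
    transformation to an H x W x C uint8 image (H, W divisible by block)."""
    rng = np.random.default_rng(key)   # the secret key seeds the generator
    n = block * block * img.shape[2]   # values per block, channels included
    perm = rng.permutation(n)          # one shuffle pattern shared by all blocks
    neg = rng.random(n) < 0.5          # which shuffled values to negate
    out = img.copy()
    H, W, C = img.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            flat = out[y:y+block, x:x+block].reshape(-1)[perm]  # pixel shuffling
            flat[neg] = 255 - flat[neg]                         # neg/pos transform
            out[y:y+block, x:x+block] = flat.reshape(block, block, C)
    return out

# Both training images and inference-time inputs are transformed with the
# same key; a wrong key yields a different pattern, so accuracy collapses.
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)   # CIFAR-sized
protected_view = blockwise_transform(img, key=1234)
```

The abstract lists the three transformations as distinct options; this sketch chains two of them only for compactness. The key point is that the protection lives entirely in the input transformation, which is why the network itself needs no extra layers.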