In the era of big data, deep learning techniques provide intelligent solutions for many real-world problems. However, deep neural networks depend on large-scale datasets that include sensitive data, which creates a risk of privacy leakage. In addition, constantly evolving attack methods threaten data security in deep learning models. Protecting data privacy effectively at low cost has become an urgent challenge. This paper proposes an Adaptive Feature Relevance Region Segmentation (AFRRS) mechanism to provide differential privacy preservation. The core idea is to divide the input features into regions of differing relevance according to the relevance between each input feature and the model output. Less noise is intentionally injected into regions with stronger relevance, and more noise into regions with weaker relevance. Furthermore, we perturb the loss function by injecting noise into the polynomial coefficients of the objective function's expansion, thereby protecting the privacy of data labels. Theoretical analysis and experiments show that the proposed AFRRS mechanism not only provides strong privacy preservation for the deep learning model, but also maintains good model utility under a given moderate privacy budget compared with existing methods.
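The relevance-based noise allocation described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function name `afrrs_perturb`, the use of Laplace noise, the quantile-based region split, and the linear budget-allocation scheme are all assumptions made for illustration. The sketch groups features into regions by relevance score and gives higher-relevance regions a larger share of the privacy budget, hence smaller noise scale.

```python
import numpy as np

def afrrs_perturb(x, relevance, epsilon, n_regions=3, rng=None):
    """Illustrative sketch of relevance-based adaptive noise injection.

    Features are split into `n_regions` groups by quantile of their
    relevance score; higher-relevance regions receive a larger share of
    the privacy budget `epsilon`, and therefore less Laplace noise.
    Assumes unit sensitivity per feature (hypothetical simplification).
    """
    rng = np.random.default_rng(rng)
    # Assign each feature to a region: region 0 = weakest relevance.
    edges = np.quantile(relevance, np.linspace(0, 1, n_regions + 1)[1:-1])
    region = np.searchsorted(edges, relevance)
    # Budget shares grow linearly with relevance rank (hypothetical scheme).
    shares = np.arange(1, n_regions + 1, dtype=float)
    shares /= shares.sum()
    noisy = np.asarray(x, dtype=float).copy()
    for r in range(n_regions):
        mask = region == r
        if not mask.any():
            continue
        eps_r = epsilon * shares[r]
        # Laplace mechanism: scale = sensitivity / eps_r, sensitivity = 1 here.
        noisy[mask] += rng.laplace(scale=1.0 / eps_r, size=int(mask.sum()))
    return noisy

# Usage: perturb 12 features whose relevance scores vary.
x = np.linspace(0.0, 1.0, 12)
relevance = np.abs(np.sin(np.arange(12)))
x_noisy = afrrs_perturb(x, relevance, epsilon=1.0, rng=0)
```

Under this allocation, the strongest-relevance region draws noise with the smallest Laplace scale, matching the abstract's intent of preserving the features that matter most to the model output.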