Deep learning is an advanced machine learning approach that has attracted growing attention in recent years. It is used today in a variety of domains and applications, such as pattern recognition, medical prediction, and speech recognition. Unlike traditional learning algorithms, deep learning can overcome the dependency on hand-designed features. The deep learning experience is further improved by leveraging powerful infrastructures such as clouds and by adopting collaborative learning for model training. However, this comes at the expense of privacy, especially when sensitive data are processed during the training and prediction phases, as well as when the trained model is shared. In this paper, we provide a review of existing privacy-preserving deep learning techniques and propose a novel multilevel taxonomy that categorizes the current state-of-the-art privacy-preserving deep learning techniques on the basis of privacy-preserving tasks at the top level and key technological concepts at the base level. This survey further summarizes evaluation results of the reviewed solutions with respect to defined performance metrics. In addition, it derives a set of lessons learned from each privacy-preserving task. Finally, it highlights open research challenges and provides recommendations as future research directions.
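One of the key technological concepts behind privacy-preserving training is differentially private gradient descent, in which per-step gradients are norm-clipped and perturbed with Gaussian noise before the model update. The sketch below is purely illustrative (the function name, parameters, and noise scale are assumptions, not taken from the surveyed solutions):

```python
import numpy as np

def dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0, noise_std=0.5, rng=None):
    """One illustrative differentially-private SGD step (hypothetical helper):
    clip the gradient norm to bound sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grads)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    clipped = grads * min(1.0, clip_norm / (norm + 1e-12))
    # Perturb with Gaussian noise calibrated to the clipping bound.
    noisy = clipped + rng.normal(0.0, noise_std * clip_norm, size=grads.shape)
    return params - lr * noisy
```

With `noise_std=0` the step reduces to plain clipped SGD, which makes the clipping behavior easy to check; in practice the noise scale is chosen from the desired privacy budget.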
In recent years, deep learning in healthcare applications has attracted considerable attention from the research community. These applications are deployed on powerful cloud infrastructures to process big health data. However, privacy issues arise when sensitive data are offloaded to a remote cloud. In this paper, we focus on pervasive health monitoring applications that allow anywhere, anytime monitoring of patients, such as heart disease diagnosis, sleep apnea detection, and, more recently, early detection of Covid-19. Because pervasive health monitoring applications generally operate in constrained client-side environments, it is important to take these constraints into consideration when designing privacy-preserving solutions. This paper therefore reviews the adequacy of existing privacy-preserving solutions for deep learning in pervasive health monitoring environments. To this end, we identify the privacy-preserving learning scenarios and their corresponding tasks and requirements. Furthermore, we define evaluation criteria for the reviewed solutions, discuss them, and highlight open issues for future research.