Statistics show that most threats to information security in the Internet of Things (IoT) are caused by data leakage, so many methods have been developed for data leakage prevention (DLP). However, most of these methods do not work well when the confidentiality of data changes frequently. We propose an Adaptive Feature Graph Update (AFGU) model to solve this problem by mapping the features of confidential data onto a feature graph. First, the feature graph is built to record the features of confidential data, namely the sensitive terms and their context. Then, an improved method for evaluating the importance of each term is employed to update the feature graph according to each term's importance. Finally, the confidentiality of data is determined by matching the data's features against the feature graph. Experimental results show that the proposed method can detect confidential data effectively and efficiently.
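The abstract's pipeline (build a feature graph of sensitive terms and their co-occurrence context, then classify data by matching against it) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's AFGU implementation: the class name, the co-occurrence edges, and the additive match score are all assumptions standing in for the paper's term-importance evaluation and graph-update rules.

```python
from collections import defaultdict

class FeatureGraph:
    """Toy feature graph: nodes are sensitive terms with importance
    weights; edges record co-occurrence of term pairs within the same
    confidential document (a crude stand-in for 'context')."""

    def __init__(self):
        self.importance = defaultdict(float)  # term -> importance weight
        self.edges = defaultdict(float)       # (term_a, term_b) -> co-occurrence weight

    def add_document(self, terms):
        # Update node importance and pairwise context edges from one
        # known-confidential document.
        unique = set(terms)
        for t in unique:
            self.importance[t] += 1.0
        for a in unique:
            for b in unique:
                if a < b:  # store each unordered pair once
                    self.edges[(a, b)] += 1.0

    def match_score(self, terms):
        # Score a candidate document by the node and edge weights it
        # matches in the graph; a threshold on this score would decide
        # confidentiality.
        unique = set(terms)
        node_score = sum(self.importance[t] for t in unique)
        edge_score = sum(w for (a, b), w in self.edges.items()
                         if a in unique and b in unique)
        return node_score + edge_score

graph = FeatureGraph()
graph.add_document(["salary", "ssn", "employee"])
graph.add_document(["salary", "contract"])

confidential = graph.match_score(["ssn", "salary"])  # strong overlap with graph
benign = graph.match_score(["weather", "lunch"])     # no overlap
print(confidential > benign)  # True
```

The paper's importance-based graph update would replace the flat `+= 1.0` counting here; the sketch only shows the build-then-match structure.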
As the scale of federated learning expands, the Non-IID data problem has become a key challenge. Most existing solutions aim to improve the overall performance of all clients; however, this overall improvement often sacrifices the performance of certain clients, such as those with less data. Ignoring fairness may greatly reduce some clients' willingness to participate in federated learning. To solve this problem, the authors propose Ada-FFL, an adaptive fairness federated aggregation learning algorithm that dynamically adjusts the fairness coefficient according to the updates of the local models, ensuring both the convergence performance of the global model and fairness between federated learning clients. By integrating coarse-grained and fine-grained equity solutions, the authors evaluate the deviation of local models considering both global equity and individual equity; a weight ratio is then dynamically allocated to each client based on the evaluated deviation, ensuring that the update differences of local models are fully considered in each round of training. Finally, a regularisation term limits each local model update to stay close to the global model, which reduces the model's sensitivity to input perturbations and improves the generalisation ability of the global model. Through numerous experiments on several federated datasets, the authors show that the proposed method outperforms existing baselines in convergence and fairness.
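The mechanism described (deviation-based client weighting controlled by an adaptive fairness coefficient, plus a proximal regularisation term) can be sketched as follows. This is an illustrative assumption, not the Ada-FFL algorithm itself: the functions `fair_aggregate`, `adapt_q`, and `prox_loss`, the use of client loss as the deviation measure, and the specific coefficient-update rule are all stand-ins for the paper's coarse/fine-grained equity evaluation.

```python
import numpy as np

def fair_aggregate(client_models, client_losses, q=1.0):
    """Fairness-aware aggregation: clients with larger loss (deviation)
    receive larger aggregation weight, controlled by the fairness
    coefficient q. With q = 0 this reduces to uniform averaging."""
    losses = np.asarray(client_losses, dtype=float)
    weights = losses ** q
    weights = weights / weights.sum()
    models = np.stack([np.asarray(m, dtype=float) for m in client_models])
    return weights, (weights[:, None] * models).sum(axis=0)

def adapt_q(prev_q, client_losses, target_spread=0.1, step=0.1):
    """Toy adaptive rule: raise q when client losses are spread out
    (unfair situation), lower it when they are nearly uniform."""
    losses = np.asarray(client_losses, dtype=float)
    spread = np.std(losses) / (np.mean(losses) + 1e-12)
    return max(0.0, prev_q + step * (spread - target_spread))

def prox_loss(local_loss, w_local, w_global, mu=0.01):
    """Regularisation term keeping the local update close to the global
    model, as the abstract describes for improving generalisation."""
    diff = np.asarray(w_local) - np.asarray(w_global)
    return local_loss + 0.5 * mu * float(np.sum(diff ** 2))

# Example: the client with the larger loss gets the larger weight.
w, global_model = fair_aggregate([[0.0], [4.0]], [1.0, 3.0], q=1.0)
```

The design point the sketch mirrors is that the server-side weighting, not the local optimiser, carries the fairness adjustment, so the coefficient can be retuned every round from the observed client deviations.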