2021
DOI: 10.3390/fi13100256
Mobile App Start-Up Prediction Based on Federated Learning and Attributed Heterogeneous Network Embedding

Abstract: At present, most mobile App start-up prediction algorithms are trained and evaluated on single-user data only. They cannot integrate the data of all users to mine correlations between users, and cannot alleviate the cold-start problem of new users or newly installed Apps. Some existing work on mobile App start-up prediction uses multi-user data, which requires the integration of multi-party data. In this case, a typical solution is distributed learning with centralized computing. Howe…
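To make the single-user setting described in the abstract concrete, here is a minimal, purely illustrative baseline: a first-order Markov predictor trained on one user's launch history. This is not the paper's model (which combines federated learning with attributed heterogeneous network embedding); it only shows why a per-user model hits the cold-start problem the abstract mentions. All names and data below are hypothetical.

```python
from collections import Counter, defaultdict

class MarkovAppPredictor:
    """Toy single-user baseline: predict the next app from the current one."""

    def __init__(self):
        # transitions[a][b] = number of times app b was launched right after app a
        self.transitions = defaultdict(Counter)

    def fit(self, launch_sequence):
        for prev_app, next_app in zip(launch_sequence, launch_sequence[1:]):
            self.transitions[prev_app][next_app] += 1

    def predict(self, current_app):
        counts = self.transitions.get(current_app)
        if not counts:
            # Cold start: this user has no history for the app, and a
            # single-user model has no other users' data to fall back on.
            return None
        return counts.most_common(1)[0][0]

history = ["mail", "browser", "mail", "browser", "music", "mail", "browser"]
model = MarkovAppPredictor()
model.fit(history)
print(model.predict("mail"))    # → browser
print(model.predict("camera"))  # → None (cold start)
```

The `None` branch is exactly the gap that pooling knowledge across users — as the paper proposes via federated learning — is meant to close.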

Cited by 4 publications (4 citation statements).
References 23 publications.
“…At present, the application of Federated Learning in market segments is also gradually developing [22][23][24]. However, there are few examples of literature on the application of Federated Learning to forecasting [25][26][27][28][29]; especially, the application of Federated Learning to the sustainable development research of e-commerce enterprise demand forecasting has not yet been found.…”
Section: Federated Learning (mentioning)
confidence: 99%
“…The approach of FL differs from that of distributed machine learning (DML), where the data are initially centralized on a server and subsequently partitioned into subsets for the purpose of learning tasks. In this scenario, the sample size follows a uniform distribution and is both independent and identically distributed (IID) [26]. In contrast, FL distributes the algorithm for processing across edge devices rather than concentrating the data on a central server [27,28], as presented in Figure 1.…”
Section: Introduction (mentioning)
confidence: 99%
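The contrast drawn in the citation above — FL keeps data on edge devices while centralized distributed learning partitions IID data on a server — can be sketched with a minimal FedAvg-style loop. This is a generic illustration under assumed data and a linear model, not the cited papers' implementation: each client runs local updates on its own non-IID data, and only parameters, never raw samples, reach the server, which averages them weighted by client data size.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model.
    The raw (X, y) never leave the device; only w is returned."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Non-IID clients: each draws features from a differently shifted distribution.
w_true = np.array([2.0, -1.0])
clients = []
for shift in (-1.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg aggregation: size-weighted average of client parameters
    w_global = np.average(updates, axis=0, weights=sizes)

print(np.round(w_global, 2))  # converges close to w_true = [2, -1]
```

Despite the non-IID client distributions, the averaged model recovers the shared parameters — the key point being that the server only ever sees model weights.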
“…Various experiments on the MNIST dataset [1] have confirmed the effectiveness of the proposed scheme. In [24], the authors proposed a mobile application start-up prediction model based on FL under heterogeneous network integration, which solves the cold-start problem of new users or new applications while ensuring user privacy. Experiments were conducted on the LiveLab dataset, and the application showed promising performance.…”
Section: Introduction (mentioning)
confidence: 99%
“…Using a consensus algorithm so that training of the global model can adapt to (i) a heavy communication load and (ii) avoid the communication-blocking problems that can be caused by standard FL algorithms such as FedAvg or FSVRG.

Challenge | Ref | Datasets | Approach
— | [7] | MNIST [6]; complete works of William Shakespeare [8] | Bi-directional compression, in which computationless compression operators quantize gradients in both the global and local model frameworks
— | [9] | MNIST [6] | Sparse coding algorithm for efficient communication, with additive homomorphic encryption and differential privacy to prevent data leakage
Systems heterogeneity | [10] | MNIST [6]; Fashion-MNIST [11]; EMNIST [12] | Iterative node-selection algorithm for efficient management of FL learning tasks, taking into account the non-synchronization of delivered messages
Systems heterogeneity | [13] | Heartbeat [14]; Seizure [15] | Hierarchical FL in heterogeneous systems, introducing an optimized solution for user assignment and resource allocation, attending to gradient-descent variants and unbalanced data
Systems heterogeneity | [16] | IMDB [17]; Handwritten [18]; ALLAML [19] | Semi-supervised FL for heterogeneous transfer learning, exploiting unlabeled non-overlapping samples and reducing overfitting
Statistical heterogeneity | [20] | MNIST [6]; Fed-MEx [21]; Fed-Goodreads [22] | Similarity between clients to model their relationships, using clients' similar gradients to provide better coverage
Statistical heterogeneity | [23] | MNIST [6] | MHAT for local models with differing network architectures, with knowledge-distillation techniques for information extraction and global-model updates
Statistical heterogeneity | [24] | LiveLab | Mobile application start-up prediction model based on FL
Privacy concerns | [25] | MovieLens [26]; Epinions [27] | Trust-based mechanism with extensive reinforcement learning for potential recommender planning and candidate selection.…”
(mentioning)
confidence: 99%
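Several rows in the table above address communication efficiency by quantizing gradients before transmission. As a generic illustration — not the cited papers' specific operators — here is a stochastic uniform quantizer, the standard trick that keeps the compressed gradient unbiased (its expectation equals the original vector), so averaging over rounds washes the quantization noise out. The parameters and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize_stochastic(g, levels=4):
    """Map each entry of g onto one of `levels` evenly spaced values in
    [-s, s] (s = max |g|), rounding up with probability equal to the
    fractional position so that E[Q(g)] = g (unbiased)."""
    s = np.max(np.abs(g))
    if s == 0:
        return g
    scaled = (g / s + 1) / 2 * (levels - 1)  # position in [0, levels-1]
    low = np.floor(scaled)
    p_up = scaled - low                      # probability of rounding up
    q = low + (rng.random(g.shape) < p_up)
    return (q / (levels - 1) * 2 - 1) * s    # back to the original scale

g = rng.normal(size=1000)
# Averaging many independent quantizations recovers g, showing unbiasedness.
est = np.mean([quantize_stochastic(g) for _ in range(200)], axis=0)
print(np.max(np.abs(est - g)))  # small residual error
```

With `levels=4` each entry needs only 2 bits plus one shared scale, a large saving over 32-bit floats per gradient entry.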