2017
DOI: 10.1145/3017429

Cross-Platform App Recommendation by Jointly Modeling Ratings and Texts

Abstract: Over the last decade, the renaissance of Web technologies has transformed the online world into an application (App) driven society. While the abundant Apps have provided great convenience, their sheer number also leads to severe information overload, making it difficult for users to identify desired Apps. To alleviate the information overload issue, recommender systems have been proposed and deployed for the App domain. However, existing work on App recommendation has largely focused on one single platform…







Cited by 88 publications (43 citation statements) · References 58 publications
“…To evaluate the performance, we adopted root mean square error (RMSE), where a lower RMSE score indicates a better performance. Note that RMSE has been widely used for evaluating regression tasks such as recommendation with explicit ratings [5,30] and click-through rate prediction [24]. We rounded up the prediction of each model to 1 or −1 if it was out of the range.…”
Section: Evaluation Protocols (mentioning)
confidence: 99%
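The evaluation protocol quoted above — RMSE over explicit ratings, with out-of-range predictions clipped to the [−1, 1] rating scale — can be sketched as follows. This is a minimal illustration; the function name `clamped_rmse` and the sample values are hypothetical, not taken from the cited paper.

```python
import math

def clamped_rmse(predictions, targets, lo=-1.0, hi=1.0):
    """RMSE where each prediction is first clipped into [lo, hi],
    mirroring the quoted protocol of mapping out-of-range outputs
    back onto the rating scale before scoring."""
    clipped = [min(max(p, lo), hi) for p in predictions]
    se = sum((p - t) ** 2 for p, t in zip(clipped, targets))
    return math.sqrt(se / len(targets))

# A prediction of 1.3 is clipped to 1.0 before the error is computed.
print(clamped_rmse([1.3, -0.5, 0.8], [1, -1, 1]))
```

Lower values indicate better performance; a perfect predictor scores 0.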
“…As such, we can alleviate the time-consuming problem of ranking all items for each user during evaluation. In terms of evaluation metrics, we adopt Hit Ratio at rank k (HR@k) [16] and Normalized Discounted Cumulative Gain at rank k (NDCG@k) [1,5,12,14] to evaluate the performance of the ranked list generated by our models. In experimental parts we set k = 10 for both metrics.…”
Section: Experimental Settings (mentioning)
confidence: 99%
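The ranking metrics named in the quote — HR@k and NDCG@k over a ranked list with a single held-out item per user, here with k = 10 — can be sketched as below. This is an assumed single-relevant-item formulation (common in leave-one-out evaluation); the function names are illustrative, not from the cited works.

```python
import math

def hit_ratio_at_k(ranked_items, target, k=10):
    """HR@k: 1 if the held-out target item appears in the top-k
    positions of the ranked list, else 0."""
    return int(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k=10):
    """NDCG@k with one relevant item: the DCG contribution
    1 / log2(rank + 2) at the target's 0-based position; the
    ideal DCG is 1 (target at rank 0), so no further scaling."""
    topk = ranked_items[:k]
    if target not in topk:
        return 0.0
    rank = topk.index(target)
    return 1.0 / math.log2(rank + 2)

# Target ranked first: HR@10 = 1, NDCG@10 = 1.0.
print(hit_ratio_at_k([42, 7, 9], 42), ndcg_at_k([42, 7, 9], 42))
```

HR@k only checks membership in the top-k, while NDCG@k additionally rewards placing the target near the top of the list, which is why the two are typically reported together.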
“…The study [36] was generally focused on recommending independent items to users who were suggested by a hybrid cross-platform app recommendation (STAR) system. Another study [37] introduced recommender systems on mobile platforms based on user profiles generated from the installed apps.…”
Section: Application Recommender System Studies (mentioning)
confidence: 99%