Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019
DOI: 10.1145/3292500.3330874
Factorization Bandits for Online Influence Maximization

Abstract: In this paper, we study the problem of online influence maximization in social networks. In this problem, a learner aims to identify the set of "best influencers" in a network by interacting with the network, i.e., repeatedly selecting seed nodes and observing activation feedback in the network. We capitalize on an important property of the influence maximization problem named network assortativity, which is ignored by most existing works in online influence maximization. To realize network assortativity, we facto…
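The interaction loop described in the abstract (repeatedly select seed nodes, observe activation feedback, update estimates) can be sketched as a simple bandit loop on a toy independent-cascade model. This is an illustrative epsilon-greedy sketch with made-up helper names, not the factorization method the paper itself proposes:

```python
import random

def cascade(edges, probs, seeds, rng):
    """One independent-cascade diffusion; returns the set of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges.get(u, []):
                if v not in active and rng.random() < probs[(u, v)]:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def run_oim(nodes, edges, probs, k=2, rounds=200, eps=0.1, seed=0):
    """Epsilon-greedy seed selection: occasionally explore random seed sets,
    otherwise exploit the nodes with the highest average observed spread."""
    rng = random.Random(seed)
    counts = {u: 0 for u in nodes}
    means = {u: 0.0 for u in nodes}
    total_spread = 0.0
    for _ in range(rounds):
        if rng.random() < eps:
            seeds = rng.sample(nodes, k)                        # explore
        else:
            seeds = sorted(nodes, key=lambda u: -means[u])[:k]  # exploit
        reward = len(cascade(edges, probs, seeds, rng)) / k
        total_spread += reward * k
        for u in seeds:  # running-average update for each chosen seed
            counts[u] += 1
            means[u] += (reward - means[u]) / counts[u]
    return means, counts, total_spread
```

On a toy star graph, the hub node accumulates the highest estimated spread; the paper's contribution, per the abstract, is to replace such independent per-node estimates with a factorized model that exploits network assortativity.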

Cited by 26 publications (25 citation statements)
References 28 publications
“…C2IM [28] focuses on extracting influence communities in order to ease the connection to the influencers. Wu et al [37] propose factorization bandit methods to identify influencers through iterated reward-driven selection. However, the approach hardly scales to large seed set sizes (generally 50), and the models are not designed to be time-dependent and flexible.…”
Section: Algorithms
confidence: 99%
“…The incorporation of various factors has been studied, such as influence susceptibility [9], sentiment [10,11], freeloaders [12], targeted ads [13], and engagement [14,15]. There are also bandit-based IM algorithms [16,17] that typically use feedback from actual data. However, the benchmark methods were still based on influence spread, which makes their real-world usefulness questionable.…”
Section: Introduction
confidence: 99%
“…The goal is to maximize the influence values received over T rounds, or equivalently, to minimize the cumulative regret compared with the optimal seed set that generates the largest influence. The most widely studied feedback in the literature is edge-level feedback (Chen, Wang, and Yuan 2013; Chen et al. 2016; Wang and Chen 2017; Wen et al. 2017; Wu et al. 2019), where the learner can observe whether each edge passes along the information received by its start point. Node-level feedback was only investigated very recently (Vaswani et al. 2015; Li et al. 2020), where the learner can only observe which nodes receive the information at each time step during a diffusion process.…”
Section: Introduction
confidence: 99%
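The two feedback regimes contrasted in the quoted passage can be made concrete with a toy independent-cascade simulation (an illustrative sketch with hypothetical names, not code from any of the cited papers): under edge-level feedback the learner observes the outcome of every attempted edge, while under node-level feedback it only observes which nodes activate at each time step.

```python
import random

def cascade_with_feedback(edges, probs, seeds, rng):
    """One independent-cascade diffusion, recording both feedback types:
    - edge-level: for each attempted out-edge, whether it fired
    - node-level: which nodes became active at each time step"""
    active = set(seeds)
    frontier = list(seeds)
    edge_feedback = {}               # (u, v) -> fired?  (edge-level view)
    node_feedback = [sorted(seeds)]  # activation sets per step (node-level view)
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges.get(u, []):
                if v in active:
                    continue
                fired = rng.random() < probs[(u, v)]
                edge_feedback[(u, v)] = fired  # visible only with edge-level feedback
                if fired:
                    active.add(v)
                    nxt.append(v)
        if nxt:
            node_feedback.append(sorted(set(nxt)))
        frontier = nxt
    return edge_feedback, node_feedback
```

The edge-level view pins down which individual edge estimates to update, whereas the node-level view leaves a credit-assignment problem: any already-active in-neighbor could have caused each new activation.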
“…OIM has been studied extensively in the literature. For edge-level feedback, existing work (Chen, Wang, and Yuan 2013; Lei et al. 2015; Chen et al. 2016; Wang and Chen 2017; Wen et al. 2017; Wu et al. 2019) presents both theoretical and heuristic results. Node-level feedback was first proposed in (Vaswani et al. 2015).…”
Section: Introduction
confidence: 99%