Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/546
No Time to Observe: Adaptive Influence Maximization with Partial Feedback

Abstract: Although the influence maximization problem has been studied extensively over the past ten years, the majority of existing work adopts one of two models: the full-feedback model or the zero-feedback model. In the zero-feedback model, we must commit all seed users at once in advance; this strategy is also known as a non-adaptive policy. In the full-feedback model, we select one seed at a time and wait until the diffusion completes before selecting the next seed. The full-feedback model has better performance but pot…
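The two feedback models in the abstract can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's method: the graph, activation probabilities, and the out-degree seeding heuristic are all assumptions made for illustration. Under the zero-feedback (non-adaptive) policy all seeds are committed before any diffusion is observed; under the full-feedback (adaptive) policy each seed is chosen only after the previous cascade has finished.

```python
import random

# Hypothetical toy directed graph: node -> [(neighbor, p)], where p is the
# independent-cascade activation probability. Chosen for illustration only.
GRAPH = {
    0: [(1, 0.9), (2, 0.9)],
    1: [(3, 0.9)],
    2: [(3, 0.9)],
    3: [],
    4: [(3, 0.2)],
}

def diffuse(seeds, active, rng):
    """Run one independent-cascade diffusion from `seeds`, mutating `active`."""
    frontier = [s for s in seeds if s not in active]
    active.update(frontier)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in GRAPH[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def zero_feedback(seeds, rng):
    """Zero-feedback (non-adaptive): commit all seeds up front, observe once."""
    return len(diffuse(seeds, set(), rng))

def full_feedback(k, rng):
    """Full-feedback (adaptive): pick one seed, wait until the cascade
    completes, then pick the next seed among still-inactive nodes.
    Out-degree stands in for a marginal-gain estimate here."""
    active = set()
    for _ in range(k):
        candidates = [u for u in GRAPH if u not in active]
        if not candidates:
            break
        seed = max(candidates, key=lambda u: len(GRAPH[u]))
        diffuse([seed], active, rng)
    return len(active)

rng = random.Random(0)
spread_na = zero_feedback([0, 4], rng)
spread_ad = full_feedback(2, rng)
```

The adaptive policy never "wastes" a seed on a node the earlier cascades already activated, which is the performance advantage the abstract refers to; the cost is waiting for each diffusion to finish.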

Cited by 35 publications (17 citation statements)
References 11 publications
“…Recently, there is another type of work, called adaptive IM [25][26][27][28][29], which has attracted many researchers' attention. These works assume that feedback from the real world is available.…”
Section: Influence Maximization in Static Networks (mentioning)
confidence: 99%
“…Chen and Krause (2013) propose a policy that selects batches of fixed size r, and they show that their policy achieves a bounded approximation ratio compared to the optimal policy restricted to selecting batches of fixed size r. However, their approximation ratio becomes arbitrarily bad with respect to the optimal fully adaptive policy. In the context of adaptive viral marketing, Yuan and Tang (2017) develop a partial-adaptive seeding policy that achieves a bounded approximation ratio against the optimal fully adaptive seeding policy. Our study is similar to theirs in that both studies introduce a controlling parameter to balance the performance/adaptivity trade-off.…”
Section: Related Work (mentioning)
confidence: 99%
“…Submodular maximization has been studied extensively in recent years due to its applications in a wide range of domains, including active learning (Golovin and Krause 2011), viral marketing (Tang and Yuan 2020; Yuan and Tang 2017b), and sensor placement (Krause and Guestrin 2007). Under the non-adaptive setting, where the state of each item is deterministic, Nemhauser et al. (1978) show that a classic greedy algorithm achieves a 1 − 1/e approximation ratio when maximizing a monotone submodular function subject to a cardinality constraint.…”
Section: Introduction (mentioning)
confidence: 99%
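The 1 − 1/e greedy guarantee quoted above (Nemhauser et al. 1978) can be sketched with maximum coverage, a standard monotone submodular objective. This is a minimal illustrative sketch, not code from the paper: the `greedy_max_cover` helper and the example sets are assumptions made for this example.

```python
def greedy_max_cover(sets, k):
    """Greedily maximize the monotone submodular coverage function
    f(S) = |union of chosen sets| under the cardinality constraint |S| <= k.
    Each round adds the set with the largest marginal gain; by Nemhauser
    et al. (1978) this achieves a 1 - 1/e approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (i for i in range(len(sets)) if i not in chosen),
            key=lambda i: len(sets[i] - covered),
            default=None,
        )
        if best is None or not (sets[best] - covered):
            break  # no remaining set adds anything: stop early
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical ground sets for illustration.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked, covered = greedy_max_cover(sets, 2)
# picked == [0, 2]; covered == {1, 2, 3, 4, 5, 6}
```

Influence spread under the independent cascade model is also monotone submodular, which is why this greedy scheme (and its adaptive variants) underpins the seeding policies discussed in the citing papers.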