Proceedings of the 30th ACM International Conference on Information & Knowledge Management 2021
DOI: 10.1145/3459637.3482052
Adversarial Learning for Incentive Optimization in Mobile Payment Marketing

Abstract: Many payment platforms hold large-scale marketing campaigns, which allocate incentives to encourage users to pay through their applications. To maximize the return on investment, incentive allocations are commonly solved in a two-stage procedure. After training a response estimation model to estimate the users' mobile payment probabilities (MPP), a linear programming process is applied to obtain the optimal incentive allocation. However, the large amount of biased data in the training set, generated by the pre…
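The two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the MPP matrix, incentive costs, and budget below are hypothetical stand-ins for a trained response model and real campaign parameters, and the allocation step uses `scipy.optimize.linprog` as a generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Stage 1 stand-in: estimated mobile payment probabilities (MPP).
# p[i, j] = predicted probability that user i pays when given incentive j.
# In practice these come from a trained response model; the numbers here
# are hypothetical.
p = np.array([
    [0.10, 0.30, 0.45],   # user 0
    [0.20, 0.25, 0.28],   # user 1
    [0.05, 0.40, 0.50],   # user 2
])
costs = np.array([0.0, 1.0, 2.0])   # incentive amounts (e.g. coupon value)
budget = 3.0
n_users, n_incentives = p.shape

# Stage 2: linear program over fractional assignment variables x[i, j] >= 0.
# Maximize the expected number of payers, sum_ij p[i, j] * x[i, j],
# subject to: each user receives exactly one (fractional) incentive,
# and total incentive spend stays within the budget.
c = -p.ravel()  # linprog minimizes, so negate the objective

# One equality row per user: sum_j x[i, j] = 1
A_eq = np.zeros((n_users, n_users * n_incentives))
for i in range(n_users):
    A_eq[i, i * n_incentives:(i + 1) * n_incentives] = 1.0
b_eq = np.ones(n_users)

# Single budget row: sum_ij costs[j] * x[i, j] <= budget
A_ub = np.tile(costs, n_users)[None, :]
b_ub = np.array([budget])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, 1.0))
x = res.x.reshape(n_users, n_incentives)
print("expected payers:", -res.fun)
print("allocation:\n", x.round(2))
```

A fractional LP relaxation like this is the usual second stage of such two-step frameworks; any bias in the stage-1 MPP estimates propagates directly into the allocation, which is the problem the paper's adversarial learning approach targets.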

Cited by 4 publications (6 citation statements)
References 25 publications (22 reference statements)
“…Many classic methods use a two-step framework [5,8,19,23,35,36,39], i.e., a response prediction step and a decision making step. Response prediction models include DNN [8,39] or GNN [23,36], while the decision making step can utilize dual method [40], linear programming [8,36], bisection method [39] or control based method [35]. These methods only optimize the immediate reward and fail to capture the long-term effect.…”
Section: Related Work
confidence: 99%
“…As these metrics are hard to directly optimize, conventional methods use immediate user responses, like the coupon redemption rate [36], as surrogates. Typically these methods take a two-step framework [5,8,19,35,39]. They first build a response model, which estimates users' immediate responses to different incentives [8,23], then solve a constrained optimization problem to make the budget allocation [5,40].…”
Section: Introduction
confidence: 99%
“…Incentive allocation relies on treatment effect estimation models to estimate users' purchase probability under different incentives. PCAN [124] was proposed to learn an unbiased model by leveraging a small set of unbiased data. Specifically, a biased network was built to generate unbiased data representations by controlling the distribution difference relative to an unbiased network.…”
Section: Marketing Applications
confidence: 99%
“…In addition, with the flexibility of neural network design, it is easy to realize deconfounding of uplift modeling on non-RCT data. Several deep learning based methods (Johansson, Shalit, and Sontag 2016; Yao et al. 2018; Yu et al. 2021; Ma, Li, and Cottrell 2020; Li et al. 2021; Künzel et al. 2018; Yao et al. 2019; Chen et al. 2021; Yao et al. 2021) successfully extend the traditional approaches with deep learning and achieve improvements in uplift modeling.…”
Section: Related Work
confidence: 99%