Proceedings of the 14th ACM International Conference on Web Search and Data Mining 2021
DOI: 10.1145/3437963.3441719
Long-Term Effect Estimation with Surrogate Representation

Abstract: There are many scenarios where short- and long-term causal effects of an intervention differ. For example, low-quality ads may increase short-term ad clicks but decrease long-term revenue via reduced clicks; search engines measured by inappropriate performance metrics may increase search query shares in the short term but not the long term. This work therefore studies the long-term effect, where the outcome of primary interest, or primary outcome, takes months or even years to accumulate. The observational …

Cited by 9 publications (7 citation statements) | References 38 publications
“…This is because counterfactual outcomes cannot be observed at the individual level, otherwise, ITEs would not be needed. In time series causal effect estimation, data collection is even more challenging because outcomes and time-varying covariates may take months or even years to observe [68,24].…”
Section: Discussion (mentioning)
confidence: 99%
“…While models developed by Robins et al. for estimating effects with time-varying treatments and confounding have existed for decades, they require strong parametric assumptions for estimation (Robins 1994; Robins et al. 2000, 2009). Within the machine learning community, researchers have begun to extend the above-described representation learning approaches to temporal settings using recurrent neural networks (RNN) (Box 7) (Lim et al. 2018; Bica et al. 2020a; Cheng et al. 2021). For example, Cheng et al. (2021) propose to replace each outcome modeling head of CFRNet with an RNN, one modeling the treated outcomes over time and one modeling the control outcomes.…”
Section: Conditioning On Time-varying Confounding (mentioning)
confidence: 99%
“…Within the machine learning community, researchers have begun to extend the above-described representation learning approaches to temporal settings using recurrent neural networks (RNN) (Box 7) (Lim et al. 2018; Bica et al. 2020a; Cheng et al. 2021). For example, Cheng et al. (2021) propose to replace each outcome modeling head of CFRNet with an RNN, one modeling the treated outcomes over time and one modeling the control outcomes. Bica et al. (2020a) more explicitly deal with time-varying confounding by having each unit of the RNN take time-varying X, non-varying features V, T, and representations from the temporally-previous unit as inputs.…”
Section: Conditioning On Time-varying Confounding (mentioning)
confidence: 99%
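
The two citation statements above describe the same architectural idea: a CFRNet-style model whose per-arm outcome heads are recurrent networks over time, fed the time-varying covariates X together with static features V, with the treatment T selecting which head supplies the factual prediction. Below is a minimal PyTorch sketch of that idea; it illustrates only the description quoted above, not the cited authors' implementations, and all names (TemporalCFR, RNNOutcomeHead, repr_dim, hidden_dim) are illustrative assumptions.

```python
# Illustrative sketch only (assumed names, not the cited authors' code):
# a CFRNet-style model with one RNN outcome head per treatment arm,
# consuming time-varying covariates X and static features V.
import torch
import torch.nn as nn


class RNNOutcomeHead(nn.Module):
    """One RNN head per treatment arm, modeling an outcome trajectory over time."""

    def __init__(self, repr_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(repr_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, phi_seq: torch.Tensor) -> torch.Tensor:
        # phi_seq: (batch, time, repr_dim) -> (batch, time) predicted outcomes
        h, _ = self.rnn(phi_seq)
        return self.out(h).squeeze(-1)


class TemporalCFR(nn.Module):
    """Shared representation of (X_t, V) with separate treated/control RNN heads."""

    def __init__(self, x_dim: int, v_dim: int, repr_dim: int = 64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim + v_dim, repr_dim), nn.ReLU())
        self.head_treated = RNNOutcomeHead(repr_dim)
        self.head_control = RNNOutcomeHead(repr_dim)

    def forward(self, x_seq, v_static, t):
        # x_seq: (batch, time, x_dim) time-varying covariates
        # v_static: (batch, v_dim) non-varying features, broadcast over time
        # t: (batch,) binary treatment indicator
        v_seq = v_static.unsqueeze(1).expand(-1, x_seq.size(1), -1)
        phi = self.phi(torch.cat([x_seq, v_seq], dim=-1))
        y1 = self.head_treated(phi)   # outcome trajectory under treatment
        y0 = self.head_control(phi)   # outcome trajectory under control
        y_factual = torch.where(t.bool().unsqueeze(-1), y1, y0)
        return y_factual, y1, y0
```

As a usage sketch, model = TemporalCFR(x_dim=10, v_dim=3) followed by model(torch.randn(8, 12, 10), torch.randn(8, 3), torch.randint(0, 2, (8,))) returns factual and both counterfactual outcome trajectories for 8 units over 12 time steps.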