2017
DOI: 10.1609/icwsm.v11i1.14975
Estimating the Effect Of Exercising On Users’ Online Behavior

Abstract: This study aims to estimate the influence of offline activity on users’ online behavior, relying on a matching method to reduce the effect of confounding variables. We analyze activities of 850 users who are active on both Twitter and Foursquare social networks. Users’ offline activity is extracted from Foursquare posts and users’ online behavior is extracted from Twitter posts. Users’ interests, representing their online behavior, are extracted with regard to a set of topics in several subsequent time interv…

Cited by 5 publications (6 citation statements) · References 0 publications
“…Text representations used in propensity score models generally do not yet leverage recent breakthroughs in NLP, and roughly fall into three groups: those using uni- and bigram representations (De Choudhury et al. 2016; Johansson, Shalit, and Sontag 2016; Olteanu, Varol, and Kiciman 2017), those using LDA or topic modeling (Falavarjani et al. 2017; Roberts, Stewart, and Nielsen 2020; Sridhar et al. 2018), and those using neural word embeddings such as GloVe (Pham and Shen 2017), fastText (Joulin et al. 2017; Chen, Montano-Campos, and Zadrozny 2020), or BERT (Veitch, Sridhar, and Blei 2019; Pryzant et al. 2018). Three classes of estimators are commonly used to compute the ATE: inverse probability of treatment weighting (IPTW), propensity score stratification, and matching, either using propensity scores or, less frequently, some other distance metric.…”
Section: Background and Related Work (mentioning, confidence: 99%)
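The IPTW estimator named in the excerpt above can be illustrated with a minimal sketch. The synthetic data, the single numeric confounder (standing in for the text features a real propensity model would use), and the logistic propensity model are all illustrative assumptions, not the pipeline of any cited study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic observational data: a single confounder x drives both
# treatment assignment t and outcome y (illustrative stand-in for the
# text representations a real propensity model would use).
n = 5000
x = rng.normal(size=(n, 1))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=n)  # true ATE = 2.0

# Step 1: estimate propensity scores e(x) = P(T=1 | x).
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Step 2: stabilized (Hajek) IPTW estimate of the ATE.
w1, w0 = t / e, (1 - t) / (1 - e)
ate = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
print(round(ate, 2))  # should land near the true effect of 2.0
```

For contrast, the raw difference in means `y[t == 1].mean() - y[t == 0].mean()` on the same data overstates the effect, because x shifts both treatment and outcome; the weights correct for that imbalance.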
“…Using the common setting of real social media histories (De Choudhury et al. 2016; Olteanu, Varol, and Kiciman 2017; Veitch, Sridhar, and Blei 2019; Choudhury and Kiciman 2017; Falavarjani et al. 2017; Kiciman, Counts, and Gasser 2018; Saha et al. 2019; Roberts, Stewart, and Nielsen 2020), we identify five challenges consistently present when representing natural language for causal inference:…”
Section: Challenges For Causal Inference With Text (mentioning, confidence: 99%)
“…When explicit randomization is not an option, there are numerous techniques for measuring causality from observational, historical data (Imbens and Rubin 2015), but many of these methods, such as matching (Falavarjani et al. 2017; Ribeiro, Cheng, and West 2022), rely on the strong assumption of unconfoundedness, which stipulates that all relevant variables that affect both the treatment (e.g., civil interactions) and outcome (e.g., subsequent user engagement) are measured. Unconfoundedness is difficult to justify in user behavior studies, where there can be numerous latent attributes that can confound the treatment-outcome relationship of interest (Feder, Riehm, and Mojtabai 2020).…”
Section: Introduction (mentioning, confidence: 99%)
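The matching approach discussed in the excerpt above, which relies on unconfoundedness, can be sketched as one-nearest-neighbor matching on estimated propensity scores. This is a common textbook variant, not the exact method of the cited papers, and the single observed confounder below is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Synthetic data where the one confounder x is fully observed, so the
# unconfoundedness assumption holds by construction (illustrative only).
n = 4000
x = rng.normal(size=(n, 1))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=n)  # true effect = 2.0

# Estimate propensity scores, then match each treated unit to the
# control unit with the nearest score (1-NN matching with replacement).
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
treated, control = np.flatnonzero(t == 1), np.flatnonzero(t == 0)
nn = NearestNeighbors(n_neighbors=1).fit(e[control].reshape(-1, 1))
_, idx = nn.kneighbors(e[treated].reshape(-1, 1))

# Average treatment effect on the treated (ATT): treated outcomes
# minus their matched-control outcomes.
att = np.mean(y[treated] - y[control[idx[:, 0]]])
print(round(att, 2))  # should land near 2.0
```

If x were dropped from the propensity model, the matches would no longer balance the confounder and the estimate would be biased, which is exactly the failure mode the unconfoundedness assumption rules out.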