2019
DOI: 10.1002/sim.8164
Targeted learning with daily EHR data

Abstract: Electronic health records (EHR) data provide a cost- and time-effective opportunity to conduct cohort studies of the effects of multiple time-point interventions in the diverse patient population found in real-world clinical settings. Because the computational cost of analyzing EHR data at daily (or more granular) scale can be quite high, a pragmatic approach has been to partition the follow-up into coarser intervals of pre-specified length. Current guidelines suggest employing a 'small' interval, but the feasi…
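The abstract describes partitioning daily follow-up into coarser intervals of pre-specified length. A minimal sketch of that coarsening step on a toy pandas dataset; the column names, the 30-day interval length, and the choice to carry the last observed treatment value per interval are illustrative assumptions, not details from the paper:

```python
import pandas as pd

# Toy daily EHR follow-up for one patient (90 days of a binary treatment).
daily = pd.DataFrame({
    "day": range(90),
    "treated": [0] * 30 + [1] * 60,
})

interval_len = 30  # assumed interval length in days
daily["interval"] = daily["day"] // interval_len

# Coarsen: keep the last observed treatment value within each interval.
coarse = daily.groupby("interval")["treated"].last().reset_index()
```

In a real analysis the aggregation rule (last value, any exposure, mean, etc.) is itself an analytic choice that interacts with the discretization question the paper studies.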

Cited by 31 publications (27 citation statements). References 67 publications (77 reference statements).
“…(refer to Section VI for more detail); thus the effectiveness of sequence labeling is limited for extracting relevant information from EHR data [42]. To sum up, it is usually difficult for traditional sequence labeling algorithms to extract relevant information from EHR data [43], and traditional algorithms also face challenges in analyzing EHR data because they are not suited to dealing with huge columns of data [44].…”
Section: Deep Learning for EHR Data Analysis (citation type: mentioning, confidence: 99%)
“…Statistical Analyses: We implemented longitudinal targeted maximum likelihood estimation (longitudinal TMLE) 28,30 to estimate wave-specific associations between having an adult child in the US and outcomes of interest. Briefly, longitudinal TMLE models the expected outcome via a sequence of nested regressions and uses wave-specific inverse probability of treatment and censoring weights to update the resulting estimates.…”
Section: Unmet Needs for Assistance (citation type: mentioning, confidence: 99%)
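The "sequence of nested regressions" quoted above refers to the iterated conditional expectation (ICE) backbone of longitudinal TMLE: regress the outcome on the full history, evaluate under the intervention, then regress that prediction back one time point, and average. A toy sketch of just this nested-regression step on simulated two time-point data; the TMLE targeting/weighting update is omitted, and the variable names and data-generating process are entirely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated ordering: L1 -> A1 -> L2 -> A2 -> Y
L1 = rng.normal(size=n)
A1 = rng.binomial(1, 0.5, size=n)
L2 = L1 + A1 + rng.normal(size=n)
A2 = rng.binomial(1, 0.5, size=n)
Y = L2 + A2 + rng.normal(size=n)

def ols_predict(X, y, X_new):
    """Fit OLS with an intercept and predict on X_new."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return np.column_stack([np.ones(len(X_new)), X_new]) @ beta

# Step 1: regress Y on full history, evaluate with A2 set to 1 ("treat").
X = np.column_stack([L1, A1, L2, A2])
X_int = X.copy()
X_int[:, 3] = 1.0
Q2 = ols_predict(X, Y, X_int)

# Step 2: regress that prediction on earlier history, evaluate with A1 = 1.
X = np.column_stack([L1, A1])
X_int = X.copy()
X_int[:, 1] = 1.0
Q1 = ols_predict(X, Q2, X_int)

# Average gives the g-computation estimate of E[Y] under "always treat"
# (true value 2 in this simulation).
psi = Q1.mean()
```

Longitudinal TMLE adds a targeting update to each nested regression, using the inverse probability of treatment and censoring weights mentioned in the quote, to obtain valid inference; the sketch above shows only the untargeted ICE recursion.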
“…Due to computational demands, we only used the gradient boosting machine algorithm for treatment and attrition models for analyses of binary outcomes. Estimation was performed using stremr 30,48 and sl3 49 packages for R. 50 We implemented multiple imputation procedures to address missing data using the Amelia package for R. 51 We created ten multiply imputed datasets for the baseline wave, and incorporated mean values from multiply imputed baseline models into the multiple imputation of time-varying measures at successive follow-up waves. We combined estimates using Rubin's rules.…”
Section: Unmet Needs for Assistance (citation type: mentioning, confidence: 99%)
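The quote above combines estimates across ten multiply imputed datasets using Rubin's rules: the pooled point estimate is the mean of the per-imputation estimates, and the total variance adds the mean within-imputation variance W to the between-imputation variance B inflated by (1 + 1/m). A minimal sketch (the function name is illustrative):

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m point estimates and their variances across imputed datasets."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()        # pooled point estimate
    w = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)       # between-imputation variance
    t = w + (1 + 1 / m) * b         # total variance
    return q_bar, t

# Example with m = 3 imputations
pooled, total_var = rubins_rules([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
```

Confidence intervals from the pooled variance conventionally use a t reference distribution with Barnard–Rubin degrees of freedom, which this sketch does not compute.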
“…This leaves open the question of how to select an appropriate discretization, ideally at a fine enough scale that would capture all time‐dependent confounding while balancing for inflated variance. Even though some concerns and warnings regarding arbitrary discretization have emerged, 11,12 no criteria have yet been proposed to guide an analyst's choice of discretization. We investigate different tools which collectively inform whether there is adequate data support for a given discretization for use with pooled longitudinal targeted maximum likelihood estimation (LTMLE) 13 .…”
Section: Introduction (citation type: mentioning, confidence: 99%)