2011
DOI: 10.1007/978-1-4419-9782-1_22
Targeted Methods for Biomarker Discovery

Cited by 13 publications (26 citation statements). References 0 publications.
“…In addition to looking on a gene-by-gene basis, one can gain power and possibly aid interpretation by looking for common patterns among sets of genes: by use of clustering algorithms (e.g., Kaufman and Rousseeuw, 1990), by so-called gene-set enrichment analysis (GSEA) (Subramanian et al., 2005; Mootha et al., 2003), or by looking for gene ontologies with overrepresented, differentially expressed genes (Zeeberg et al., 2003; Balasubramanian et al., 2004). We believe there is great promise in using semiparametric models developed for causal inference as tools for biomarker discovery (Tuglus and van der Laan, 2008), which we have begun applying to our benzene microarray data with promising results. Of course, replication, either in follow-up studies or built into a single study, is necessary to exclude false-positive findings.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
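The ontology-enrichment idea in the excerpt above (testing whether differentially expressed genes are overrepresented in a gene-ontology category) is commonly implemented as a hypergeometric tail test. The sketch below is an illustration of that generic over-representation test, not code from the chapter; the function name and toy counts are invented for the example:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_genes, n_de, set_size, n_de_in_set):
    """Hypergeometric over-representation p-value for one gene set.

    n_genes     : total genes measured on the array
    n_de        : genes called differentially expressed
    set_size    : genes annotated to the category of interest
    n_de_in_set : overlap between the DE list and the category
    Returns P(X >= n_de_in_set) under random draws without replacement.
    """
    return hypergeom.sf(n_de_in_set - 1, n_genes, set_size, n_de)

# 20 of a 50-gene category among 500 DE genes out of 20,000 measured:
# far more overlap than the ~1.25 expected by chance, so a tiny p-value.
p = enrichment_pvalue(20000, 500, 50, 20)
```

In practice one would apply this across many categories and adjust for multiple testing, which connects back to the false-positive concerns raised at the end of the quoted passage.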
“…However, if one selects the flanking markers too far from the marker of interest, they will not adjust well for the markers lying between the marker of interest and the flanking markers. This could be a subject-matter-driven decision, or one could require that the flanking markers not exceed a correlation of δ with A; simulations suggest that δ = 0.7 is a good choice (Tuglus and van der Laan, 2008). Another interesting option is to define a set of δ-values and present the TMLE and corresponding statistical inference for each choice of δ, and thereby for each δ-specific effect parameter.…”
Section: TMLE (citation type: mentioning; confidence: 99%)
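The δ-capped selection rule described above can be sketched as follows. This is an illustrative reconstruction, not the cited authors' code: it assumes markers are ordered along the chromosome and takes, on each side of the marker of interest, the nearest marker whose absolute correlation with it does not exceed δ.

```python
import numpy as np

def select_flanking_markers(X, idx, delta=0.7):
    """Nearest admissible flanking markers around column `idx`.

    X     : (n_samples, n_markers) marker matrix, columns in map order
    idx   : column index of the marker of interest (A)
    delta : correlation cap; 0.7 per the simulation-based suggestion
    """
    a = X[:, idx]
    flanks = []
    # scan left, then right, stopping at the first marker under the cap
    for step in (-1, 1):
        j = idx + step
        while 0 <= j < X.shape[1]:
            r = abs(np.corrcoef(a, X[:, j])[0, 1])
            if r <= delta:
                flanks.append(j)
                break
            j += step
    return flanks

# toy data: markers 1 and 3 are nearly collinear with marker 2,
# markers 0 and 4 are only weakly correlated with it
rng = np.random.default_rng(0)
base = rng.normal(size=200)
X = np.column_stack([
    base + rng.normal(scale=2.0, size=200),   # weak (corr ~ 0.45)
    base + rng.normal(scale=0.1, size=200),   # strong (corr ~ 0.99)
    base,                                     # marker of interest
    base + rng.normal(scale=0.1, size=200),   # strong
    base + rng.normal(scale=2.0, size=200),   # weak
])
flanks = select_flanking_markers(X, idx=2, delta=0.7)
```

With δ = 0.7 the near-collinear neighbours (columns 1 and 3) are skipped and the weakly correlated columns 0 and 4 are chosen, matching the intent of the rule: adjust for linkage without conditioning on near-copies of A itself.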
“…under the constraint m(A = 0, W | β) = 0 for all β and W. Analogous to the previously presented tVIM for a univariate outcome (Tuglus and van der Laan, 2008), the variable A can be binary or continuous. We can also represent this measure in traditional semiparametric model form…”
Section: Variable Importance (citation type: mentioning; confidence: 99%)
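A simple working model satisfying the stated constraint is m(A, W | β) = A(β₀ + β₁W), which is identically zero at A = 0 for every β. Purely as an illustration (not the semiparametric estimator of the cited work), the sketch below fits this form by least squares with a crude linear stand-in for the nuisance regression in W:

```python
import numpy as np

def vim_fit(Y, A, W):
    """Least-squares fit of the working model
        m(A, W | beta) = A * (beta0 + beta1 * W),
    which satisfies m(0, W | beta) = 0 for every beta, inside the
    regression E[Y | A, W] ~ g(W) + m(A, W | beta), with g(W)
    approximated linearly (an illustrative simplification).
    """
    X = np.column_stack([
        np.ones_like(A), W,   # crude linear stand-in for g(W)
        A, A * W,             # the constrained m(A, W | beta) part
    ])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef[2:]  # (beta0, beta1)

# simulated data where the true importance curve is A * (1.5 + 0.5 * W)
rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=n)
A = rng.normal(size=n)
Y = 2.0 * W + A * (1.5 + 0.5 * W) + rng.normal(scale=0.1, size=n)
beta = vim_fit(Y, A, W)  # close to (1.5, 0.5)
```

Note that A enters only through terms multiplied by A, so the fitted measure automatically vanishes at A = 0; the real tVIM machinery replaces the linear g(W) with a data-adaptive fit.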
“…It then uses the maximum likelihood estimation (MLE) framework to reduce bias for the targeted parameter by maximizing the likelihood in a direction that corresponds to fitting the target parameter, while treating the initial estimator as a fixed offset. Prior applications of tMLE methods have shown great promise and applicability in the epidemiological and medical fields, in particular for biomarker discovery (Tuglus and van der Laan, 2008). The tVIM-RM method presented here builds upon previous variable-importance methodology (Robins, Mark, and Newey, 1992; Robins and Rotnitzky, 2001; Yu and van der Laan, 2003; van der Laan, 2005), adapting it to repeated-measures data and incorporating methodological updates to increase efficiency and computational speed.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
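The targeting step described above, maximizing the likelihood along a fluctuation of the initial fit while holding that fit as a fixed offset, can be sketched for the average treatment effect with a binary outcome. This is a generic one-step TMLE illustration on invented simulated data, not code from the cited work; for simplicity the true propensity score is plugged in and the initial outcome estimator is deliberately poor:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tmle_ate(Y, A, Q0, Q1, g, n_iter=25):
    """One targeting step for the ATE with binary Y.

    Q0, Q1 : initial outcome predictions at A=0 / A=1 (the fixed offset)
    g      : propensity scores P(A = 1 | W)
    Fits the fluctuation  logit(Q) + eps * H  by MLE, where
    H = A/g - (1-A)/(1-g) is the 'clever covariate', then plugs the
    updated predictions into the ATE plug-in formula.
    """
    QA = np.where(A == 1, Q1, Q0)
    H = A / g - (1 - A) / (1 - g)
    offset = np.log(QA / (1 - QA))
    eps = 0.0
    for _ in range(n_iter):  # 1-D Newton-Raphson for eps
        p = sigmoid(offset + eps * H)
        grad = np.sum(H * (Y - p))
        hess = -np.sum(H ** 2 * p * (1 - p))
        eps -= grad / hess
    Q1_star = sigmoid(np.log(Q1 / (1 - Q1)) + eps / g)
    Q0_star = sigmoid(np.log(Q0 / (1 - Q0)) - eps / (1 - g))
    return np.mean(Q1_star - Q0_star)

# toy data: true ATE is roughly 0.21 under this design
rng = np.random.default_rng(2)
n = 20000
W = rng.normal(size=n)
g = sigmoid(0.5 * W)
A = rng.binomial(1, g)
Y = rng.binomial(1, sigmoid(-0.5 + A + W))
Q_init = np.full(n, Y.mean())  # deliberately misspecified initial fit
est = tmle_ate(Y, A, Q_init, Q_init, g)
```

Because the initial estimator enters only as an offset, the one-dimensional MLE for eps is all that is refit, which is what the excerpt means by "maximizing the likelihood in a direction that corresponds to fitting the target parameter"; with a correct g the update recovers the effect even from this constant initial fit.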