2008
DOI: 10.1186/1471-2105-9-140

Normalization of oligonucleotide arrays based on the least-variant set of genes

Abstract: Background: It is well known that the normalization step of microarray data makes a difference in the downstream analysis. All normalization methods rely on certain assumptions, so differences in results can be traced to different sensitivities to violation of those assumptions. Illustrating the lack of robustness, in a striking spike-in experiment all existing normalization methods fail because of an imbalance between up- and down-regulated genes. This means it is still important to develop a normalization metho…

Cited by 52 publications (64 citation statements) · References 30 publications
“…Probes with an E value of > 0.7 (around 17% of all probes) were selected as the LOS. For each microarray, a LOWESS (locally weighted scatterplot smoothing) (40) polynomial with a smoothing parameter of 0.2 was fitted to the raw data from LOS probes and their average expression over the whole time series, similar to a procedure described previously (41). All probe values were normalized with respect to these polynomials.…”
Section: Methods (mentioning)
confidence: 99%
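The procedure quoted above can be sketched roughly in Python: select a least-variant ("LOS") reference set, fit a smoothing curve per array against the series-wide LOS averages, and map every probe through that curve. The simulated data, the stand-in stability score, and the binned-median smoother (a crude substitute for a LOWESS fit with smoothing parameter 0.2) are all assumptions for illustration, not the cited authors' code.

```python
import numpy as np

# Illustrative sketch (assumed data and a crude smoother, not the cited
# authors' code) of LOS-based normalization: fit a curve per array using
# only least-variant probes, then map all probes through it.

rng = np.random.default_rng(0)
n_probes, n_arrays = 500, 6
true = rng.lognormal(mean=8.0, sigma=1.0, size=n_probes)
scale = rng.uniform(0.5, 2.0, size=n_arrays)          # array-specific bias
raw = true[:, None] * scale[None, :] * rng.lognormal(
    0.0, 0.05, size=(n_probes, n_arrays))

# Stand-in stability score: probes with a score > 0.7 form the LOS
# (around 17% of probes in the quoted study; here the score is random).
e_value = rng.uniform(size=n_probes)
los = e_value > 0.7

# Average expression of each LOS probe over the whole series.
los_mean = raw[los].mean(axis=1)

def fit_curve(x, y, n_bins=20):
    """Piecewise-linear curve through binned medians -- a crude stand-in
    for the LOWESS fit (smoothing parameter 0.2) in the quote."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    chunks = np.array_split(np.arange(xs.size), n_bins)
    bx = np.array([xs[c].mean() for c in chunks])
    by = np.array([np.median(ys[c]) for c in chunks])
    return bx, by

normalized = np.empty_like(raw)
for j in range(n_arrays):
    bx, by = fit_curve(raw[los, j], los_mean)
    # Map every probe on this array onto the common reference scale.
    normalized[:, j] = np.interp(raw[:, j], bx, by)
```

After this mapping, the array-specific multiplicative biases are largely removed, so the per-array medians should sit much closer together than in the raw matrix.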
“…The CONSTANd algorithm is also a data-driven normalization method and adopts the expected value (mean) as a measure of central tendency. This global normalization scheme is justified when three key assumptions are fulfilled (53). First, all normalization methods require a reference set of observations that does not vary between the samples.…”
Section: Discussion (mentioning)
confidence: 99%
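A mean-based global scheme in the spirit of CONSTANd can be sketched as iterative proportional fitting: rows and columns of a positive quantification matrix are alternately rescaled until both sets of means reach the common target 1/m (m columns). This is a from-scratch illustration under simplifying assumptions, not the published CONSTANd implementation.

```python
import numpy as np

# Sketch of mean-based global normalization in the spirit of CONSTANd
# (a from-scratch illustration, not the published implementation):
# alternately rescale rows and columns until every row mean and every
# column mean equals 1/m, where m is the number of columns.

def constand_like(x, n_iter=50):
    x = np.asarray(x, dtype=float).copy()
    n, m = x.shape
    target = 1.0 / m
    for _ in range(n_iter):
        x *= target / x.mean(axis=1, keepdims=True)   # row means -> 1/m
        x *= target / x.mean(axis=0, keepdims=True)   # column means -> 1/m
    return x

rng = np.random.default_rng(1)
out = constand_like(rng.lognormal(0.0, 1.0, size=(40, 4)))
```

Note that the two constraints are mutually consistent: row means of 1/m give a grand total of n, which is exactly what column means of 1/m imply, so the alternating rescaling converges for strictly positive matrices.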
“…This type of inaccuracy can be remedied by data normalization. Fortunately, a plethora of data normalization methods exists that can be borrowed from microarray, LC-MS, or NMR data analysis (12)(13)(14)(15). Some of these normalization techniques are already implemented in software packages dedicated to mass spectral data, as is the case for DAPAR, implemented in R/Bioconductor.…”
mentioning
confidence: 99%
“…The proposed method is an adaptation of a similar algorithm developed for Affymetrix mRNA arrays (Calza et al. 2007) to a modified one for the miRNA Agilent platform, with improved modeling of the data. The main motivation for the joint GLM modeling lies in the heterogeneous variance of intensities across probes for a large proportion of miRNAs, so that a standard RLM would not efficiently estimate array and probe effects, and thus would likely result in suboptimal identification of the reference set for normalization.…”
Section: Discussion (mentioning)
confidence: 99%