Sources of bias in outcome assessment in randomised controlled trials: a case study
2014
DOI: 10.1080/13803611.2014.985316

Cited by 10 publications (26 citation statements)
References 14 publications
“…The effect size (in the form of standardised mean difference) for the PIM test was 0.33 and for the SENT-R-B test was 1.11. Ainsworth et al (2015) is predicated on the assumption that these numbers are estimates of the effectiveness of the intervention, should therefore be expected to be similar, and that the apparently large difference consequently needs explanation. Ainsworth et al (2015) suggests three alternative explanations for the difference in effect size: the timing of the tests, their treatment inherence and the nature of masking.…”
Section: Context
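For reference, the standardised mean difference quoted above (0.33 and 1.11) is conventionally computed as follows. This is the standard textbook formula, not a derivation taken from Ainsworth et al (2015) itself:

```latex
% Standardised mean difference between treatment (T) and control (C) groups:
% group means divided by the pooled standard deviation.
d = \frac{\bar{X}_T - \bar{X}_C}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
```

Because the two outcome measures are standardised on their own pooled spreads, comparable estimates would ordinarily be expected if both tests measured the same underlying intervention effect, which is why the gap between 0.33 and 1.11 calls for explanation.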
“…Ainsworth et al (2015) is predicated on the assumption that these numbers are estimates of the effectiveness of the intervention, should therefore be expected to be similar, and that the apparently large difference needs explanation. Ainsworth et al (2015) suggests three alternative explanations for the difference in effect size: the timing of the tests, their treatment inherence and the nature of masking. The paper presents various arguments as evidence for or against each alternative, concluding that "the evidence from this study suggests that the difference in effect size between the primary and secondary outcome is probably due to lack of blinding and nonindependence of teachers administering the tests" (p. 12), albeit maintaining that the only way to be sure of this explanation would be to conduct an RCT comparing masked and non-masked assessor conditions.…”
Section: Context
“…It is the interaction of the virus with the body's physiology and the treatment. Also, being a randomised intervention, there is a claim that the results have more general applicability and a high degree of validity (Ainsworth et al 2015). This works for things like viruses because they do not have a mind of their own but are of the same 'mind' (e.g.…”
Section: My Virus Has a Mind of Its Own - On the Ontological Question
“…For example … the Bush government in the USA attempted to construct 'empirically randomised control trials' as the 'gold standard' for assessing educational research and for evaluating all research applications … The situation in the UK has been similar. (Lingard & Gale 2010: 33) The reasons for doing so are (1) ostensibly 'grounded in political and professional concerns about underachievement and educational equity' (Chapter 8) and (2) for purposes of efficiency and effectiveness, so that 'time and money were not wasted on irrelevant or ineffective strategies' (Chapter 8); (3) both pursued in the belief that much education research is unscientific and based in values, opinion and 'bias' that have a disregard for evidence (Hammersley 2005; Ainsworth et al 2015; Pring 2015). In fact, advocates of RCTs at government level have called for their use in educational research precisely because they are presumed to yield knowledge superior and more reliable than other methods (e.g.…”
Section: Doing a Bradbury - On the Ethical Questions of Distinction An…