2009
DOI: 10.1348/000712608x377117

Standardized or simple effect size: What should be reported?

Abstract: It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy (though this may be changing), and when reported it is far from clear that appropriate effect size statistics are employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors such as reliability, range restriction and differences in design that distort standardized…
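The abstract contrasts standardized effect sizes (such as Cohen's d, which scale a difference by a sample standard deviation) with simple effect sizes (the raw difference in the original units of measurement). As a minimal sketch of that distinction, not anything taken from the paper itself, the following Python snippet computes both for two hypothetical independent groups; note that d depends on the sample standard deviation, which is exactly the quantity affected by the factors the abstract lists, such as unreliability and range restriction:

```python
import numpy as np

def simple_and_standardized(x, y):
    """Raw mean difference (simple effect size) and Cohen's d (standardized)."""
    diff = np.mean(x) - np.mean(y)  # simple effect size, in original units
    nx, ny = len(x), len(y)
    # Pooled standard deviation for two independent groups
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return diff, diff / sp

rng = np.random.default_rng(0)
treatment = rng.normal(105, 15, size=40)  # hypothetical scores
control = rng.normal(100, 15, size=40)
diff, d = simple_and_standardized(treatment, control)
print(f"simple difference: {diff:.2f} points; Cohen's d: {d:.2f}")
```

The same raw difference yields a different d whenever the sample standard deviation changes, which is the sense in which the two kinds of effect size are not interchangeable.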

Cited by 465 publications (397 citation statements) · References 34 publications
“…Furthermore, with more than 40% of items rated as 'unclear' in our risk of bias assessment, incomplete reporting was considered a major problem in our review, which, in turn, affects the confidence in the results from the included studies. Second, guidelines for effect sizes are arbitrary and findings from studies should always be interpreted in terms of their practical and clinical significance (for discussion, see Baguley, 2009). …”
Section: Discussion (mentioning; confidence: 99%)
“…Many studies and research syntheses have to create a common scale across disparate tests by converting scores to standard deviation units or z-scores, where a standard deviation is defined as the average deviation from the mean across test-takers on a given assessment. In this case, however, all of the test scores are reported in grade equivalents or in forms that can be easily converted to grade equivalents, so we use these as our common metric, thereby avoiding the need to use standard deviation units for different tests (Baguley, 2009). Grade-level equivalents have the additional benefit of being easily understood by policymakers and practitioners, because one unit is equal to a single, nine-month academic year of learning in a particular content area.…”
Section: Creating a Common Performance Scale (mentioning; confidence: 99%)
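The excerpt above describes the standard move of converting scores from disparate tests to standard deviation units (z-scores) to obtain a common scale, which those authors avoid by using grade equivalents instead. As a minimal sketch of that conversion (the data and scales here are invented for illustration):

```python
import numpy as np

def to_z_scores(scores):
    """Convert raw test scores to standard deviation units (z-scores)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std(ddof=1)

# Two hypothetical tests reported on very different raw scales
test_a = [12, 15, 9, 20, 14]        # e.g., a short 25-point test
test_b = [480, 510, 455, 530, 495]  # e.g., a scaled assessment

# After conversion, both sets of scores are in the same unit:
# standard deviations from each test's own mean.
print(np.round(to_z_scores(test_a), 2))
print(np.round(to_z_scores(test_b), 2))
```

The trade-off the excerpt points to is that a standard deviation unit means something different for every test and sample, whereas a grade equivalent has a fixed, policy-relevant interpretation.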
“…Although it is recognised that there are arguments against using interval scales and parametric methods with ordinal data, a reasoned assignment of an interval scale to the ordinal categories here can generate a useful raw score measure and aid interpretation and communication of results in the context of the study (Baguley, 2009; Velleman & Wilkinson, 1993). We are concerned here with making judgements on quality improvements, and a scale that gives a stronger positive emphasis to a desired goal of "very good" (rather than just "good"), and a stronger negative emphasis to "very poor" (rather than just "poor") may be useful.…”
Section: Effect Size (mentioning; confidence: 99%)
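The excerpt above argues for assigning interval values to ordinal quality categories, with the extremes ("very good", "very poor") weighted more heavily than the intermediate categories. The cited study's actual numeric assignment is not given here, so the mapping below is purely hypothetical, chosen only to match that description:

```python
# Hypothetical interval-scale assignment for ordinal quality ratings.
# The extremes sit further from zero than the intermediate categories,
# matching the excerpt's emphasis; the exact values are an assumption.
RATING_SCORES = {
    "very poor": -3,
    "poor": -1,
    "good": 1,
    "very good": 3,
}

def mean_quality_score(ratings):
    """Average the assigned interval scores over a list of ordinal ratings."""
    return sum(RATING_SCORES[r] for r in ratings) / len(ratings)

print(mean_quality_score(["good", "very good", "poor", "very good"]))  # 1.5
```

This is the kind of "reasoned assignment" the excerpt refers to: defensible for communicating results in context, though, as the excerpt acknowledges, treating ordinal categories as interval data remains a contested choice.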