2015
DOI: 10.1111/infa.12078
A Missed Opportunity for Clarity: Problems in the Reporting of Effect Size Estimates in Infant Developmental Science

Abstract: Several years ago, the American Psychological Association began requiring that effect size estimates be reported to provide a better indication of the associative strength between factors and dependent measures in empirical studies (Publication manual of the American Psychological Association, 2010, Author, Washington, DC). Accordingly, developmental journals require or strongly recommend that effect size estimates be included in published work. Potentially, this trend has important benefits for infancy research give…

Cited by 10 publications (14 citation statements)
References 29 publications
“…Nevertheless, when possible, it seems important to consider the paradigm being used, and possibly use a more sensitive way of measuring infants’ capabilities. One reason that researchers do not appear to choose the most robust methods might again be due to a lack of consideration of meta‐analytic effect size estimates, which in turn might be (partially) due to a lack of information on (how to interpret) effect size estimates and lack of experience using them for study planning (Mills‐Smith, Spangler, Panneton, & Fritz, ). We, thus, recommend to change this practice and take into account the possibility that different methods’ sensitivity is reflected in effect size.…”
Section: Discussion (mentioning)
confidence: 99%
“…A possible reason for prospective power calculations and meta‐analyses being rare lies in the availability of data in published reports. Despite longstanding recommendations to move beyond the persistent focus on p values (such as American Psychological Association, ), a shift toward effect sizes or even the reporting of them has not (yet) been widely adopted (Mills‐Smith et al., ).…”
Section: Discussion (mentioning)
confidence: 99%
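The statement above notes that prospective power calculations are rarely performed, partly because effect size estimates are seldom reported or used for study planning. As a minimal sketch of what such planning could look like, the simulation below estimates power for a one-sample t-test via Monte Carlo; the effect size (d = 0.5) and sample size (n = 30) are hypothetical values, not figures from the cited studies.

```python
import math
import random
import statistics

# Hypothetical planning inputs: a meta-analytic effect size estimate (Cohen's d)
# and a candidate per-study sample size. Neither comes from the cited papers.
d, n = 0.5, 30
t_crit = 2.045          # two-tailed critical t for alpha = .05, df = n - 1 = 29
reps = 2000             # number of simulated experiments

random.seed(1)
hits = 0
for _ in range(reps):
    # Simulate one study: n observations drawn from N(d, 1).
    sample = [random.gauss(d, 1.0) for _ in range(n)]
    # One-sample t statistic against a null mean of 0.
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    if abs(t) > t_crit:
        hits += 1

power = hits / reps
print(f"estimated power: {power:.2f}")
```

A researcher could rerun this with different n values until the estimated power reaches a target such as .80, which is exactly the kind of effect-size-informed planning the quoted passage says is missing.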
“…We have followed recent recommendations by rendering our data publicly accessible and open to updates, i.e. we have built a Community‐Augmented Meta‐Analysis (for a summary of the benefits for a research community, see Tsuji, Bergmann & Cristia, ; see also Mills‐Smith, Spangler, Panneton & Fritz, , for a discussion of the importance, in the field of infant developmental science, to incorporate effect size into our interpretations). Here, we limit ourselves to a report on the meta‐analysis; all relevant information on the database and extensive supplementary materials can be found on the companion website (http://inworddb.acristia.org).…”
Section: Introduction (mentioning)
confidence: 99%
“…Also, there is reason to believe that effect sizes in infancy research are often incorrectly reported; for example, partial eta-squared (ηp²) is often misreported as eta-squared (η²). This confusion is likely to inflate the practical significance of the findings, leading to an overestimation of the statistical magnitude and importance of effects (Mills-Smith, Spangler, Panneton, & Fritz, 2015). Therefore, the mean effect size of 0.67 reported by Dunst et al (2012) is likely an overestimate of the real effect size.…”
Section: The Current Study: Motivations and Goals (mentioning)
confidence: 97%
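The confusion described above (reporting ηp² as η²) inflates apparent effect sizes because the two statistics use different denominators. The sketch below illustrates this with hypothetical ANOVA sums of squares for a two-factor design; the numbers are invented for illustration only.

```python
# Why partial eta-squared exceeds eta-squared in multi-factor designs.
# All sums of squares below are hypothetical values chosen to show the formulas.

ss_effect = 20.0   # sum of squares for the factor of interest (hypothetical)
ss_other = 30.0    # sum of squares for other factors/interactions (hypothetical)
ss_error = 50.0    # residual (error) sum of squares (hypothetical)
ss_total = ss_effect + ss_other + ss_error

# Eta-squared: effect variance as a share of TOTAL variance.
eta_sq = ss_effect / ss_total

# Partial eta-squared: effect variance relative to effect + error only,
# excluding variance explained by the other factors (so its denominator is smaller).
eta_p_sq = ss_effect / (ss_effect + ss_error)

print(f"eta^2   = {eta_sq:.2f}")    # 0.20
print(f"eta_p^2 = {eta_p_sq:.2f}")  # 0.29
```

Because ηp² ≥ η² whenever other factors explain any variance, labeling a partial value as η² overstates the share of total variance the effect accounts for, which is the inflation the quoted passage warns about.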