Handbook of Research Methods in Industrial and Organizational Psychology 2004
DOI: 10.1002/9780470756669.ch6

Using Power Analysis to Evaluate and Improve Research

Abstract: One of the most common statistical procedures in the behavioral and social sciences is to test the hypothesis that treatments or interventions have no effect, or that the correlation between two variables is equal to zero, etc. Null hypothesis (H0) tests have long been viewed as a critical part of the research process, and in the mind of some researchers, statistical analyses start and end with these "significance tests." Power analyses deal with the relationships between the structure of these statistical tests…
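As a minimal sketch of the kind of calculation the abstract refers to (not code from the chapter), the snippet below shows how the power of an independent-samples t-test rises with per-group sample size for a fixed effect size; the effect size d = 0.50, the alpha level, and the sample sizes are illustrative assumptions.

```python
# Illustrative power calculation for an independent-samples t-test:
# power depends on the assumed effect size (Cohen's d), alpha, and
# the per-group sample size. All values below are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.50        # assumed population effect size
alpha = 0.05    # conventional Type I error rate

for n in (20, 50, 100, 200):
    power = analysis.power(effect_size=d, nobs1=n, alpha=alpha,
                           alternative='two-sided')
    print(f"n = {n:>3} per group -> power = {power:.2f}")
```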

Cited by 7 publications (4 citation statements) | References 28 publications
“…Due to the large amount of uncertainty when estimating population parameters, small samples tend to produce unstable and untrustworthy results (Murphy, 2002). If research showed that 60 percent of all members support a name change of their SCM society in order to reflect developments in the field, it would be doubtful whether there is a real majority (at population level) if only ten random members were asked for their opinion in the survey.…”
Section: Methodological Issues of Small Sample Studies
confidence: 99%
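To make the quoted n = 10 survey example concrete, the hedged sketch below computes a 95% confidence interval for the population proportion when 6 of 10 randomly sampled members favour the name change; the interval is far too wide to claim a real majority. The choice of the Wilson interval is our assumption, not part of the cited work.

```python
# Uncertainty of a proportion estimated from only 10 respondents.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 6, 10  # 6 of 10 sampled members in favour (60%)
low, high = proportion_confint(count, nobs, alpha=0.05, method='wilson')
print(f"observed support: {count/nobs:.0%}")
print(f"95% CI for population support: [{low:.2f}, {high:.2f}]")  # spans 0.50
```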
“…Further, by commonly held standards in social and behavioural sciences (Murphy, 2003), with a power of 0.80 (α = 0.05), and a research design with the capacity to detect even small effects, the results speak for themselves. Because the standard deviation measures the variability in outcomes, independent of standardized meta-assessment training, d = 0.27 indicates that the average effect is more than a quarter of the size of the variability in outcomes that one might expect without standardized meta-assessment training.…”
Section: Results
confidence: 99%
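As a rough illustration of the figures quoted above (power = 0.80, α = 0.05, d = 0.27), the snippet below solves for the per-group sample size an independent-samples t-test would need to detect an effect of that size; the two-group design is assumed for illustration and is not a detail taken from the cited study.

```python
# Sample size needed to detect d = 0.27 with power 0.80 at alpha = 0.05
# in a two-group t-test (illustrative design assumption).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.27, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(f"required n per group: {n_per_group:.0f}")  # roughly 216 per group
```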
“…First, although we collected data from two samples to test our hypothesized relationships, the size of each sample was relatively small. The sample sizes could have impacted our statistical power and therefore increased the probability of failing to reject the H0 when it was, in fact, untrue (Murphy, 2001). Since some support was found for the hypothesized relationships, we do not think sample size is a major concern.…”
Section: Discussion
confidence: 99%
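The sketch below puts the quoted power concern into numbers: the probability of a Type II error (failing to reject a false H0) when testing a modest relationship in a modest sample. The values r = .20 and n = 100 are hypothetical, and Fisher's z approximation for a correlation test stands in for whatever analyses the study actually used.

```python
# Power and Type II error rate for a two-sided test of H0: rho = 0,
# using Fisher's z approximation. All values are hypothetical.
import numpy as np
from scipy.stats import norm

r, n, alpha = 0.20, 100, 0.05          # assumed population correlation and sample size
z_crit = norm.ppf(1 - alpha / 2)       # two-sided critical value
ncp = np.arctanh(r) * np.sqrt(n - 3)   # noncentrality under Fisher's z
power = (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)
print(f"power = {power:.2f}, Type II error rate = {1 - power:.2f}")
```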