1998
DOI: 10.1002/j.1556-6676.1998.tb02702.x

Statistical Significance and Reliability Analyses in Recent Journal of Counseling & Development Research Articles

Abstract: The mission of the Journal of Counseling & Development (JCD) includes serving as “a scholarly record of the counseling profession” (Borders, 1996, p. 3) and as part of the “conscience of the profession.” This ambitious responsibility may require the willingness to engage in occasional self‐study. This study investigated 2 aspects of research analyses in the quantitative research studies reported in 1996 JCD issues.


Cited by 76 publications (67 citation statements)
References 39 publications (61 reference statements)
“…Such statements are extremely misleading because reliability is a function of scores and not of instruments (Wilkinson and the Task Force on Statistical Inference, 1999; Thompson and Vacha-Haase, 2000; Onwuegbuzie and Daniel, 2002a, 2002b, 2003). It is no wonder, then, that the majority of quantitative researchers do not report reliability coefficients for data from their samples (Willson, 1980; Meier and Davis, 1990; Thompson and Snyder, 1998; Vacha-Haase et al., 1999; Onwuegbuzie, 2002b; Onwuegbuzie and Daniel, 2002a, 2002b, 2003), even though this has been recommended by authoritative and influential sources (e.g., American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1999; Wilkinson and the Task Force on Statistical Inference, 1999).…”
Section: Instrumentation
confidence: 96%
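The point that reliability attaches to scores rather than to instruments implies that researchers should estimate coefficients such as Cronbach's alpha from their own sample's item responses instead of citing a manual. A minimal sketch of that computation in Python (the function name and the toy score matrix are illustrative, not taken from the article):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    computed from the data at hand, so it characterizes these scores only.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Because the estimate depends on the response matrix, a different sample of respondents will generally yield a different alpha for the same instrument, which is the reporting practice the cited sources recommend.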
“…However, researchers have repeatedly found that reliability reporting for the data at hand was the exception rather than the norm in journals. [16-20] Given that accurate interpretation of test scores is contingent upon reliability data, we attempted to fill the gap in the CAPE literature by conducting the first study that examines reliability estimates of a large sample of CAPE scores. A review was conducted to identify published studies that have utilized the CAPE.…”
Section: Psychometric Properties
confidence: 99%
“…[16-22] Here, we hope to provide new insights concerning the utility of the CAPE through a review and meta-analysis of reliability coefficients and factor structures. We also hope to alert readers to the importance of psychometric properties in clinical research and practice.…”
confidence: 99%
“…Descriptive statistical analyses were carried out, and correlations were calculated to determine the association between developmental age, developmental outcomes in different areas, dimensions of adaptive behavior, and levels and types of observed engagement. Although “statistical significance” is reported for the correlations, the results are interpreted according to their practical significance, since correlation coefficients are by themselves sufficient to determine the strength of an association and p values are highly influenced by sample size (Thompson & Snyder, 1998). The practical significance of the results was interpreted according to the conventions defined by Cohen (1992): an r of .10 was considered small, revealing a weak association; an r of .30 was considered medium, revealing a moderate association; and an r of .50 was interpreted as large, revealing a strong association.…”
Section: Não Envolvido (Not Involved)
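The passage makes two claims: that r alone conveys the strength of an association, and that the p value for the same r depends on sample size. Both can be illustrated with a short sketch (the function names are illustrative; the cutoffs follow Cohen, 1992, and the t formula is the standard test of H0: rho = 0):

```python
import math

def interpret_r(r: float) -> str:
    """Label |r| using Cohen's (1992) conventions: .10 small, .30 medium, .50 large."""
    a = abs(r)
    if a >= 0.50:
        return "large (strong association)"
    if a >= 0.30:
        return "medium (moderate association)"
    if a >= 0.10:
        return "small (weak association)"
    return "below Cohen's smallest benchmark"

def t_from_r(r: float, n: int) -> float:
    """t statistic for testing rho = 0 with n paired observations.

    t = r * sqrt((n - 2) / (1 - r^2)); for a fixed r, t grows with n,
    so 'statistical significance' reflects sample size, not only effect size.
    """
    return r * math.sqrt((n - 2) / (1 - r ** 2))
```

For example, the same r = .30 gives t ≈ 1.66 with n = 30 but t ≈ 5.43 with n = 300, which is why the citing authors interpret practical significance rather than p values alone.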