Scientific associations and measurement experts in psychology and education have voiced various standards and best-practice recommendations concerning reliability data over the years. Yet in the counseling psychology literature, there is virtually no single-source compilation and articulation of good practices for reporting, analyzing, and interpreting reliability to guide applied researchers intending to use scales rather than develop them. Therefore, focusing on Cronbach’s alpha internal consistency reliability estimates, this article (a) defines and provides rationales for seven broad categories of good practices for reporting, analyzing, interpreting, and using reliability data and (b) illustrates some pragmatic strategies for implementing the good practices with respect to reliability data in quantitative studies involving already-developed scales. The authors’ recommendations for good rather than best practices acknowledge that additional or alternative practices may be required when scale development is the researcher’s focus. The authors summarize their good practices in tabular form.
Considering the growing racial and ethnic diversity among supervisees, the number of clinical supervision dyads composed of supervisees and supervisors of Color is likely to increase dramatically. Although extant research has focused on supervision involving White supervisors paired with racial, ethnic, and linguistic minority supervisees, few authors have explored the supervisory dynamics between supervisors of Color and supervisees of Color. This study used a qualitative analysis of structured survey responses provided by supervisees of Color to argue that racial identity (i.e., supervisors' and supervisees' psychological experiences of race), more than race per se, is essential for managing the racial dynamics of supervisory dyads involving two people of Color. Using Helms's Racial Identity Social Interaction Model (Helms, 1990, 1995), we use a directed content analysis of participants' responses to demonstrate common themes that emerge when race is introduced into the supervision relationship. Based on supervisees' reported experiences, implications for the practice of supervision involving people of Color are offered.
Peer support groups, also known as "self-help groups," provide a unique tool for helping veterans working through the military-to-civilian transition to achieve higher levels of social support and community integration. The number and variety of community-based peer support groups have grown to the point that there are now more visits to these groups each year than to mental health professionals. The focus of these groups on the provision of social support, the number and variety of groups, the lack of cost, and their availability in the community make them a natural transition tool for building community-based social support. A growing literature suggests that these groups are associated with measurable improvements in social support, clinical symptoms, self-efficacy, and coping. For clinical populations, the combination of peer support groups and clinical care results in better outcomes than either alone. Given this evidence, we suggest clinical services use active referral strategies to help veterans engage in peer support groups as a means of improving community reintegration and clinical outcomes. Finally, suggestions for identifying appropriate peer support groups and assisting with active referrals are provided.
Helms, Henze, Sass, and Mifsud (2006) defined good practices for internal consistency reporting, interpretation, and analysis consistent with an alpha-as-data perspective. Their viewpoint (a) expands on previous arguments that reliability coefficients are group-level summary statistics of samples' responses rather than stable properties of scales or measures and (b) encourages researchers to investigate characteristics of reliability data for their own samples and subgroups within their samples. In Study 1, we reviewed past and current reliability reporting practices in a sample of Psychological Assessment articles published across 3 decades (i.e., from the years 1989, 1996, and 2006). Results suggested that contemporary and past researchers' reliability reporting practices have not improved over time and generally were not consistent with good practices. In Study 2, we analyzed an archival data set to illustrate the real-life repercussions of researchers' ongoing misconstrual and misuse of reliability data. Our analyses suggested that researchers should conduct preliminary analyses of their data to determine whether their data fit the assumptions of their reliability analyses. Also, the results indicated that reliability coefficients varied across racial or ethnic and gender subgroups, and these variations had implications for whether the same depression measure should be used across groups. We concluded that the alpha-as-data perspective has implications for one's choice of psychological measures and interpretation of results, which subsequently affect conclusions and recommendations. We encourage researchers to recognize the people behind their data by adopting better practices in internal consistency reporting, analysis, and interpretation.
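The "alpha-as-data" perspective above treats reliability as a property of a particular sample's responses, so researchers are encouraged to recompute Cronbach's alpha for their own data and for subgroups within it. As a minimal illustration, the sketch below implements the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), from scratch; the scores used are invented for demonstration only and do not come from the studies described.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    Uses the classic formula: (k / (k - 1)) * (1 - sum of item
    variances / variance of total scores), with sample variances.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(scores)
```

In the subgroup spirit of Study 2, the same function could be applied separately to the rows belonging to each demographic subgroup (e.g., `cronbach_alpha(scores[group_mask])`) to check whether reliability estimates are comparable across groups before interpreting group comparisons on the total score.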
The authors discuss the highlights of the 1st annual Diversity Challenge held October 11–12, 2001, at Boston College, Boston, MA. The Challenge's general focus was preparing educators to cope with the resistances encountered when they teach about race and ethnic culture. This introduction (a) provides an overview of the proceedings, (b) summarizes themes of presentations and articles selected, and (c) offers recommendations for subsequent events.