This article describes some of the issues affecting measures that are translated and/or adapted from an original language and culture to a new one. It addresses steps to ensure (a) that the test continues to measure the same psychological characteristics, (b) that the test content is the same, and (c) that the research procedures needed to document that it effectively meets this goal are available. Specifically, the notions of test validation, fairness, and norms are addressed. It is argued that such adaptations may also be necessary when assessing members of subpopulations within U.S. culture.
Numerous changes in higher education (e.g., the demand for accountability, threats to tenure, new modes of instruction) and discontent with narrow definitions of scholarship have created the need for a broader and more precise definition of the nature of scholarship in psychology. The 5-part definition that we propose includes (a) original research (creation of knowledge), (b) integration of knowledge (synthesis and reorganization), (c) application of knowledge, (d) the scholarship of pedagogy, and (e) the scholarship of teaching in psychology. Scholarly activities require high levels of discipline-specific expertise, are innovative, can be replicated, are documented, can be subject to peer review, and have significance. This broader conceptualization of scholarship will benefit all stakeholders in higher education: students, faculty, colleges and universities, the community, and society at large.
This study investigated both an applicant pool and its resulting class of new hires in an attempt to clarify a number of empirical questions concerning recruiting source effectiveness. A pre-established database of applicants and hires for the job of life insurance agent in a large insurance company was analyzed for recruiting activity. Differences in applicant quality and new hire survival were found in favor of the informal recruiting sources. A second measure of hire success, new business commission credits, failed to show differences across recruiting sources. The informal recruiting sources yielded significantly higher selection ratios than did formal sources for all groups. Examination of recruiting source use showed significant group differences, with females and blacks using the formal recruiting sources more frequently than males, non-minorities, and Hispanics. While the informal recruiting sources yielded higher quality applicants and more successful hires for all groups, this research cautions that the implementation of revised recruiting policies must be carefully monitored for adverse effects on protected groups.
In this article, we report the results of a two-part investigation of psychological assessments by psychologists in legal contexts. The first part involves a systematic review of the 364 psychological assessment tools psychologists report having used in legal cases across 22 surveys of experienced forensic mental health practitioners, focusing on legal standards and scientific and psychometric theory. The second part is a legal analysis of admissibility challenges with regard to psychological assessments. Results from the first part reveal that, consistent with their roots in psychological science, nearly all of the assessment tools used by psychologists and offered as expert evidence in legal settings have been subjected to empirical testing (90%). However, we were able to clearly identify only about 67% as generally accepted in the field and only about 40% have generally favorable reviews of their psychometric and technical properties in authorities such as the Mental Measurements Yearbook. Furthermore, there is a weak relationship between general acceptance and favorability of tools’ psychometric properties. Results from the second part show that legal challenges to the admission of this evidence are infrequent: Legal challenges to the assessment evidence for any reason occurred in only 5.1% of cases in the sample (a little more than half of these involved challenges to validity). When challenges were raised, they succeeded only about a third of the time. Challenges to the most scientifically suspect tools are almost nonexistent. Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law.