This study examined whether participants were sensitive to variations in the quality of an experiment discussed by an expert witness and whether they used heuristic cues when evaluating the expert evidence. In the context of a hostile work environment case, different versions of the expert testimony varied the presence of heuristic cues (i.e., whether the expert's research was generally accepted or ecologically valid) and evidence quality (i.e., the construct validity of the expert's research). Men who heard expert testimony were more likely to find that the plaintiff's workplace was hostile than were men who did not hear the expert testimony; expert testimony did not influence women's liability judgments. Heuristic cues influenced participants' evaluations of the validity of the expert testimony, but evidence quality did not. Cross-examination did not increase juror sensitivity to evidence quality. Implications for science in the legal system are discussed.
This experiment examined whether a photoarray administrator's knowledge of a suspect's identity increased false identification rates. Fifty participant-administrators (PAs) presented 50 participant-witnesses (PWs) with two perpetrator-absent photoarrays following a live staged crime involving two perpetrators. For one photoarray per trial, the experimenter revealed the suspect's identity to the PA. Each PA presented the photoarrays sequentially or simultaneously, in the presence or absence of an observer. When the observer was present, PA knowledge of the suspect's identity had a biasing effect in sequential photoarrays only. This pattern did not emerge when the observer was absent. The experimental manipulations did not affect PAs' and PWs' ratings of photoarray fairness or PWs' ratings of pressure to make an identification. These data suggest that only administrators who are blind to the suspect's identity should present sequential photoarrays.

The vagaries of eyewitness identification are well-known. . . . A major factor contributing to the high incidence of miscarriage of justice from mistaken identification has been the degree of suggestion inherent in the manner in which the prosecution presents the suspect to witnesses for pretrial identification. . . . Suggestion can be created intentionally or unintentionally in many subtle ways.
(Supreme Court Justice William Brennan, writing for the majority in United States v. Wade, 1967)
Scientifically trained and untrained judges read descriptions of an expert's research in which the peer review status and internal validity were manipulated. Seventeen percent of the judges said they would admit the expert evidence, irrespective of its internal validity. Publication in a peer-reviewed journal also had no effect on judges' decisions. Training interacted with the internal validity manipulation. Scientifically trained judges rated valid evidence more positively than did untrained judges. Untrained judges rated a study with a confound more positively than did trained judges. Training did not affect judges' evaluations of studies with a missing control group or potential experimenter bias. Admissibility decisions were correlated with judges' perceptions of the study's validity, jurors' ability to evaluate scientific evidence, and the effectiveness of cross-examination and opposing experts in highlighting flaws in scientific methodology.
This study assessed the impact of some complex question forms frequently used by attorneys who examine and cross-examine witnesses in the courtroom. Fifteen males and 15 females from each of four student populations (kindergarten, fourth grade, ninth grade, and college) viewed a videotaped incident and then responded to questions about the incident. Half the questions were asked in "lawyerese" (i.e., using complex question forms); the remaining half asked for the same information using simply phrased question forms of the same length. Lawyerese confused children, adolescents, and young adults alike. Questions that included multiple parts with mutually exclusive responses were the most difficult to answer; those that included negatives, double negatives, or difficult vocabulary also posed significant problems. Results suggest that complex question forms impede truth-seeking and should be prohibited in court.

Lawyers are students of language by profession, and they exercise their power in court by manipulating the thoughts and opinions of others through the skillful use of language (Philbrick, cited in O'Barr, 1982). As one forensic linguist noted, "The most powerful weapon an attorney has in the war of words he wages with the witness is manipulation of question form, and it is a tool frequently referred to in articles and manuals on deposition and trial practice" (Walker, 1987, p. 64).
This study examined the ability of jury-eligible community members (N = 248) to detect internal validity threats in psychological science presented during a trial. Participants read a case summary in which an expert testified about a study that varied in internal validity (valid, missing control group, confound, and experimenter bias) and ecological validity (high, low). Ratings of expert evidence quality and expert credibility were higher for the valid versus missing control group versions only. Internal validity did not influence verdict or ratings of plaintiff credibility, and no differences emerged as a function of ecological validity. Expert evidence quality, expert credibility, and plaintiff credibility were positively correlated with verdict. Implications for the scientific reasoning literature and for trials containing psychological science are discussed.

Keywords: scientific reasoning; internal validity; expert testimony; juror decision-making

Recent advances in DNA, blood type, and fingerprint testing have increased the likelihood that average citizens will confront complex scientific evidence when serving as jurors in civil and criminal cases. Nearly two-thirds (65%) of state court judges responding to a national survey indicated that they had some experience with DNA evidence in their courtrooms (Gatowski, Dobbin, Richardson, Ginsburg, Merlino, & Dahir, 2001). The role of psychological science in the legal system has burgeoned recently as well. Social or behavioral scientists constituted nearly one-quarter of all scientists in U.S. criminal appellate cases involving expert testimony from 1988 to 1998 (Groscup, Penrod, Studebaker, Huss, & O'Neil, 2002).

Research examining laypeople's scientific reasoning skills has enjoyed renewed interest among social scientists and legal scholars due to several recent U.S. Supreme Court rulings on the admissibility of expert evidence (Daubert v. Merrell Dow Pharmaceuticals, Inc., 1993; General Electric Co. v.
Joiner, 1997; Kumho Tire Co. v. Carmichael, 1999). Daubert and its progeny have entrusted judges with a gatekeeping role in which they should base their admissibility decisions on the relevance and reliability of the expert evidence. Despite the Court's confidence in judges' ability to fulfill their gatekeeping role, many judges lack the scientific literacy required for a Daubert analysis (Gatowski et al., 2001) and have difficulty identifying methodologically flawed expert testimony (Kovera & McAuliff, 2000a). Attorneys also struggle to evaluate expert evidence effectively (Kovera & McAuliff, 2000b). As a result, their ability to make and successfully argue motions to exclude junk science, cross-examine an expert, or consult their own expert may be limited (Kovera, Russano, & McAuliff, 2002).

Based on these limitations, it is likely that at least some invalid research will reach laypeople serving as jurors in court. Can they recognize variations in the validity of psychological science? We examined this research question by presenting jury-eligible community members...
This study examined whether need for cognition (NC) moderated jurors' sensitivity to methodological flaws in expert evidence. Jurors read a sexual harassment trial summary in which the plaintiff's expert presented a study that varied in ecological validity, general acceptance, and internal validity. High NC jurors found the defendant liable more often and evaluated expert evidence quality more favorably when the expert's study was internally valid vs. missing a control group; low NC jurors did not. Ecological validity and general acceptance did not affect jurors' judgments. Ratings of expert and plaintiff credibility, plaintiff trustworthiness, and expert evidence quality were positively correlated with verdict. Theoretical implications for the scientific reasoning literature and practical implications for trials containing psychological science are discussed.

We thank [. . .] Mitchel for granting us access to the jury pool. We also thank dissertation committee members Brian Cutler and Ronald Fisher for their helpful suggestions.
This article describes an active-learning approach to teaching an undergraduate psychology and law course specifically designed to improve critical-thinking skills. After reviewing the concepts of active learning and critical thinking, we describe the course and present data and observations regarding its success. Finally, we discuss strategies for handling problems that may arise when teaching a psychology and law course using this approach.
This study investigated potential differences between expert and lay knowledge of factors influencing witness suggestibility. Expert psychologists (N = 58), jurors (N = 157), and jury-eligible undergraduates (N = 220) estimated the effects of misleading information on witness accuracy for three age groups in various conditions. Respondents possessed similar knowledge of age-related trends in suggestibility, the positive effects of a pre-misinformation warning, and the negative influence of longer delays between the event/misinformation and event/final memory test. Compared to experts, laypeople underestimated the size of suggestibility differences between age groups and lacked knowledge about how event detail centrality, witness participation, and source prestige can increase witness suggestibility. Laypeople rated themselves as being largely unfamiliar with witness suggestibility research and thought that expert testimony would be beneficial. These data shed light on the potential helpfulness of expert testimony in cases involving witness suggestibility.