This meta-analysis investigates linguistic cues to deception and whether these cues can be detected with computer programs. We integrated operational definitions for 79 cues from 44 studies in which software had been used to identify linguistic deception cues. These cues were allocated to six research questions. As expected, the meta-analyses demonstrated that, relative to truth-tellers, liars experienced greater cognitive load, expressed more negative emotions, distanced themselves more from events, used fewer sensory-perceptual words, and referred less often to cognitive processes. However, liars were not more uncertain than truth-tellers. These effects were moderated by event type, involvement, emotional valence, intensity of interaction, motivation, and other factors. Although the overall effect size was small, theory-driven predictions for certain cues received support. These findings not only further our knowledge about the usefulness of linguistic cues for detecting deception with computers in applied settings but also elucidate the relationship between language and deception.
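The kind of automated cue extraction these studies rely on is typically dictionary-based word counting (in the spirit of tools such as LIWC). A minimal sketch is shown below; the category word lists are invented placeholders for illustration, not the dictionaries used in the reviewed studies.

```python
# Illustrative dictionary-based linguistic cue counting. The word lists are
# invented placeholders, not the actual dictionaries used in the studies.
import re

CUE_CATEGORIES = {
    "negative_emotion": {"angry", "sad", "afraid", "hate"},
    "cognitive_process": {"think", "because", "know", "reason"},
    "sensory_perceptual": {"see", "hear", "touch", "felt"},
}

def cue_rates(text: str) -> dict:
    """Return each cue category's share of total word tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {
        cat: sum(t in words for t in tokens) / total
        for cat, words in CUE_CATEGORIES.items()
    }

rates = cue_rates("I think I felt sad because I know the reason.")
```

In practice the per-category rates of deceptive and truthful statements would then be compared statistically, which is what the effect sizes aggregated in the meta-analysis summarize.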
It is well established that own-race faces are recognized more accurately than cross-race faces. However, there are mixed results regarding the developmental consistency of the cross-race effect. White and Black kindergarten children, 3rd graders, and young adults viewed a Black and a White target individual. One day later, recognition memory for each target was tested with a 6-person lineup. The interaction of race of participant by race of target face on Ag scores was significant, demonstrating an overall cross-race effect. The 2nd-order interaction with age did not approach significance; for each age group, own-race identification was more accurate than cross-race identification. The age consistency of the cross-race effect, in light of the significant main effect of age, suggests quantitative but not qualitative differences in face memory processing at various ages. For children, as well as adults, own-race faces are recognized more accurately than cross-race faces.
Situational factors, in the form of interrogation tactics, have been reported to unduly influence innocent suspects to confess. This study assessed jurors' perceptions of these factors and tested whether expert witness testimony on confessions informs jury decision making. In Study 1, jurors rated interrogation tactics on their level of coerciveness and the likelihood that each would elicit true and false confessions. Most jurors perceived interrogation tactics to be coercive and likely to elicit confessions from guilty, but not from innocent, suspects. This result motivated Study 2, in which an actual case involving a disputed confession was used to assess the influence of expert testimony on jurors' perceptions and evaluations of interrogations and confession evidence. The results revealed an important influence of expert testimony on mock jurors' decisions.
A current focus in deception research is on developing cognitive-load approaches (CLAs) to detect deception. The aim is to improve lie detection with evidence-based and ecologically valid procedures. Although these approaches show great potential, research on cognitive processes or mechanisms explaining how they operate is lacking. Potential mechanisms underlying the most popular techniques advocated for field application are highlighted. Cognitive scientists are encouraged to conduct basic research that qualifies the ‘cognitive’ in these new approaches.
Previous deception research on repeated interviews found that liars are not less consistent than truth tellers, presumably because liars use a "repeat strategy" to be consistent across interviews. The goal of this study was to design an interview procedure to overcome this strategy. Innocent participants (truth tellers) and guilty participants (liars) had to convince an interviewer that they had performed several innocent activities rather than committing a mock crime. The interview focused on the innocent activities (alibi), contained specific central and peripheral questions, and was repeated after 1 week without forewarning. Cognitive load was increased by asking participants to reply quickly. The liars' answers to both central and peripheral questions were significantly less accurate, less consistent, and more evasive than the truth tellers' answers. Logistic regression analyses yielded classification rates ranging from around 70% (with consistency as the predictor variable) and 85% (with evasive answers as the predictor variable) to over 90% (with an improved measure of consistency that incorporated evasive answers as the predictor variable, as well as with response accuracy as the predictor variable). These classification rates were higher than the interviewers' accuracy rate (54%).
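A classification analysis of the kind reported above can be sketched as a logistic regression that predicts liar/truth-teller status from per-participant scores. The toy data below (consistency score and number of evasive answers per participant) are invented for illustration and are not the study's data; the model is fit with plain gradient descent to keep the sketch self-contained.

```python
# Toy sketch of a logistic-regression classification analysis like the one
# reported above. The (consistency, evasive_answers) scores are invented.
import math

# Each entry: ((consistency_score, evasive_answers), label); label 1 = liar.
data = [
    ((0.90, 0), 0), ((0.80, 1), 0), ((0.85, 0), 0), ((0.75, 1), 0),
    ((0.50, 4), 1), ((0.40, 5), 1), ((0.55, 3), 1), ((0.45, 4), 1),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights and bias by stochastic gradient descent on the log-loss.
w1, w2, b = 0.0, 0.0, 0.0
for _ in range(5000):
    for (x1, x2), y in data:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        w1 -= 0.1 * err * x1
        w2 -= 0.1 * err * x2
        b -= 0.1 * err

def predict(x1: float, x2: float) -> int:
    """Classify a participant: 1 = liar, 0 = truth teller."""
    return int(sigmoid(w1 * x1 + w2 * x2 + b) >= 0.5)

# In-sample classification rate, analogous to the percentages reported.
accuracy = sum(predict(*x) == y for x, y in data) / len(data)
```

On these cleanly separated toy scores the in-sample rate is perfect; the study's 70–90% rates reflect real, noisier data and cross-validated or in-sample fits as described in the original analyses.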
This meta-analysis synthesizes research on the interrater reliability of Criteria-Based Content Analysis (CBCA). CBCA is an important component of Statement Validity Assessment (SVA), a forensic procedure used in many countries to evaluate whether statements (e.g., of sexual abuse) are based on experienced or fabricated events. CBCA contains 19 verbal content criteria, which are frequently adapted for research on detecting deception. A total of k = 82 hypothesis tests revealed acceptable interrater reliabilities for most CBCA criteria, as measured with various indices (except Cohen's kappa). However, results were largely heterogeneous, necessitating moderator analyses. Blocking analyses and meta-regression analyses on Pearson's r yielded significant moderators for research paradigm, intensity of rater training, type of rating scale used, and the frequency of occurrence (base rates) for some CBCA criteria. The use of CBCA summary scores is discouraged. Implications for research versus field settings, for future research, and for forensic practice in the United States and Europe are discussed.
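For readers unfamiliar with the reliability indices aggregated above, Cohen's kappa corrects raw two-rater agreement for the agreement expected by chance from each rater's marginal rates. A minimal sketch for one binary CBCA criterion (present/absent) follows; the ratings are invented for illustration.

```python
# Minimal Cohen's kappa for two raters scoring one binary criterion
# (1 = present, 0 = absent). The rating vectors below are invented.
def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    # Observed proportion of cases on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal "present" rate.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1],
                     [1, 1, 0, 1, 1, 0, 1, 0])
```

Because kappa discounts chance agreement, it is typically lower than raw percent agreement, which is one reason it can fall below conventional thresholds even when other indices look acceptable.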
For security and justice professionals (e.g., police officers, lawyers, judges), the thousands of peer-reviewed articles on nonverbal communication represent important sources of knowledge. However, despite the scope of the scientific work carried out on this subject, professionals can turn to programs, methods, and approaches that fail to reflect the state of science. The objective of this article is to examine (i) concepts of nonverbal communication conveyed by these programs, methods, and approaches, but also (ii) the consequences of their use (e.g., on the life or liberty of individuals). To achieve this objective, we describe the scope of scientific research on nonverbal communication. A program (SPOT; Screening of Passengers by Observation Techniques), a method (the BAI; Behavior Analysis Interview), and an approach (synergology) that each run counter to the state of science are examined. Finally, we outline five hypotheses to explain why some organizations in the fields of security and justice are turning to pseudoscience and pseudoscientific techniques. We conclude the article by inviting these organizations to work with the international community of scholars who have scientific expertise in nonverbal communication and lie (and truth) detection to implement evidence-based practices.