Visual expertise in fingerprint examiners was addressed in one behavioral and one electrophysiological experiment. In an X-AB matching task with fingerprint fragments, experts demonstrated better overall performance, immunity to longer delays, and evidence of configural processing when fragments were presented in noise. Novices were affected by longer delays and showed no evidence of configural processing. In Experiment 2, upright and inverted faces and fingerprints were shown to experts and novices. The N170 EEG component was reliably delayed over the right parietal/temporal regions when faces were inverted, replicating an effect that in the literature has been interpreted as a signature of configural processing. Inverted fingerprints showed a similar delay of the N170 over the right parietal/temporal region, but only in experts, providing converging evidence for configural processing when experts view fingerprints. Together the results of both experiments point to the role of configural processing in the development of visual expertise, possibly supported by idiosyncratic relational information among fingerprint features.
Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after fingerprint examinations. The gaze data provide temporal sequences that the machine translation algorithm uses to estimate the regions that subjects treat as corresponding. Our results show that experts and novices have similar surface behavior, such as the number of fixations made or the duration of fixations. However, the approach, when applied to data from experts, identifies more corresponding areas between the two prints. The fixations associated with clusters that map with high probability to corresponding locations on the other print are likely to have greater utility in a visual matching task. These techniques address a fundamental problem in eye-tracking research with perceptual matching tasks: given that the eyes always point somewhere, which fixations are the most informative and therefore most likely to be relevant to the comparison task?
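The alignment idea above can be sketched with an IBM Model 1-style EM estimator, the simplest standard machine-translation alignment model. This is an illustrative assumption, not the study's implementation: each trial's fixation sequence on print A is treated as a "sentence" and the paired sequence on print B as its "translation", and the hypothetical region labels (`a1`, `b1`, ...) stand in for discretized gaze clusters.

```python
# Illustrative sketch (not the authors' code): IBM Model 1-style EM,
# estimating t(b|a) = probability that region b on print B corresponds
# to region a on print A, from co-occurrence across paired sequences.
from collections import defaultdict

def model1_em(pairs, iterations=10):
    """Estimate correspondence probabilities t(b|a) from paired fixation sequences."""
    a_vocab = {a for seq_a, _ in pairs for a in seq_a}
    b_vocab = {b for _, seq_b in pairs for b in seq_b}
    # Uniform initialization over all region pairs.
    t = {(b, a): 1.0 / len(b_vocab) for b in b_vocab for a in a_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for seq_a, seq_b in pairs:
            for b in seq_b:
                # Expectation: distribute b's alignment mass over candidate a's.
                z = sum(t[(b, a)] for a in seq_a)
                for a in seq_a:
                    c = t[(b, a)] / z
                    count[(b, a)] += c
                    total[a] += c
        # Maximization: renormalize expected counts.
        t = {(b, a): count[(b, a)] / total[a] for (b, a) in count}
    return t

# Toy trials: region a1 consistently co-occurs with b1, and a2 with b2,
# so EM should converge on those correspondences.
trials = [(["a1", "a2"], ["b1", "b2"]),
          (["a1"], ["b1"]),
          (["a2"], ["b2"])]
t = model1_em(trials)
```

On this toy input, the estimator concentrates probability on the consistently co-occurring pairs, mirroring how the method identifies high-probability corresponding areas from expert gaze data.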
Forensic evidence often involves an evaluation of whether two impressions were made by the same source, such as whether a fingerprint from a crime scene has detail in agreement with an impression taken from a suspect. Human experts currently outperform computer-based comparison systems, but the strength of the evidence exemplified by the observed detail in agreement must be evaluated against the possibility that some other individual may have created the crime scene impression. Therefore, the strongest evidence comes from features in agreement that are also not shared with other impressions from other individuals. We characterize the nature of human expertise by applying two extant metrics to the images used in a fingerprint recognition task and use eye gaze data from experts to both tune and validate the models. The Attention via Information Maximization (AIM) model (Bruce & Tsotsos, 2009) quantifies the rarity of regions in the fingerprints to determine diagnosticity for purposes of excluding alternative sources. The CoVar model (Karklin & Lewicki, 2009) captures relationships between low-level features, mimicking properties of the early visual system. Both models produced classification and generalization performance in the 75%-80% range when classifying where experts tend to look. A validation study using regions identified by the AIM model as diagnostic demonstrates that human experts perform better when given regions of high diagnosticity. The computational nature of the metrics may help guard against wrongful convictions, as well as provide a quantitative measure of the strength of evidence in casework.
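The rarity metric at the heart of AIM can be illustrated with a self-information map: regions whose local features are improbable across the image score as more diagnostic. The sketch below is a deliberately simplified stand-in, using raw intensity histograms rather than the learned sparse basis of the published AIM model, and the synthetic image is an assumption for demonstration only.

```python
# Simplified sketch of the AIM idea (not the published implementation):
# score each location by the self-information, -log p(feature), of its
# local feature, so rare features receive high "diagnosticity" scores.
import numpy as np

def self_information_map(image, bins=16):
    """Score each pixel by the rarity of its intensity within this image."""
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    # Map each pixel to its histogram bin, then to that bin's probability.
    idx = np.clip(np.digitize(image, edges[1:-1]), 0, bins - 1)
    with np.errstate(divide="ignore"):
        info = -np.log(p[idx])  # rare intensities -> high self-information
    return info

rng = np.random.default_rng(0)
img = np.clip(np.full((32, 32), 0.5) + rng.normal(0, 0.01, (32, 32)), 0, 1)
img[0, 0] = 0.95  # one rare, high-intensity outlier (hypothetical "minutia")
sal = self_information_map(img)
```

The outlier pixel receives a far higher score than the common background, which is the sense in which rare detail in agreement carries more weight for excluding alternative sources. The full AIM model applies the same principle to responses of learned filters rather than raw intensities.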
During fingerprint comparisons, a latent print examiner visually compares two impressions to determine whether or not they originated from the same source. They consider the amount of perceived detail in agreement or disagreement and accumulate evidence toward same-source and different-source propositions. This evidence is then mapped to one of three conclusions: Identification, Inconclusive, or Exclusion. A limitation of this 3-conclusion scale is that it can lose information when translating the internal strength-of-evidence value to one of only three possible conclusions. An alternative scale with two additional values, support for different sources and support for common sources, has been proposed by the Friction Ridge Subcommittee of OSAC. The expanded scale could lead to more investigative leads but could produce complex trade-offs in both correct and erroneous identifications. The aim of the present study was to determine the consequences of a shift to expanded conclusion scales in latent print comparisons. Latent print examiners each completed 60 comparisons using one of the two scales, and the resulting data were modeled using signal detection theory to measure whether the expanded scale changed the threshold for an “Identification” conclusion. When using the expanded scale, examiners became more risk-averse when making “Identification” decisions and tended to transition both the weaker Identification and stronger Inconclusive responses to the “Support for Common Source” statement. The results demonstrate the utility of an expanded conclusion scale and also provide guidance for the adoption of these or similar scales.
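The threshold analysis above rests on the equal-variance signal detection model, in which sensitivity (d′) and the decision criterion (c) are computed from hit and false-alarm rates. The sketch below shows the standard computation; the rates are hypothetical illustrations, not the study's data.

```python
# Hedged sketch of the standard equal-variance SDT computation:
# d' = z(H) - z(F) measures sensitivity; c = -(z(H) + z(F)) / 2 measures
# the decision criterion (higher c = more conservative responding).
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Return (d', c) from hit and false-alarm rates (both strictly in (0, 1))."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2
    return d, c

# Hypothetical rates: a stricter "Identification" threshold lowers both
# hits and false alarms while sensitivity stays roughly constant.
d1, c1 = dprime_criterion(0.80, 0.10)  # traditional scale (illustrative)
d2, c2 = dprime_criterion(0.69, 0.05)  # expanded scale, stricter threshold (illustrative)
```

A shift like the one reported, with examiners becoming more risk-averse on “Identification”, shows up in this framework as an increase in c at similar d′, rather than a change in the examiners' underlying discriminability.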
In pattern comparison disciplines such as fingerprints, footwear, and toolmarks, the results of a comparison are communicated by examiners in the form of categorical conclusions such as Identification or Exclusion. These statements have been criticized as requiring knowledge of prior probabilities by the examiners and as being overinterpreted by laypersons. Alternative statements based on strength-of-support language have been proposed. The current study compares traditional conclusion scales against strength-of-support scales to determine how these new statements might be used by examiners in casework. Each participant completed 60 comparisons within their discipline, designed to approximate casework conditions, using either a traditional or a strength-of-support conclusion scale. The scale used on each trial was randomly assigned, and participants knew the scale for that trial as they began the comparison. Fingerprint examiners were much less likely to use Extremely Strong Support for Common Source than Identification, and toolmark examiners showed the same pattern, whereas footwear examiners treated the traditional and strength-of-support scales similarly. A separate group of fingerprint examiners used Identification less often when an expanded scale was available. The results demonstrate that expanded scales may result in the highest conclusion category being used less often by examiners when other alternatives are possible, and the term “extremely strong support” may introduce risk aversion on the part of examiners.