2018
DOI: 10.1016/j.scijus.2017.12.005

Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors

Abstract: When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value…

Cited by 31 publications (41 citation statements)
References 51 publications
“…However, the size of test data was not taken into consideration and only three sets of sample size of the training data were considered. Similarly, [17] used simulated scores to explore the effectiveness of different calibration methods in shrinking LR output and tested the generalizability using data from real cases. However, this work only compared the effectiveness of different calibration methods using scores that follow Gaussian distributions with equal variance and did not take skewness into consideration.…”
Section: Calibration Methods
confidence: 99%
“…The current study uses simulated scores from skewed distributions, derived from real data, to investigate the effectiveness of four calibration methods (i.e. logistic regression [10], empirical lower and upper bound (ELUB) [18], Bayesian model [12] and regularised logistic regression [17]) at dealing with issues relating to sample size and sampling variability. With the exception of logistic regression, the calibration methods tested all incorporate uncertainty into the LR itself, such that LRs will be closer to 1 when uncertainty is high (i.e.…”
Section: The Current Study
confidence: 99%
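The shrinkage behaviour described in the excerpt above — LRs pulled toward the neutral value of 1 when uncertainty is high — can be illustrated with regularised logistic regression. The following is a minimal sketch, not code from any of the cited papers: with equal numbers of mated and non-mated training scores, the prior log odds are zero, so the fitted log odds for an evidence score equal the log-LR directly, and a stronger L2 penalty shrinks that log-LR toward 0 (LR toward 1). All names and parameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
mated = rng.normal(1.5, 1.0, 20)       # small training sample of mated scores
non_mated = rng.normal(0.0, 1.0, 20)   # equal-sized non-mated sample
X = np.concatenate([mated, non_mated]).reshape(-1, 1)
y = np.concatenate([np.ones(20), np.zeros(20)])

evidence = [[3.0]]  # a hypothetical high similarity score

# Essentially unregularised fit vs. a heavily penalised (shrunk) fit.
weak = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
strong = LogisticRegression(C=0.1, max_iter=1000).fit(X, y)

# Equal class sizes => prior log odds = 0, so decision_function
# (the fitted log posterior odds) is the log-LR itself.
llr_weak = weak.decision_function(evidence)[0]
llr_strong = strong.decision_function(evidence)[0]
```

Here `llr_strong` lies closer to 0 than `llr_weak`, i.e. the regularised model reports a likelihood ratio closer to 1 for the same evidence score.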
“…Since we do not have at our disposal a fully exhaustive database of mobile devices/cameras from different manufacturers, we opted for a simpler solution and transformed the similarity scores into LRs using regularized logistic regression with a uniform prior regularization [40]. The process of calibration using linear logistic regression can be described in the following way: (1) iterative use of leave-one-out cross-validation for both mated and non-mated scores, where each of the left-out scores “plays” the role of the evidence; (2) one-to-one mapping from the probability to the log-odds domain is performed using a logit function [37]; (3) calibrated LRs are calculated iteratively for each evidence score.…”
Section: Experimental Protocol
confidence: 99%
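The leave-one-out calibration protocol quoted above can be sketched in Python. This is an illustrative reconstruction under stated assumptions, not the cited paper's actual code: each score is held out in turn, a logistic regression is fitted to the remaining scores, and the log-LR for the held-out "evidence" score is the fitted log posterior odds minus the training-set log prior odds. The function name and the unregularised setting (`C=1e6`) are assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_llrs_loo(mated, non_mated):
    """Leave-one-out calibration of similarity scores to log-LRs
    via linear logistic regression (illustrative sketch)."""
    scores = np.concatenate([mated, non_mated])
    labels = np.concatenate([np.ones(len(mated)), np.zeros(len(non_mated))])
    llrs = np.empty(len(scores))
    for i in range(len(scores)):
        # The left-out score "plays" the role of the evidence.
        mask = np.ones(len(scores), dtype=bool)
        mask[i] = False
        clf = LogisticRegression(C=1e6, max_iter=1000).fit(
            scores[mask].reshape(-1, 1), labels[mask])
        # Fitted log posterior odds minus log prior odds = log-LR.
        n1 = labels[mask].sum()
        n0 = len(labels[mask]) - n1
        log_prior_odds = np.log(n1 / n0)
        log_posterior_odds = clf.decision_function([[scores[i]]])[0]
        llrs[i] = log_posterior_odds - log_prior_odds
    return llrs
```

In use, mated scores should on average receive positive log-LRs and non-mated scores negative ones; substituting a regularised fit (smaller `C`) would shrink the outputs toward 0, as in the regularised variant the excerpts discuss.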
“…Although the ideas of using penalized (or regularized) logistic regression and kernel density estimation have been previously explored in the context of score-based forensic analysis in applications such as glass (Morrison and Poh, 2018) and voice comparison (Morrison, 2011a), to our knowledge, the particular methods explored in this paper and the R Shiny tool we provide are relatively novel to the forensic sciences.…”
Section: Introduction
confidence: 99%