2015
DOI: 10.1016/j.scijus.2015.05.003
Sampling variability in forensic likelihood-ratio computation: A simulation study

Cited by 14 publications (7 citation statements)
References 37 publications
“…Therefore, the results should in some ways be treated as the 'base-case scenario' for variability in system performance as a function of sampling; even wider variability in system performance would be expected where poor-quality recordings are used and independent samples are drawn from a much larger database. Meanwhile, the 100-times replication, similar to previous studies (Ali et al., 2015; Morrison & Poh, 2018), is an arbitrary choice in the current thesis. The major limitation of using EER for evaluating LR-based FVC systems is that EER treats LLRs only categorically and does not take the magnitude of the evidence into consideration, i.e., what is considered an "error" is not judged against a threshold of LLR = 0; therefore, a system that consistently yields high contrary-to-fact LLRs could have the same system validity as one that produces low contrary-to-fact LLRs.…”
Section: Discussion
confidence: 99%
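The EER limitation quoted above can be made concrete with a short sketch (synthetic Gaussian LLRs, not the paper's data): scaling every LLR by 10 makes every contrary-to-fact LLR ten times stronger, yet the EER is unchanged, because EER depends only on the rank order of the scores, not on their magnitude.

```python
import numpy as np

def eer(llr_same, llr_diff):
    """Approximate equal error rate: sweep a threshold over the pooled
    scores and return the point where miss and false-alarm rates meet."""
    ts = np.sort(np.concatenate([llr_same, llr_diff]))
    rates = [(np.mean(llr_same < t), np.mean(llr_diff >= t)) for t in ts]
    miss, fa = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (miss + fa) / 2

rng = np.random.default_rng(0)
same = rng.normal(2.0, 1.0, 1000)    # synthetic same-source LLRs
diff = rng.normal(-2.0, 1.0, 1000)   # synthetic different-source LLRs

# Multiplying all LLRs by 10 inflates the magnitude of every
# contrary-to-fact LLR tenfold but preserves the rank order,
# so the EER of the two systems is identical.
mild = eer(same, diff)
extreme = eer(10 * same, 10 * diff)
```

Both systems report the same EER even though one consistently asserts far stronger contrary-to-fact evidence, which is exactly the criticism in the excerpt.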
“…Under real-case scenarios, 30 to 40 training and reference speakers are likely to be sampled from a relevant population (e.g., 35 speakers in Rose, 2013b), and the relevant population itself is, in most cases, considerably larger than the number of training and reference speakers sampled. Although it is possible to sample more speakers from the relevant population, empirical studies (e.g., Ali et al., 2015; Morrison & Poh, 2018) and the current chapter have shown that the effect of sampling variability on both overall performance and individual behaviour is inevitable. It is then a practical consideration for casework: would we obtain the same results for this particular pair of speakers if the experiment were replicated?…”
Section: Three-feature Systems
confidence: 91%
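The sampling-variability point can be illustrated with a hedged simulation (synthetic Gaussian scores and an arbitrary score model, not any cited FVC system): replicate a small evaluation 100 times, each time sampling scores from only 30 speakers per class, and observe how much the EER moves purely through sampling.

```python
import numpy as np

def eer(same, diff):
    """Approximate EER via a threshold sweep over the pooled scores."""
    ts = np.sort(np.concatenate([same, diff]))
    rates = [(np.mean(same < t), np.mean(diff >= t)) for t in ts]
    miss, fa = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (miss + fa) / 2

rng = np.random.default_rng(1)
eers = []
for _ in range(100):                 # 100 replications, as in the studies cited
    same = rng.normal(2.0, 1.0, 30)  # scores from 30 sampled speakers
    diff = rng.normal(-2.0, 1.0, 30)
    eers.append(eer(same, diff))

spread = max(eers) - min(eers)       # spread attributable to sampling alone
```

Even though every replication draws from the same underlying population, the reported EER differs from run to run, which is the practical worry raised in the excerpt.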
“…logistic regression [10], pool adjacent violators [11], Bayesian model [12], scoring method [13]–[15]) have been developed and their performance has been compared. For example, [16] explored the effectiveness of three calibration methods (i.e., kernel density estimation, logistic regression, pool adjacent violators) in dealing with sampling variability across three sizes of training-score set.…”
Section: Calibration Methods
confidence: 99%
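Of the calibration methods named in the excerpt, logistic-regression calibration is the most compact to sketch. The following is an illustrative implementation under stated assumptions (synthetic raw scores, a plain gradient-descent fit rather than any cited paper's solver): an affine map a·s + b is fitted so that its sigmoid models the probability of same-source origin, after which a·s + b itself serves as the calibrated LLR.

```python
import numpy as np

def fit_logistic_calibration(scores_same, scores_diff, n_iter=500, lr=0.1):
    """Fit a, b so that sigmoid(a*s + b) models P(same source | score s).
    The affine transform a*s + b is then the calibrated LLR (up to the
    prior log-odds of the training set, zero here for balanced classes)."""
    x = np.concatenate([scores_same, scores_diff])
    y = np.concatenate([np.ones_like(scores_same), np.zeros_like(scores_diff)])
    a, b = 1.0, 0.0
    for _ in range(n_iter):                        # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        a -= lr * np.mean((p - y) * x)             # gradient of log-loss wrt a
        b -= lr * np.mean(p - y)                   # gradient of log-loss wrt b
    return a, b

rng = np.random.default_rng(2)
raw_same = rng.normal(1.0, 0.5, 200)    # synthetic uncalibrated scores
raw_diff = rng.normal(-1.0, 0.5, 200)
a, b = fit_logistic_calibration(raw_same, raw_diff)

def llr(score):
    """Calibrated log-likelihood ratio for a new raw score."""
    return a * score + b
```

In practice a regularised solver would be used, but the essential design point survives: calibration is a monotone affine rescaling, so it fixes LLR magnitudes without changing the rank order of the scores.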
“…The most common forensic-science approach to this problem is to estimate a likelihood ratio (LR), i.e., the ratio of the joint probability of occurrence of the two traces under the hypothesis that they arose from the same source to its probability under the hypothesis that they arose from different sources. A convenient solution is to replace the joint probability of the traces with the probability of a distance between the two traces quantifying their dissimilarity (6, 9–14). If, as is most often the case, the distance is scalar, there is a substantial loss of information.…”
Section: Introduction
confidence: 99%
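The distance-based shortcut described in the excerpt can be sketched as follows (synthetic folded-Gaussian distances and a fixed kernel bandwidth are illustrative assumptions, not the cited papers' models): estimate the density of the scalar distance under each hypothesis with a kernel density estimate, then take their ratio as the LR.

```python
import numpy as np

def kde_pdf(samples, x, bandwidth=0.2):
    """Density at x under a Gaussian kernel density estimate of samples."""
    z = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * z**2)) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
d_same = np.abs(rng.normal(0.0, 0.5, 500))   # distances for same-source pairs
d_diff = np.abs(rng.normal(2.0, 0.5, 500))   # distances for different-source pairs

def distance_lr(d):
    """LR = p(distance | same source) / p(distance | different source)."""
    return kde_pdf(d_same, d) / kde_pdf(d_diff, d)
```

A small distance then yields an LR above 1 (support for same source) and a large distance an LR below 1; the convenience comes at the cost the excerpt notes, since reducing the two traces to one scalar distance discards information.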