2000
DOI: 10.1159/000047425

Computer-Automated Caries Detection in Digital Bitewings: Consistency of a Program and Its Influence on Observer Agreement

Abstract: The aim of this study was to evaluate a decision-support caries detection program and its influence on observer agreement in caries diagnosis. 130 patients were examined by digital bitewing radiography (RVG XL sensor, Trophy Radiologie Inc.). Fifty-four approximal surfaces (27 in premolars and 27 in molars) were selected by the author: 24 surfaces (9 in molars and 15 in premolars) scored as sound, 16 surfaces (9 in molars and 7 in premolars) scored as carious in enamel, and 14 surfaces (9 in molars and 5 in p…

Cited by 32 publications (5 citation statements); references 9 publications.
“…However, Logicon appears to be more reliable in ruling out carious lesions than in detecting them whether in the enamel or in dentin (17). In another study, Wenzel et al (18) found mean kappa value for inter-observer agreement for caries scores to be 0.47 before and 0.48 after the use of LCD, which did not improve using the program. They also found that the program was not consistent and provided diverse opinions on the caries status.…”
Section: Discussion (mentioning)
confidence: 95%
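The excerpt above quotes a mean kappa of 0.47 before and 0.48 after use of the program as the measure of inter-observer agreement. As a point of reference only, the following is a minimal sketch of how unweighted Cohen's kappa is computed for two observers scoring the same surfaces; the scores and the three-category scale below are hypothetical and are not data from the cited studies.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' categorical scores."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed agreement: proportion of surfaces scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement, from each rater's marginal score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical scores for ten approximal surfaces read by two observers
# (0 = sound, 1 = carious in enamel, 2 = carious in dentine).
observer_1 = [0, 0, 1, 2, 0, 1, 0, 2, 1, 0]
observer_2 = [0, 1, 1, 2, 0, 0, 0, 2, 2, 0]
print(f"kappa = {cohens_kappa(observer_1, observer_2):.2f}")

A kappa near 0.5, as reported in the cited study, is conventionally read as moderate agreement; values near 0 indicate agreement no better than chance.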
“…One clinical study assessed the reproducibility of detecting approximal lesions by repeating the automated analysis 10 times for each surface, and concluded that the LCD program was not consistent and provided different outcomes for the same surface in the same image. [215] Also, observer agreement did not improve using the program. [215] In an ex vivo study, specificities for the outcome of the LCD program were significantly lower than when the observers themselves assessed the RVG images. [216] Sensitivity was also lower for two observers on the diagnostic threshold caries in dentine.…”
Section: Machine-intelligence Supported Systems (mentioning)
confidence: 89%
“…The mean specificity is 0.94 and 0.95, respectively. [1] Reported rater agreement varies among radiographic studies of caries detection; [2-5] this may be owing to differences in underlying sample characteristics, such as lesion depth, dentition, surface location and caries prevalence, and to methodological heterogeneity, such as the number of surfaces, number of raters and scoring categories.…”
Section: Introduction (mentioning)
confidence: 99%
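The last excerpt cites mean specificities of 0.94 and 0.95 for radiographic caries detection. For readers who want the arithmetic behind such figures, this is a minimal sketch of sensitivity and specificity for binary lesion calls against a reference standard; the counts are hypothetical and not taken from the cited studies.

def sensitivity_specificity(calls, reference):
    """calls and reference are equal-length lists of booleans (True = lesion present)."""
    tp = sum(c and r for c, r in zip(calls, reference))          # true positives
    tn = sum(not c and not r for c, r in zip(calls, reference))  # true negatives
    fp = sum(c and not r for c, r in zip(calls, reference))      # false positives
    fn = sum(not c and r for c, r in zip(calls, reference))      # false negatives
    sensitivity = tp / (tp + fn)  # lesions correctly detected
    specificity = tn / (tn + fp)  # sound surfaces correctly ruled out
    return sensitivity, specificity

# Hypothetical example: 20 surfaces, 6 truly carious, one false-positive call.
reference = [True] * 6 + [False] * 14
calls = [True, True, True, True, False, False] + [False] * 13 + [True]
sens, spec = sensitivity_specificity(calls, reference)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")

High specificity combined with lower sensitivity matches the pattern described in the first excerpt, where the program appeared more reliable at ruling out lesions than at detecting them.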