Introduction
Recent introductions of disease-modifying treatments for Alzheimer's disease have reinvigorated efforts toward early dementia detection. Cognitive "paper-and-pencil" tests remain the bedrock of clinical assessment because they are inexpensive, easy to administer, and require neither brain imaging nor biological testing. Cognitive tests vary greatly in duration, complexity, sociolinguistic biases, the cognitive domains they probe, and their specificity and sensitivity in detecting cognitive impairment (CI). Consequently, an ecologically valid head-to-head comparison seems essential for evidence-based dementia screening.
Method
We compared five tests (MoCA, ADAS, ACE-III, Eurotest, and Phototest) in a large sample of seniors (N = 456, age 77.9 ± 8 years, 71% female). The specificity and sensitivity of each test were estimated in a novel way by contrasting its outcome with the majority outcome across the remaining tests (Comparative Specificity & Sensitivity Calculation, CSSC). This obviates the need for an a priori gold standard such as a clinically clear-cut sample of dementia/MCI patients and controls. We posit that the CSSC yields a more ecologically valid estimate of clinical performance while precluding biases arising from differing dementia/MCI diagnostic criteria and from varying proficiency in detecting these conditions.
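The CSSC procedure described above can be sketched as follows. This is an illustrative implementation under stated assumptions only: the outcome data are simulated, outcomes are taken as binary (impaired vs. normal), and ties in the four-test majority vote are counted as "impaired"; the study's actual tie-breaking rule is not specified in this abstract.

```python
# Illustrative sketch of the CSSC idea: score each test against the
# majority outcome of the remaining four tests. Data are simulated,
# NOT study data; the tie-breaking rule is an assumption.
import numpy as np

rng = np.random.default_rng(0)
tests = ["MoCA", "ADAS", "ACE-III", "Eurotest", "Phototest"]
# Simulated binary outcomes: 1 = test flags cognitive impairment.
outcomes = rng.integers(0, 2, size=(456, len(tests)))

def cssc(outcomes, i):
    """Comparative sensitivity and specificity of test i, using the
    majority vote of the remaining tests as the reference standard."""
    others = np.delete(outcomes, i, axis=1)
    # Majority of the other tests (ties counted as impaired here).
    reference = (others.sum(axis=1) * 2 >= others.shape[1]).astype(int)
    test = outcomes[:, i]
    sens = (test[reference == 1] == 1).mean()
    spec = (test[reference == 0] == 0).mean()
    return sens, spec

for i, name in enumerate(tests):
    sens, spec = cssc(outcomes, i)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

With real data, `outcomes` would hold each participant's dichotomized result on each instrument; the loop then reproduces one comparative sensitivity/specificity pair per test without any external gold standard.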
Results
We found a stark trade-off between specificity and sensitivity: the test with the highest specificity had the lowest sensitivity, and vice versa. The comparative specificities and sensitivities were, respectively: Phototest (97%, 47%), Eurotest (94%, 55%), ADAS (90%, 68%), ACE-III (72%, 77%), and MoCA (55%, 95%).
Conclusion
Assuming a CI prevalence of 10%, the shortest (~3 min) and simplest instrument, the Phototest, showed the best overall performance (accuracy 92%, PPV 66%, NPV 94%).
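The conclusion's figures follow from sensitivity, specificity, and prevalence via Bayes' rule; the computation below uses the Phototest values reported above. Small discrepancies with the reported PPV can arise from rounding of the published sensitivity and specificity to whole percentages.

```python
# Accuracy, PPV, and NPV from sensitivity, specificity, and prevalence
# (Bayes' rule). Inputs are the Phototest values from this abstract.
prevalence = 0.10
sens, spec = 0.47, 0.97

accuracy = prevalence * sens + (1 - prevalence) * spec
ppv = (prevalence * sens) / (
    prevalence * sens + (1 - prevalence) * (1 - spec))
npv = ((1 - prevalence) * spec) / (
    (1 - prevalence) * spec + prevalence * (1 - sens))

print(f"accuracy={accuracy:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
```

The same arithmetic shows why, at low prevalence, high specificity dominates overall accuracy and NPV, which is what favors the Phototest in this setting.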