Background: SARS-CoV-2 antigen rapid diagnostic tests (Ag-RDTs) are increasingly being integrated into testing strategies around the world. Studies of Ag-RDTs have shown variable performance. In this systematic review and meta-analysis, we assessed the clinical accuracy (sensitivity and specificity) of commercially available Ag-RDTs. Methods and findings: We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched multiple databases (PubMed, Web of Science Core Collection, medRxiv, bioRxiv, and FIND) for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 up until 30 April 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity in comparison to reverse transcription polymerase chain reaction (RT-PCR) testing. We assessed heterogeneity by subgroup analyses, and rated study quality and risk of bias using the QUADAS-2 assessment tool. From a total of 14,254 articles, we included 133 analytical and clinical studies resulting in 214 clinical accuracy datasets with 112,323 samples. Across all meta-analyzed samples, the pooled Ag-RDT sensitivity and specificity were 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. Sensitivity increased to 76.3% (95% CI 73.1% to 79.2%) if analysis was restricted to studies that followed the Ag-RDT manufacturers’ instructions. LumiraDx showed the highest sensitivity, with 88.2% (95% CI 59.0% to 97.5%). Of instrument-free Ag-RDTs, Standard Q nasal performed best, with 80.2% sensitivity (95% CI 70.3% to 87.4%). Across all Ag-RDTs, sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values, i.e., <20 (96.5%, 95% CI 92.6% to 98.4%) and <25 (95.8%, 95% CI 92.3% to 97.8%), in comparison to those with Ct ≥ 25 (50.7%, 95% CI 35.6% to 65.8%) and ≥30 (20.9%, 95% CI 12.5% to 32.8%).
Testing in the first week from symptom onset resulted in substantially higher sensitivity (83.8%, 95% CI 76.3% to 89.2%) compared to testing after 1 week (61.5%, 95% CI 52.2% to 70.0%). The best Ag-RDT sensitivity was found with anterior nasal sampling (75.5%, 95% CI 70.4% to 79.9%), in comparison to other sample types (e.g., nasopharyngeal, 71.6%, 95% CI 68.1% to 74.9%), although CIs were overlapping. Concerns of bias were raised across all datasets, and financial support from the manufacturer was reported in 24.1% of datasets. Our analysis was limited by the included studies’ heterogeneity in design and reporting. Conclusions: In this study, we found that Ag-RDTs detect the vast majority of SARS-CoV-2-infected persons within the first week of symptom onset and those with high viral load. Thus, they can have high utility for diagnostic purposes in the early phase of disease, making them a valuable tool to fight the spread of SARS-CoV-2. Standardization in conduct and reporting of clinical accuracy studies would improve comparability and use of data.
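The pooling step described in the abstract above can be illustrated with a minimal sketch. Note that this uses a simple two-stage DerSimonian-Laird estimator on logit-transformed sensitivities rather than the exact model fitted in the review, and the per-study true-positive/false-negative counts below are made-up numbers for illustration, not data from the included studies.

```python
import math

def pool_sensitivity_dl(studies):
    """DerSimonian-Laird random-effects pooling of logit-transformed
    sensitivities. Each study is a (true_positives, false_negatives)
    pair; a 0.5 continuity correction guards against zero cells."""
    logits, variances = [], []
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5            # continuity correction
        logits.append(math.log(tp / fn))        # logit of sensitivity
        variances.append(1 / tp + 1 / fn)       # within-study variance
    w = [1 / v for v in variances]              # fixed-effect weights
    fixed = sum(wi * li for wi, li in zip(w, logits)) / sum(w)
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logits))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * li for wi, li in zip(w_re, logits)) / sum(w_re)
    return 1 / (1 + math.exp(-pooled))          # back-transform to proportion

# Hypothetical per-study 2x2 counts (TP, FN), for illustration only
example = [(45, 15), (80, 30), (60, 25), (120, 40), (30, 20)]
pooled_sensitivity = pool_sensitivity_dl(example)
```

The logit transform keeps the pooled estimate inside (0, 1); the review itself reports both the pooled point estimate and a 95% CI, which would additionally require the standard error of the pooled logit.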
Background: Nasopharyngeal (NP) swabs are considered the highest-yield sample for diagnostic testing for respiratory viruses, including SARS-CoV-2. The need to increase capacity for SARS-CoV-2 testing in a variety of settings, combined with shortages of sample collection supplies, has motivated a search for alternative sample types with high sensitivity. We systematically reviewed the literature to understand the performance of alternative sample types compared to NP swabs. Methods: We systematically searched PubMed, Google Scholar, medRxiv, and bioRxiv (last retrieval October 1st, 2020) for comparative studies of alternative specimen types [saliva, oropharyngeal (OP), and nasal (NS) swabs] versus NP swabs for SARS-CoV-2 diagnosis using nucleic acid amplification testing (NAAT). A logistic-normal random-effects meta-analysis was performed to calculate % positive alternative-specimen, % positive NP, and % dual positives overall and in sub-groups. The QUADAS-2 tool was used to assess bias. Results: From 1,253 unique citations, we identified 25 saliva, 11 NS, 6 OP, and 4 OP/NS studies meeting inclusion criteria. Three specimen types captured lower % positives [NS (82%, 95% CI: 73-90%), OP (84%, 95% CI: 57-100%), saliva (88%, 95% CI: 81-93%)] than NP swabs, while combined OP/NS matched NP performance (97%, 95% CI: 90-100%). Absence of RNA extraction (saliva) and utilization of a more sensitive NAAT (NS) substantially decreased alternative-specimen yield. Conclusions: NP swabs remain the gold standard for diagnosis of SARS-CoV-2, although alternative specimens are promising. Much remains unknown about the impact of variations in specimen collection, processing protocols, and population (pediatric vs. adult, late vs. early in disease course), and head-to-head studies of sampling strategies are urgently needed.
In developed nations, monitoring for drug-induced liver injury via serial measurements of serum transaminases (aspartate aminotransferase (AST) and alanine aminotransferase (ALT)) in at-risk individuals is the standard of care. Despite the need, monitoring for drug-related hepatotoxicity in resource-limited settings is often limited by expense and logistics, even for patients at highest risk. This manuscript describes the development and clinical testing of a paper-based, multiplexed microfluidic assay designed for rapid, semi-quantitative measurement of AST and ALT in a fingerstick specimen. Using 223 clinical specimens obtained by venipuncture and 10 fingerstick specimens from healthy volunteers, we have shown that our assay can, in 15 minutes, provide visual measurements of AST and ALT in whole blood or serum that allow the user to place those values into one of three readout “bins” (<3x upper limit of normal (ULN), 3-5x ULN, and >5x ULN, corresponding to tuberculosis/HIV treatment guidelines) with >90% accuracy. These data suggest that the ultimate point-of-care fingerstick device will have a high impact on patient care in low-resource settings.
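The three-bin readout described in the abstract above amounts to a simple thresholding rule on the ratio of the measured transaminase value to its upper limit of normal. A minimal sketch follows; the ULN value of 40 U/L is an illustrative assumption, not the device's calibration, and the function name is hypothetical.

```python
# Assumed ULN for illustration; laboratories use sex- and assay-specific limits.
ULN_U_PER_L = 40

def readout_bin(value_u_per_l, uln=ULN_U_PER_L):
    """Map an AST or ALT value (U/L) into the three semi-quantitative
    bins described in the abstract: <3x ULN, 3-5x ULN, >5x ULN."""
    ratio = value_u_per_l / uln
    if ratio < 3:
        return "<3x ULN"
    if ratio <= 5:
        return "3-5x ULN"
    return ">5x ULN"
```

For example, with a ULN of 40 U/L, a reading of 160 U/L (4x ULN) falls in the middle bin that triggers closer monitoring under the tuberculosis/HIV treatment guidelines the abstract references.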
Summary: Laboratory diagnosis of Ebola virus disease plays a critical role in outbreak response efforts; however, establishing safe and expeditious testing strategies for this high-biosafety-level pathogen in resource-poor environments remains extremely challenging. Since the discovery of Ebola virus in 1976 via traditional viral culture techniques and electron microscopy, diagnostic methodologies have trended toward faster, more accurate molecular assays. Importantly, technological advances have been paired with increasing efforts to support decentralized diagnostic testing capacity that can be deployed at or near the point of patient care. The unprecedented scope of the 2014-2015 West Africa Ebola epidemic spurred tremendous innovation in this arena, and a variety of new diagnostic platforms that have the potential both to immediately improve ongoing surveillance efforts in West Africa and to transform future outbreak responses have reached the field. In this review, we describe the evolution of Ebola virus disease diagnostic testing and efforts to deploy field diagnostic laboratories in prior outbreaks. We then explore the diagnostic challenges pervading the 2014-2015 epidemic and provide a comprehensive examination of novel diagnostic tests that are likely to address some of these challenges moving forward.