Background and Objective: At-home rapid antigen tests provide a convenient and expedited way to learn one's severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. However, the low sensitivity of at-home antigen tests presents a challenge. This study examines the accuracy of at-home tests when combined with computer-facilitated symptom screening. Methods: The study used primary data collected in 2 phases: phase 1, during the period in which the alpha variant of SARS-CoV-2 was predominant in the United States, and phase 2, during the surge of the delta variant. Four hundred sixty-one study participants were included in the analyses from phase 1 and 374 from phase 2. Phase 1 data were used to develop a computerized symptom screening tool, using ordinary logistic regression with interaction terms, which predicted coronavirus disease-2019 (COVID-19) reverse transcription polymerase chain reaction (RT-PCR) test results. Phase 2 data were used to validate the accuracy of predicting COVID-19 diagnosis with (1) computerized symptom screening; (2) at-home rapid antigen testing; (3) the combination of both screening methods; and (4) the combination of symptom screening and vaccination status. The McFadden pseudo-R² was used as a measure of the percentage of variation in RT-PCR test results explained by the various screening methods. Results: The McFadden pseudo-R² for the first at-home test, the second at-home test, and computerized symptom screening was 0.274, 0.140, and 0.158, respectively. Scores between 0.2 and 0.4 indicate moderate accuracy. The first at-home test had low sensitivity (0.587) and high specificity (0.989). Adding a second at-home test did not improve the sensitivity of the first test. Computerized symptom screening improved the accuracy of the first at-home test, adding 0.131 points to its sensitivity and 6.9% to its pseudo-R².
The combination of computerized symptom screening and vaccination status was the most accurate method of screening patients in the community for COVID-19, that is, an active infection with SARS-CoV-2 (pseudo-R² = 0.476). Conclusion: Computerized symptom screening could either improve or, in some situations, replace at-home antigen tests for individuals experiencing COVID-19 symptoms.
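The McFadden pseudo-R² used throughout this abstract compares the log-likelihood of the fitted model with that of a null (intercept-only) model that predicts the base rate for everyone. A minimal Python sketch (numpy only; the toy outcome and probability vectors are illustrative, not study data):

```python
import numpy as np

def log_likelihood(y, p):
    # Bernoulli log-likelihood of binary outcomes y given predicted probabilities p
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def mcfadden_pseudo_r2(y, p_model):
    # McFadden pseudo-R^2 = 1 - LL(model) / LL(null), where the null
    # model predicts the sample base rate y.mean() for every case.
    ll_model = log_likelihood(y, p_model)
    ll_null = log_likelihood(y, np.full_like(p_model, y.mean()))
    return 1.0 - ll_model / ll_null

# Toy example: predicted probabilities that track the outcome fairly well
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])
p = np.array([0.1, 0.2, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6])
print(round(mcfadden_pseudo_r2(y, p), 3))
```

A value near 0 means the model predicts no better than the base rate; values between 0.2 and 0.4 are conventionally read as moderate fit, which is the scale the abstract applies to the 0.274 and 0.476 figures.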
This study uses two existing data sources to examine how patients' symptoms can be used to differentiate COVID-19 from other respiratory diseases. One dataset consisted of 839,288 laboratory-confirmed, symptomatic COVID-19-positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2020, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza-positive cases and 812 cases of symptomatic influenza-like illness. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. Comparisons were made across 45 scenarios, each making different assumptions about the prevalence of COVID-19 (2%, 4%, and 6%), influenza (0.01%, 3%, 6%, 9%, and 12%), and influenza-like illness (1%, 3.5%, and 7%). For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables (age, gender) and 10 symptoms (cough, fever, chills, diarrhea, nausea and vomiting, shortness of breath, runny nose, sore throat, myalgia, and headache). The 5-fold cross-validated area under the receiver operating characteristic curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on several factors, including (1) the prevalence of the pathogens that cause COVID-19, influenza, and influenza-like illness; (2) the age of the patient; and (3) the presence of other symptoms. The model that relied on 5-way combinations of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without a computer-based decision aid.
Study results encourage the development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions.
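The cross-validated AROC used in these studies needs no ROC plot to compute: the AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (the Mann-Whitney formulation). A minimal numpy sketch, assuming a generic scoring model; the fold splitter and toy scores are illustrative, not the study's code:

```python
import numpy as np

def auc_roc(y_true, scores):
    # AUC via the Mann-Whitney U statistic: the fraction of
    # (positive, negative) pairs in which the positive case receives
    # the higher score; ties count as half a win.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def kfold_indices(n, k=5, seed=0):
    # Shuffle row indices and split them into k roughly equal folds.
    # Cross-validated AROC is then the mean of per-fold AUCs, each fold
    # scored by a model trained on the remaining k-1 folds.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])
print(auc_roc(y, s))  # one misranked pair out of nine
```

An AROC of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why the 90% figure above is read as accurate discrimination between influenza and COVID-19.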
Background and Objectives: COVID-19 symptoms change after onset: some appear early, others later. This article examines whether the order in which symptoms occur can improve diagnosis of COVID-19 before test results are available. Methods: In total, 483 individuals who had completed a COVID-19 test were recruited through Listservs. Participants then completed an online survey about their symptoms and test results. The order of symptoms was set according to (a) whether the participant had a "history of the symptom" due to a prior condition and (b) whether the symptom "occurred first," or prior to, other symptoms of COVID-19. Two LASSO (Least Absolute Shrinkage and Selection Operator) regression models were developed. The first model, referred to as "time-invariant," used demographics and symptoms but not the order of symptom occurrence. The second model, referred to as "time-sensitive," used the same data set but included the order of symptom occurrence. Results: The average cross-validated area under the receiver operating characteristic curve (AROC) for the time-invariant model was 0.784. The time-sensitive model had an AROC of 0.799. The difference between the 2 accuracy levels was statistically significant at the α = .05 level. Conclusion: The order of symptom occurrence made a statistically significant, but small, improvement in the accuracy of the diagnosis of COVID-19.
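The difference between the two models lies entirely in the feature set handed to the LASSO. A hypothetical sketch of that encoding (the symptom list, record fields, and feature names here are illustrative assumptions, not the study's actual variables): the time-invariant model sees only presence/absence indicators, while the time-sensitive model adds "history of the symptom" and "occurred first" indicators per symptom.

```python
SYMPTOMS = ["cough", "fever", "chills", "sore_throat", "headache"]

def encode(record, time_sensitive=True):
    # record: {"symptoms": set of current symptoms,
    #          "history":  set of symptoms pre-dating this illness,
    #          "first":    the symptom that appeared first, or None}
    feats = {}
    for s in SYMPTOMS:
        feats[f"has_{s}"] = int(s in record["symptoms"])
        if time_sensitive:
            # Order-of-occurrence indicators used only by the
            # "time-sensitive" model.
            feats[f"history_{s}"] = int(s in record["history"])
            feats[f"first_{s}"] = int(record["first"] == s)
    return feats

r = {"symptoms": {"cough", "fever"}, "history": {"cough"}, "first": "fever"}
print(encode(r))
```

Both feature dictionaries would then be fed to the same LASSO logistic regression, so any gain in AROC is attributable to the added ordering information rather than to a different model class.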