2021
DOI: 10.1177/0272989x211021603

Maximizing the Efficiency of Active Case Finding for SARS-CoV-2 Using Bandit Algorithms

Abstract: Even as vaccination for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) expands in the United States, cases will linger among unvaccinated individuals for at least the next year, allowing the spread of the coronavirus to continue in communities across the country. Detecting these infections, particularly asymptomatic ones, is critical to stemming further transmission of the virus in the months ahead. This will require active surveillance efforts in which these undetected cases are proactively soug…

Cited by 5 publications (4 citation statements). References: 26 publications.
“…The algorithm used to direct pop-up SARS-CoV-2 testing for this project has been described in detail elsewhere [8][9][10]. The algorithm is based on Thompson sampling, which uses a Bayesian updating process: it iteratively samples from the prior probability distributions of all potential testing sites (the set of all locations at which testing is being considered) to home in on those with the highest probability, over the long run, of finding new cases of SARS-CoV-2 [16,17].…”
Section: Statistical Approach (mentioning; confidence: 99%)
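The quoted passage describes the approach only at a high level; as an illustration, here is a minimal sketch of Beta-Bernoulli Thompson sampling over candidate testing sites in Python. The Beta(1, 1) priors, the array bookkeeping, and the function names (select_site, update_site) are assumptions made for the sketch, not details of the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def select_site(alpha, beta):
        # Thompson sampling step: draw one value from each site's Beta posterior
        # over its per-test positivity rate and visit the site with the largest draw.
        return int(np.argmax(rng.beta(alpha, beta)))

    def update_site(alpha, beta, site, positives, tests):
        # Conjugate Beta update for the visited site: positive tests raise alpha,
        # negative tests raise beta, tightening that site's posterior.
        alpha[site] += positives
        beta[site] += tests - positives

    # Uniform Beta(1, 1) priors over five hypothetical candidate sites.
    alpha = np.ones(5)
    beta = np.ones(5)

Because the Beta distribution is conjugate to the Bernoulli test outcome, each day's update is just a pair of count increments, which is what keeps the iterative sample-and-update loop cheap to run between testing days.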
“…How to optimize resource allocation over time is a well-studied problem in sequential decision-making and reinforcement learning. These kinds of dilemmas have been extended with a spatial component in a variety of settings, from military search and rescue to oil exploration [8]. We have previously described the use of one set of tools, bandit algorithms, to address these kinds of problems for detection of HIV and SARS-CoV-2 in the community [8][9][10].…”
Section: Introduction (mentioning; confidence: 99%)
“…To determine which sites would be visited on which days, we developed a Bayesian bandit algorithm, an approach that is well studied in other search-and-rescue and needle-in-a-haystack scenarios, including aircraft recovery, oil exploration, and the optimization of slot machine (i.e., one-armed bandit) winnings. The algorithm and our site-selection process are described in detail elsewhere (Gonsalves et al., 2021).…”
Section: Designing Faast Testing Program (mentioning; confidence: 99%)
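The citing report defers the algorithmic details to Gonsalves et al. (2021), so the following is only an illustrative day-by-day scheduling loop under the same Beta-Bernoulli assumptions as the sketch above; the number of sites, the daily test volume, and the positivity rates are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-test positivity rates for three candidate sites; the true
    # rates are hidden from the algorithm and used only to simulate test results.
    true_rates = np.array([0.01, 0.04, 0.08])
    tests_per_day = 25
    alpha = np.ones_like(true_rates)  # Beta(1, 1) priors: alpha tracks positives + 1
    beta = np.ones_like(true_rates)   # beta tracks negatives + 1

    for day in range(1, 31):
        site = int(np.argmax(rng.beta(alpha, beta)))       # where to test today
        positives = rng.binomial(tests_per_day, true_rates[site])
        alpha[site] += positives                           # posterior update
        beta[site] += tests_per_day - positives
        print(f"day {day:2d}: site {site}, {positives}/{tests_per_day} positive")

Sites with little data keep wide posteriors and are therefore still drawn from time to time, which is how this kind of scheduler trades off revisiting productive sites against exploring under-sampled ones.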