2019
DOI: 10.1561/9781680835977

Group Testing: An Information Theory Perspective

Abstract: The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items. A test is positive if the pool contains at least one defective, and negative if it contains no defectives. This is a sparse inference problem with a combinatorial flavour, with applications in medical testing, biology, telecommunications, information technology, data science, and more. In this monograph, we survey recent developments in the group testing problem from …


Cited by 68 publications (73 citation statements).
References 189 publications (497 reference statements).
“…However, the number of tests in each search scales as log₂ n ∼ log₂(1/p) for the optimal value of n, at low p. In contrast, the most efficient known parallel searches, called "noiseless, nonadaptive" tests in Ref. [18], require a factor of e ln 2 more tests (see their equations (2.8) and (2.10)), just as our algorithm does.…”
Section: Appendix B: Information Theory Bounds
confidence: 80%
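As a rough numerical illustration of the scaling quoted above, the following sketch compares the ~log₂(1/p) adaptive test count per search with the nonadaptive count that is larger by the quoted factor of e ln 2 ≈ 1.88. Taking n ≈ 1/p as the search size is an assumption made here for illustration; the optimal value in the citing paper may differ.

```python
import math

# Hedged sketch: compare the ~log2(1/p) adaptive test count per search with
# the nonadaptive count, which the statement says is larger by a factor of
# e*ln(2) ≈ 1.88. The choice n ≈ 1/p is an illustrative assumption.
for p in (1e-2, 1e-3, 1e-4):
    n = 1 / p                                      # assumed search size at low p
    adaptive = math.log2(n)                        # ~ log2(1/p) tests per search
    nonadaptive = math.e * math.log(2) * adaptive  # factor e*ln 2 more tests
    print(f"p={p:.0e}: adaptive ≈ {adaptive:.1f} tests, "
          f"nonadaptive ≈ {nonadaptive:.1f} tests")
```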
“…The advantage of our approach over Dorfman's is that of going to higher dimensions and using group testing uniformly. Whereas testing procedures may be represented as matrices in Dorfman's and other approaches [18], in our approach higher-dimensional tensors are required.…”
confidence: 99%
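To make the matrix representation mentioned above concrete, here is a minimal sketch of a non-adaptive design written as a binary test matrix; the particular matrix and defective set are illustrative choices, not taken from either paper.

```python
import numpy as np

# Hedged sketch: a non-adaptive group-testing design as a binary matrix X
# (tests x items). Entry X[t, i] = 1 means item i is placed in pool t.
X = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 1]])

defective = np.array([0, 1, 0, 0, 0, 1])     # items 1 and 5 are defective

# A pool is positive iff it contains at least one defective item.
outcomes = (X @ defective > 0).astype(int)
print(outcomes)                              # [1 1 0 1]
```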
“…It should be acknowledged that this valuable boost does not explore the theoretically and practically achievable rates of compressive sampling. While theoretically perfect reconstruction of the original test results is available with not much more than k log₂(N/k) measurements (25), where N is the number of samples and k is the number of positive cases, the rates achievable with modern decoding algorithms are close to these theoretical bounds. This means a 10- to 20-fold rate increase for a 1–0.1% prevalence band is possible with more sophisticated pooling schemes and decoding algorithms.…”
Section: Population Level Scanning for COVID-19
confidence: 89%
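A back-of-the-envelope check of the quoted k log₂(N/k) measurement count, under the assumption of an illustrative population of N = 10,000 samples (a number chosen here, not given in the source):

```python
import math

# Hedged sketch: with N samples and k positives, roughly k*log2(N/k)
# measurements suffice in theory, so the saving over individual testing
# is about N / (k*log2(N/k)). N = 10_000 is an illustrative assumption.
N = 10_000
for prevalence in (0.01, 0.001):             # the 1%–0.1% band mentioned
    k = max(1, int(prevalence * N))
    measurements = k * math.log2(N / k)
    print(f"prevalence {prevalence:.1%}: ~{measurements:.0f} measurements "
          f"vs {N} individual tests (≈{N / measurements:.0f}x saving)")
```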
“…As collecting blood samples and performing a single Wassermann test for each man appeared to be quite resource-demanding in the circumstances of World War II, pooling blood samples and performing group tests was observed to be quite effective since the disease was relatively rare. Later on, group testing became a popular topic in the information theory field, enabling orders-of-magnitude savings in the number of tests while still pinpointing sparse positives accurately (25). Similarly, the attractiveness of recovering sparse signals from a small number of measurements led, in the mid-2000s, to the birth of an entire research area called compressive sampling (compressed sensing) in the signal processing field around (26).…”
Section: Population Level Scanning for COVID-19
confidence: 99%
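For context on Dorfman's two-stage scheme referred to above, a short sketch of its standard analysis: pool s samples and retest each member individually only if the pool is positive, giving an expected 1/s + 1 − (1 − p)^s tests per sample at prevalence p. The prevalence values below are illustrative.

```python
# Hedged sketch of Dorfman's two-stage scheme: pool s samples; if the pool
# tests positive, retest each member individually. The prevalences are
# illustrative values, not figures from the source.
def dorfman_tests_per_sample(p, s):
    return 1 / s + 1 - (1 - p) ** s

for p in (0.01, 0.001):
    best_s = min(range(2, 200), key=lambda s: dorfman_tests_per_sample(p, s))
    rate = dorfman_tests_per_sample(p, best_s)
    print(f"prevalence {p:.1%}: pool size {best_s}, "
          f"~{rate:.3f} tests per sample ({1 / rate:.1f}x saving)")
```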
“…Because the positive samples are indistinguishable from negative samples, a test must be performed on a sample or a group of samples in order to determine their status. The test is typically assumed to always be accurate, even when many samples are tested together (in practice this is often not the case, and approaches that consider test error and constraints on the number of samples per pool have been examined [19, 20]). In the worst case, all of the samples would need to be tested individually, requiring N tests.…”
Section: Introduction
confidence: 99%
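Under the noiseless assumption described above, the simplest non-adaptive decoding rule (often called COMP in the group-testing literature) marks every item that appears in at least one negative pool as non-defective and declares the rest defective. A minimal sketch with an illustrative random design, not a construction from the cited papers:

```python
import numpy as np

# Hedged sketch of the COMP-style rule under the noiseless assumption:
# any item included in a negative pool cannot be defective.
# Design matrix, sizes, and defective set below are illustrative.
rng = np.random.default_rng(0)
N, T = 20, 10                                   # items, pools
X = (rng.random((T, N)) < 0.2).astype(int)      # random pooling design
truth = np.zeros(N, dtype=int)
truth[[3, 11]] = 1                              # two defective items

outcomes = (X @ truth > 0).astype(int)          # noiseless OR of pooled items

# Mark items seen in at least one negative pool as definitely non-defective;
# everything else is declared defective (possibly with false positives).
in_negative_pool = X[outcomes == 0].sum(axis=0) > 0
estimate = (~in_negative_pool).astype(int)
print("declared defective:", np.flatnonzero(estimate))
```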