2013
DOI: 10.3390/e15041202

Pushing for the Extreme: Estimation of Poisson Distribution from Low Count Unreplicated Data—How Close Can We Get?

Abstract: Studies of learning algorithms typically concentrate on situations where a potentially ever-growing training sample is available. Yet, there can be situations (e.g., detection of differentially expressed genes on unreplicated data or estimation of time delay in non-stationary gravitationally lensed photon streams) where only extremely small samples can be used in order to perform an inference. On unreplicated data, the inference has to be performed on the smallest sample possible: a sample of size 1. We study wheth…
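The setting the abstract describes (fitting a Poisson distribution to a single observed count and judging the fit against the true distribution) can be illustrated with a short sketch. This is a hypothetical illustration only, not the paper's actual estimators; the true rate, the zero-count floor, and all variable names are assumptions introduced here.

import numpy as np

# Illustrative sketch: estimate a Poisson distribution from an unreplicated
# sample (a single count) and score the estimate with the Kullback-Leibler
# divergence to the true distribution.
rng = np.random.default_rng(0)

true_rate = 3.0                 # assumed "true" Poisson rate (unknown in practice)
y = rng.poisson(true_rate)      # the entire data set: one count

# The plug-in maximum-likelihood estimate from a sample of size 1 is the count
# itself; a small positive floor avoids a degenerate rate when y == 0.
est_rate = max(float(y), 0.5)

# KL divergence between Poisson(a) and Poisson(b) has the closed form
#   KL = a*log(a/b) - a + b,  with a the true rate and b the estimated rate.
kl = true_rate * np.log(true_rate / est_rate) - true_rate + est_rate
print(f"observed count: {y}, estimated rate: {est_rate}, KL divergence: {kl:.4f}")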

Cited by 1 publication (1 citation statement)
References 13 publications
“…Such a comparison can be done by evaluating how close each predictive density f_p(y|x) is to the true density f(y|x; θ), where θ is a vector of unknown parameters. To judge the goodness-of-fit of a given predictive method [23][24][25], a common approach has been to assess the relative closeness with the average Kullback-Leibler (KL) divergence [26], which is defined by…”
Section: Kullback-Leibler Divergence
confidence: 99%
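The excerpt breaks off before the formula it introduces. For reference, the Kullback-Leibler divergence between the true density f(y|x; θ) and a predictive density f_p(y|x) is usually written as below; this is the standard definition, not necessarily the exact expression used in the citing paper. The average KL divergence is then its expectation over x, and for discrete data such as Poisson counts the integral becomes a sum.

D_{\mathrm{KL}}\bigl(f(\cdot \mid x;\theta) \,\big\|\, f_p(\cdot \mid x)\bigr)
  = \int f(y \mid x;\theta)\, \log \frac{f(y \mid x;\theta)}{f_p(y \mid x)}\, \mathrm{d}y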