Proceedings of 1995 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.1995.531122

On the context tree maximizing algorithm

Cited by 12 publications (13 citation statements). References 1 publication.
“…A tree with the same number of leaves but a higher order requires more bits to detail all the layers of the longer branches. This condition often arises in compression where an optimal trade-off between the code length of the sequence and the cost of the model is desired (Volf and Willems 1995). With this penalty term, we construct our objective function as a trade-off between the model complexity and finding a tree model that maximizes the a posteriori probability:…”
Section: Context Tree Estimation
confidence: 99%
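For reference, the trade-off this excerpt describes is usually written as a MAP criterion over tree models. A plausible formalization, following the standard CTW/CTM literature rather than quoting this paper (the notation Γ_D and P_e is assumed from that literature):

\hat{T} \;=\; \arg\max_{T}\; P(T \mid x_1^n)
       \;=\; \arg\max_{T}\; 2^{-\Gamma_D(T)} \prod_{s \in \mathcal{L}(T)} P_e(a_s, b_s)

Here L(T) is the leaf (suffix) set of the tree T, Γ_D(T) is the description length in bits of T under the natural tree code (one bit per node at depth below the maximum depth D), and P_e(a_s, b_s) is the Krichevsky–Trofimov estimate for the a_s zeros and b_s ones observed in context s. The factor 2^{-Γ_D(T)} is the penalty term the excerpt refers to.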
“…minimized among set I. This objective function can be solved recursively, and the penalty term can be readily broken down and incorporated into the recursive optimization process (Volf and Willems 1995). We define the maximized probability P*_s at node s as P*_s …”
Section: Context Tree Estimation
confidence: 99%
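The recursion this excerpt refers to is the core of the context tree maximizing (CTM) algorithm. Below is a minimal sketch of one common reading of it, assuming a binary alphabet, Krichevsky–Trofimov (KT) leaf estimates, and the usual one-bit-per-node model cost; all names (kt_prob, Node, build_tree, maximize, map_suffixes) are illustrative, not from the paper.

from fractions import Fraction

def kt_prob(a, b):
    # Krichevsky-Trofimov probability of a binary block with `a` zeros
    # and `b` ones (the estimate is exchangeable, so order is irrelevant).
    p, zeros, ones = Fraction(1), 0, 0
    for _ in range(a):
        p *= Fraction(2 * zeros + 1, 2 * (zeros + ones + 1))
        zeros += 1
    for _ in range(b):
        p *= Fraction(2 * ones + 1, 2 * (zeros + ones + 1))
        ones += 1
    return p

class Node:
    def __init__(self):
        self.a = 0          # zeros observed in this context
        self.b = 0          # ones observed in this context
        self.children = {}  # preceding symbol -> deeper context node

def build_tree(x, D):
    # Accumulate, for every context (suffix) of length <= D, the counts
    # of the symbols that followed it in the 0/1 sequence x.
    root = Node()
    for t, sym_t in enumerate(x):
        node, d = root, 0
        while True:
            if sym_t == 0:
                node.a += 1
            else:
                node.b += 1
            if d == D or t - d - 1 < 0:
                break
            node = node.children.setdefault(x[t - d - 1], Node())
            d += 1
    return root

def maximize(node, depth, D):
    # CTM recursion: P*_s = P_e(a_s, b_s) at depth D, and otherwise
    # P*_s = (1/2) * max(P_e(a_s, b_s), product of the children's P*).
    # Returns (P*_s, subtree), where subtree is None if s is a MAP leaf.
    pe = kt_prob(node.a, node.b)
    if depth == D:
        return pe, None
    prod, subtrees = Fraction(1), {}
    for sym, child in node.children.items():
        p_star, sub = maximize(child, depth + 1, D)
        prod *= p_star
        subtrees[sym] = sub
    if not node.children or pe >= prod:
        return Fraction(1, 2) * pe, None       # keep s as a leaf
    return Fraction(1, 2) * prod, subtrees     # split s into its children

def map_suffixes(subtree, prefix=()):
    # Enumerate the leaf contexts (suffix set) of the MAP tree,
    # most recent symbol first.
    if subtree is None:
        yield prefix
    else:
        for sym, sub in subtree.items():
            yield from map_suffixes(sub, prefix + (sym,))

Because the penalty enters as a factor of 1/2 per visited node, it folds directly into the recursion, which is the decomposition the excerpt describes.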
“…As we found in extensive simulation studies, the bias of the CTW estimator converges much faster than the biases of the LZ estimators, while keeping the advantage of dealing with long-range dependence. Moreover, from the CTW we can obtain an explicit statistical model for the data, the "maximum a posteriori probability" (MAP) tree described in [14]. The importance of these models comes from the fact that, in the information-theoretic context, they can be operationally interpreted as the "best" tree models for the data at hand.…”
Section: A. Entropy Estimates
confidence: 99%
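For concreteness, the CTW-based entropy estimator this excerpt alludes to is, in its standard form, the per-symbol CTW code length; a sketch, with P_w denoting the weighted (mixture) probability that CTW assigns to the data x_1^n:

\hat{H}_{\mathrm{CTW}} \;=\; -\frac{1}{n} \log_2 P_w\!\left(x_1^n\right)

Its bias is governed by the redundancy of the CTW mixture, which is consistent with the faster convergence the excerpt reports relative to LZ-based estimators.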
“…We computed the MAP tree models [14] derived from spike train data using the CTW algorithm with depth D = 100. Figure 3 shows the suffix sets of two cells' MAP trees, sorted in descending order of suffix frequency.…”
Section: B. MAP Tree Models for Spike Trains
confidence: 99%
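Reusing the sketch above, a hypothetical reproduction of this step (the synthetic input, the reduced depth, and the helper suffix_count are illustrative; the citing paper uses recorded spike trains and D = 100):

import random

random.seed(0)
x = [1 if random.random() < 0.2 else 0 for _ in range(5000)]  # toy spike train

root = build_tree(x, D=8)        # the paper uses D = 100
_, tree = maximize(root, 0, 8)

def suffix_count(root, suffix):
    # Occurrence count of a context: follow it down the count tree.
    node = root
    for sym in suffix:
        node = node.children[sym]
    return node.a + node.b

# Suffix set of the MAP tree, in descending order of suffix frequency,
# mirroring the sorting described for the paper's Figure 3.
for s in sorted(map_suffixes(tree), key=lambda s: -suffix_count(root, s)):
    print(s, suffix_count(root, s))

On i.i.d. toy input the MAP tree will typically collapse to the empty context; structured spike trains are what yield the nontrivial suffix sets shown in the paper.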