2012
DOI: 10.1109/tpami.2012.25
A Model-Based Sequence Similarity with Application to Handwritten Word Spotting

Abstract: This paper proposes a novel similarity measure between vector sequences. We work in the framework of model-based approaches, where each sequence is first mapped to a Hidden Markov Model (HMM) and then a measure of similarity is computed between the HMMs. We propose to model sequences with semicontinuous HMMs (SC-HMMs). This is a particular type of HMM whose emission probabilities in each state are mixtures of shared Gaussians. This crucial constraint provides two major benefits. First, the a priori information…
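The abstract's key constraint is that every SC-HMM state draws its emission density from a single shared pool of Gaussians, differing only in its mixture weights. A minimal sketch of such an emission computation, assuming diagonal covariances for simplicity (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # Diagonal-covariance Gaussian density (a simplifying assumption).
    norm = np.prod(2.0 * np.pi * var) ** -0.5
    return norm * np.exp(-0.5 * np.sum((x - mean) ** 2 / var))

def sc_hmm_emission(x, shared_means, shared_vars, state_weights):
    """Emission likelihood of one frame x in one SC-HMM state.

    All states share the same Gaussian pool (shared_means, shared_vars);
    a state is characterized only by its mixture weights state_weights.
    """
    densities = np.array([gaussian_pdf(x, m, v)
                          for m, v in zip(shared_means, shared_vars)])
    return float(state_weights @ densities)
```

Because the Gaussian pool is fixed across states (and, in the model-based framework, across sequences), two SC-HMMs can be compared directly through their weight vectors rather than through full per-state densities.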

Cited by 89 publications (60 citation statements) · References 36 publications
“…We first compare with our reimplementation of the exemplar SVM-based approach of [1], where, at query time, a classifier is trained using the query as a positive sample and the training set as negative samples, and the model is used to rank the dataset. We also compare with [25]; although the setups are not exactly comparable, we use partitions of the same size and very similar protocols. We also report results using the character HMM of [7], as well as the results reported in [9] using that method with a simpler subset.…”
Section: Methods
confidence: 99%
“…Note that this approach restricts one to keywords that need to be learned offline, usually with large amounts of data. In [25], this problem is solved by learning a semicontinuous HMM (SC-HMM). The parameters of the SC-HMM are learned on a pool of unsupervised samples.…”
Section: Supervised Word Representation With PHOC Attributes
confidence: 99%
“…Finally, by using a similarity measure, commonly Dynamic Time Warping (DTW) or a Hidden Markov Model (HMM)-based similarity, the query word is compared and candidates are ranked according to this similarity. Examples of this framework are the works of Rath and Manmatha [16] and Rodríguez-Serrano and Perronnin [18]. One of the main drawbacks of these systems is that they need to perform a costly and error-prone segmentation step to select candidate windows.…”
Section: Introduction
confidence: 99%
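The DTW similarity mentioned in the citation statement above aligns two variable-length feature sequences by minimizing accumulated frame-to-frame cost. A minimal sketch, assuming Euclidean frame distance (function name and API are illustrative, not the cited authors' implementation):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic Time Warping distance between feature sequences.

    x: (n, d) array, y: (m, d) array; frames are compared with the
    Euclidean distance, and the warping path may repeat frames.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three allowed predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In segmentation-based word spotting, each candidate window is reduced to such a sequence of column features and ranked by this distance to the query, which is exactly the stage where the costly candidate-selection step criticized in the quote arises.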