A GA based hierarchical feature selection approach for handwritten word recognition
2019
DOI: 10.1007/s00521-018-3937-8

Cited by 140 publications
(45 citation statements)
References 40 publications
“…Compared to the other sets of values in consideration, these values provide better results in pre-processing, as also depicted pictorially in Figure 3. Finally, the contrast-normalized image's intensity value of each pixel is given by Equation (3). Figure 4 shows the contrast-normalized output, corresponding to the input word image.…”
Section: Contrast Normalization (mentioning)
confidence: 99%
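The excerpt above refers to a contrast-normalization step whose exact expression (its Equation (3)) is not reproduced in this report. Purely as an illustration of the idea, the sketch below applies a common percentile-clipped min-max contrast stretch to a grayscale word image; the function name, the percentile limits, the output range, and the synthetic `word_img` are all assumptions, not the formula used by the citing authors.

```python
import numpy as np

def contrast_normalize(img: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
    """Percentile-clipped min-max contrast stretch to the range [0, 255].

    Generic stand-in for the contrast-normalization step the excerpt
    mentions; the citing paper's Equation (3) may differ.
    """
    img = img.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    stretched = (np.clip(img, lo, hi) - lo) / (hi - lo)   # values now in [0, 1]
    return (stretched * 255.0).round().astype(np.uint8)

# Example on a synthetic low-contrast "word image" (intensities squeezed into 90-159).
rng = np.random.default_rng(0)
word_img = rng.integers(90, 160, size=(64, 256), dtype=np.uint8)
normalized = contrast_normalize(word_img)
print(word_img.min(), word_img.max(), "->", normalized.min(), normalized.max())
```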
“…Such efforts of obtaining the underlying machine-readable text from handwritten documents have opened up a new research domain known as handwritten text recognition (HTR). Despite some notable success in HTR as found in the literature [1][2][3], many uncertain problems associated with HTR remain unresolved. These problems are mainly related to variations in writing styles among different individuals, as well as that within a single individual, due to changes in mood, age, time, environment, or situation, etc.…”
Section: Introduction (mentioning)
confidence: 99%
“…This ultimately stretches the dimension of the feature set and brings down the overall accuracy in making predictions. Feature selection (FS), an initiative to optimize feature dimensions undertaken by various researchers, aims at extricating redundant/irrelevant features that do not make any significant contribution to the overall prediction process [7,8]. FS is a useful way for substantial reduction of the size of original-feature vectors used to predict target facial emotions expressed by humans.…”
Section: Introduction (mentioning)
confidence: 99%
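The cited paper proposes a GA based hierarchical feature selection method, but its encoding, operators, and fitness function are not given in this excerpt. The sketch below is only a minimal, generic GA over binary feature masks, using cross-validated k-NN accuracy as a stand-in fitness; the digits dataset, population size, operator rates, and generation count are illustrative assumptions, not the authors' design.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = load_digits(return_X_y=True)          # 64 pixel-intensity features per digit image
n_features = X.shape[1]

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of k-NN on the feature subset encoded by a binary mask."""
    if mask.sum() == 0:                       # an empty subset cannot classify anything
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def tournament(pop, scores, k=3):
    """Return the fittest of k randomly drawn individuals."""
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(scores[idx])]]

# Generic GA loop: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, n_features))
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(len(pop)):
        p1, p2 = tournament(pop, scores), tournament(pop, scores)
        cross = rng.random(n_features) < 0.5          # uniform crossover
        child = np.where(cross, p1, p2)
        flip = rng.random(n_features) < 0.02          # bit-flip mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.array(children)

scores = np.array([fitness(ind) for ind in pop])
best = pop[np.argmax(scores)]
print(f"best subset uses {best.sum()} of {n_features} features, CV accuracy {scores.max():.3f}")
```

The binary mask is the standard GA encoding for feature selection: each bit switches one feature on or off, so the search trades subset size against predictive accuracy without enumerating all 2^n subsets.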
“…Depending on the criteria of evaluation, FS is broadly divided into three separate categories, namely filter [6][7][8], wrapper [9][10][11] and embedded [12][13][14] models. Filter methods use the statistical characteristics and intrinsic properties of features to evaluate candidate solutions, whereas wrapper methods use a learning algorithm (a classifier) to evaluate the solutions at every iteration.…”
Section: Introduction (mentioning)
confidence: 99%
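To make the filter/wrapper distinction in the excerpt concrete, here is a small sketch contrasting a statistical filter (mutual-information ranking, classifier-independent) with a simple wrapper (greedy forward selection scored by a classifier at each step). The digits dataset, mutual information, k-NN, and the subset sizes are assumptions chosen for illustration, not choices taken from the cited works.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)   # 64 pixel-intensity features

# Filter model: rank features by an intrinsic statistic (mutual information
# with the class label), with no classifier in the loop, and keep the top 10.
filter_selector = SelectKBest(score_func=mutual_info_classif, k=10)
filter_selector.fit(X, y)
print("filter keeps features:", np.flatnonzero(filter_selector.get_support()))

# Wrapper model: greedy forward selection, where every candidate subset is
# evaluated by actually cross-validating a classifier. This is more faithful
# to the final predictor but far more expensive than the filter above.
clf = KNeighborsClassifier(n_neighbors=3)
selected = []
for _ in range(5):                                   # greedily pick 5 features
    remaining = [f for f in range(X.shape[1]) if f not in selected]
    cv_scores = [cross_val_score(clf, X[:, selected + [f]], y, cv=3).mean()
                 for f in remaining]
    selected.append(remaining[int(np.argmax(cv_scores))])
print("wrapper keeps features:", sorted(selected))
```

Embedded models (not shown) would instead fold the selection into the training objective itself, for example via an L1-regularized linear model whose zeroed coefficients discard features.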
“…Flowchart of the CGA model consisting of two different segments, namely EGA and Coalition game, interacting with each other. The dataset [11] consists of six classes, namely Boxing, Hand-clapping, Hand-waving, Jogging, Running, Walking. The dataset consists of 599 videos which are equally divided among the classes (100 videos each), except Hand-clapping, which consists of 99 videos.…”
(mentioning)
confidence: 99%