2017
DOI: 10.1007/s10916-017-0819-z

Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition

Abstract: Subunit segmentation and modelling of medical sign language is an important area of study in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many prior efforts focused on functional subunits from the perspective of linguistic syllables, but such syllable-based subunit extraction is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed such that they only detect signer-dependent ac…

Cited by 12 publications (2 citation statements). References 30 publications.
“…In the decoding phase, a Transformer decoder is used to generate the translation. Elakkiya, R., et al. [109] extracted manual and non-manual features with BPaHMM [116], then denoised and dimensionally reduced them with a variational autoencoder (VAE). An LSTM and a 3D-CNN are employed as the generator and the discriminator, respectively.…”
Section: C: Other Methods
Mentioning confidence: 99%
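The denoising and dimension-reduction step described above can be sketched as a minimal VAE forward pass. This is an illustrative NumPy sketch, not the cited implementation: the 64-D feature size, the 8-D latent size, and the randomly initialised weights standing in for trained ones are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64-D concatenated manual + non-manual feature
# vector, compressed to an 8-D latent code. Illustrative numbers only.
FEAT_DIM, LATENT_DIM = 64, 8

# Randomly initialised encoder/decoder weights stand in for trained ones.
W_enc = rng.normal(scale=0.1, size=(FEAT_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, FEAT_DIM))

def encode(x):
    """Map a feature vector to the mean and log-variance of q(z|x)."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the reparameterisation trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Reconstruct (denoise) the feature vector from the latent code."""
    return z @ W_dec

x = rng.normal(size=FEAT_DIM)    # noisy sign feature vector
mu, log_var = encode(x)
z = reparameterize(mu, log_var)  # reduced 8-D representation
x_hat = decode(z)                # denoised reconstruction
print(z.shape, x_hat.shape)      # (8,) (64,)
```

The latent code `z` is what a downstream generator (here, an LSTM in the cited pipeline) would consume in place of the raw high-dimensional features.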
“…The HMM was mainly used in the field of speech recognition in the early days [13,14]. Although the HMM has achieved great success in speech recognition, its performance in SLR is not satisfactory.…”
Section: Introduction
Mentioning confidence: 99%
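The HMM-based recognition the statement refers to boils down to scoring an observation sequence against a trained model. A minimal sketch of that scoring step, the forward algorithm, is below; the 2-state, 3-symbol toy parameters are assumptions for illustration, not values from the cited work.

```python
import numpy as np

# Toy 2-state HMM over 3 discrete observation symbols.
pi = np.array([0.6, 0.4])        # initial state distribution
A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],   # emission probabilities per state
              [0.1, 0.3, 0.6]])

def forward(obs):
    """Return P(obs | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]            # initialise with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate and emit
    return alpha.sum()

print(round(forward([0, 1, 2]), 6))      # → 0.03628
```

In a recognition system, one such model is trained per sign (or per subunit) and the model with the highest sequence likelihood wins; the weakness noted in the statement is that frame-level visual features fit this discrete generative assumption far less cleanly than speech does.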