2020
DOI: 10.3390/s20185151
British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language

Abstract: In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive a best topology for each. The Vision model is implemented by a Convolutional Neural Network and optimised A…
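The abstract does not spell out the fusion rule, so the following is a minimal sketch of the late-fusion idea it describes, assuming each modality's network independently outputs class probabilities over the 18 BSL gestures; the fusion weights and the late_fusion helper are illustrative assumptions, not the paper's tuned method:

    import numpy as np

    # Late fusion: each modality is classified separately and the
    # predictions are combined afterwards, rather than merging raw inputs.
    def late_fusion(p_vision, p_leap, w_vision=0.6, w_leap=0.4):
        """Weighted average of per-modality class probabilities."""
        fused = w_vision * p_vision + w_leap * p_leap
        return fused / fused.sum(axis=-1, keepdims=True)  # renormalise

    # Stand-in softmax outputs for one sample over 18 gesture classes.
    rng = np.random.default_rng(0)
    p_vision = rng.dirichlet(np.ones(18))  # CNN (vision) model output
    p_leap = rng.dirichlet(np.ones(18))    # Leap Motion model output
    print(late_fusion(p_vision, p_leap).argmax())  # fused prediction

A weighted probability average is only one common late-fusion scheme; the point of the approach is that the two networks are trained and benchmarked separately before their predictions are combined.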


Cited by 64 publications (27 citation statements)
References 33 publications
“…- Sentiment Analysis of Text (Bird et al. 2019b): the participant requests to instantiate a sentiment analysis classification algorithm for a given text. - Sign Language Recognition (Bird et al. 2020a): the participant requests to converse via sign language; a camera and Leap Motion are activated for multi-modality classification. Sign language is now accepted as input to the task-classification layer of the chatbot.…”
Section: Y2
Citation type: mentioning
confidence: 99%
“…• Sign Language Recognition [42]: the participant requests to converse via sign language; a camera and Leap Motion are activated for multi-modality classification. Sign language is now accepted as input to the task-classification layer of the chatbot.…”
Section: Proposed Approach
Citation type: mentioning
confidence: 99%
“…A fine-tuning methodology yields a simpler, faster-learning network compared to a conventional network initialized from scratch. Researchers have identified the potential of neural-network-based TL [59, 60]. In this paper, a TL approach based on a multi-stack deep BiLSTM network, as shown in Figure 6, is implemented to recognize dynamic double-hand ASL words.…”
Section: Methods
Citation type: mentioning
confidence: 99%
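The cited paper's exact architecture is not reproduced here, but a hedged sketch of the pattern the statement describes (a multi-stack BiLSTM where a pretrained lower stack is frozen and the upper layers are fine-tuned) might look as follows in Keras; the sequence length, feature count, class count, and layer sizes are all assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    TIMESTEPS, FEATURES, N_CLASSES = 60, 42, 30  # illustrative shapes

    # Three stacked bidirectional LSTM layers over hand-motion sequences.
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

    # Transfer-learning step: freeze the lowest BiLSTM stack so its
    # pretrained low-level motion features are kept; fine-tune the rest.
    model.layers[0].trainable = False

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Freezing only the lowest stack is one of several reasonable fine-tuning schedules; how many stacks to freeze is a hyperparameter of the transfer.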
“…They trained a model on ImageNet and transferred the knowledge to Bangla sign alphabet recognition [12]. Observing similar hand gestures in British and American sign languages, Bird et al. [13] conducted transfer learning from British to American sign language, based on a color modality and a bone modality (finger joints). However, the background of the color modality may affect transfer learning in this method.…”
Section: Figure
Citation type: mentioning
confidence: 99%
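For context, the transfer-learning pattern attributed to Bird et al. [13] (train on one sign language, reuse the features for another) commonly reduces to freezing the early convolutional feature extractor and retraining the classifier head. A sketch under that assumption, with hypothetical layer sizes and class counts (18 source classes, 24 target classes):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn(n_classes):
        # Small illustrative CNN; not the paper's benchmarked topology.
        return models.Sequential([
            layers.Input(shape=(64, 64, 3)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])

    bsl_model = build_cnn(n_classes=18)  # assume trained on BSL gestures
    # bsl_model.fit(bsl_images, bsl_labels, ...)  # source-task training

    # Transfer to ASL: freeze the convolutional feature extractor and
    # attach a fresh classification head for the target classes.
    for layer in bsl_model.layers[:-2]:
        layer.trainable = False
    asl_out = layers.Dense(24, activation="softmax")(
        bsl_model.layers[-2].output)
    asl_model = models.Model(bsl_model.input, asl_out)
    asl_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy")

As the citing statement notes, the background captured by the color modality can hinder such transfer, which is what motivates the complementary bone (finger-joint) modality.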