Gestural flick input-based non-touch interface for character input
2019
DOI: 10.1007/s00371-019-01758-8

Cited by 8 publications (5 citation statements)
References 17 publications
“…To obtain the cepstrum coefficient C_i, we applied a Fourier transform to the logarithm of the power spectrum; the signal cepstrum is therefore obtained as given by Equation (9).…”
Section: Feature Extraction of EMG Sensors
confidence: 99%
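The computation described in this excerpt can be illustrated with a short sketch. The following minimal NumPy example derives cepstral coefficients by transforming the log of the power spectrum; the function name, window length, number of coefficients, and the small eps floor are assumptions for illustration and do not reproduce the cited paper's exact implementation.

```python
# Minimal sketch (assumptions noted above), not the cited paper's code:
# cepstral coefficients C_i obtained by transforming the log power spectrum.
import numpy as np

def cepstral_coefficients(x, n_coeffs=12, eps=1e-12):
    """Return the first n_coeffs cepstrum coefficients C_i of signal x."""
    spectrum = np.fft.rfft(x)            # frequency-domain representation
    power = np.abs(spectrum) ** 2        # power spectrum
    log_power = np.log(power + eps)      # eps avoids log(0)
    cepstrum = np.fft.irfft(log_power)   # transform of the log power spectrum
    return cepstrum[:n_coeffs]           # leading coefficients C_i

# Example: coefficients for one (synthetic) window of an EMG channel
emg_window = np.random.randn(256)
C = cepstral_coefficients(emg_window)
```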
“…Human-computer interaction (HCI) is an ever-evolving area of technology that provides new methods of communication between people and computers in the modern world [1,2]. Several new assistive methods, such as virtual reality [3], sign language recognition [4,5], speech recognition [6], visual analysis [7], brain activity [8], and touch-free writing [9], have emerged in recent years to achieve this goal. Hand gesture recognition is important for performing various visual tasks while keeping the interaction unobtrusive.…”
Section: Introduction
confidence: 99%
“…Moreover, handwriting is vital for identifying patients in medical fields, such as detecting Parkinson's disease and autism in children [6]. Researchers are investigating handwritten characters from many sources, such as paper documents [7], images, touch and non-touch screens [8], and other devices. Such data are easy to collect, less stressful for humans, and suitable for classification.…”
Section: Introduction
confidence: 99%
“…However, Kinect can only locate hand joint points, which makes capturing hand details difficult. After locating the hand using the depth image information obtained by Kinect, the system must segment the hand region, then select and extract effective features, to complete the recognition of hand gestures [10,11]. Gesture recognition using data obtained from Kinect improves the recognition accuracy, robustness, and naturalness of the interaction compared with peripheral equipment such as a general depth camera.…”
Section: Introduction
confidence: 99%
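The pipeline outlined in this excerpt (locate the hand joint, segment the hand region from the depth image, extract features, classify) can be sketched as below. This is an illustrative sketch only: the window size, depth margin, toy features, and generic classifier interface are assumptions and do not reproduce the cited work's method.

```python
# Hedged sketch of a depth-based hand gesture pipeline (assumed parameters),
# not the cited work's implementation.
import numpy as np

def segment_hand(depth_img, hand_xy, hand_depth, depth_margin=80, window=60):
    """Crop a window around the tracked hand joint and keep pixels whose
    depth is within depth_margin (mm) of the joint depth."""
    x, y = hand_xy
    patch = depth_img[max(y - window, 0):y + window,
                      max(x - window, 0):x + window]
    mask = np.abs(patch.astype(np.int32) - hand_depth) < depth_margin
    return patch * mask

def extract_features(hand_patch):
    """Toy features: segmented area, mean depth, and spatial spread."""
    mask = hand_patch > 0
    area = mask.sum()
    mean_depth = hand_patch[mask].mean() if area else 0.0
    ys, xs = np.nonzero(mask)
    spread = (np.std(xs) + np.std(ys)) if area else 0.0
    return np.array([area, mean_depth, spread])

def recognize(hand_patch, classifier):
    """classifier is any fitted model exposing predict(), e.g. scikit-learn."""
    return classifier.predict(extract_features(hand_patch).reshape(1, -1))[0]
```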