2016 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES)
DOI: 10.1109/iecbes.2016.7843469

Speech control of robotic hand augmented with 3D animation using neural network

Cited by 5 publications (4 citation statements)
References 14 publications
“…For fast data transmission between the SRF and the host computer, the selected baud rate is 115,200 bit/s. Based on previous research, the virtual hand, i.e. a 3D animation of the robotic hand, can run simultaneously with the robotic hand [6,7]. The 3D CAD model of the SRF shown in figure 1(a) is exported from SolidWorks to SimMechanics First Generation.…”
Section: SRF with 3D Animation (mentioning)
confidence: 99%
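The excerpt above describes a host-to-hand serial link at 115,200 bit/s. Below is a minimal host-side sketch, assuming a pyserial connection; the port name and the GRASP command frame are illustrative placeholders, not details from the paper.

```python
# Hypothetical sketch of the host-side serial link described above,
# assuming pyserial at the quoted 115,200 bit/s. The port name and
# command format are illustrative assumptions, not from the paper.
import serial

def send_grasp_command(port: str = "/dev/ttyUSB0",
                       command: bytes = b"GRASP\n") -> bytes:
    # Open the link at the baud rate quoted in the citation statement.
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link:
        link.write(command)    # send one command frame to the SRF
        return link.readline() # read the hand's acknowledgement, if any
```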
“…Electric-powered prosthetic hands can receive commands from users in one or more ways, such as push-button, joystick, keyboard, text, electroencephalography (EEG) [9], electroneurography (ENG) [10][11][12], electromyography (EMG) [13][14][15][16][17][18][19][20], vision [21][22][23], and speech [24][25][26][27][28]. Among these, electromyography is the most convenient for amputees.…”
Section: Introduction (mentioning)
confidence: 99%
“…Ismail et al. designed a speech-based controller for a robotic hand using 13 selected features, comprising eight frequency-domain and five time-domain features, together with a multilayer perceptron (MLP). They performed training and testing in the same room to reduce the influence of external noise [24]. Vijayaragavan et al. used Google speech-to-text to control a prosthetic hand from Android devices [26].…”
Section: Introduction (mentioning)
confidence: 99%
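As a rough illustration of the setup attributed to Ismail et al., the sketch below trains a multilayer perceptron on 13-dimensional feature vectors with scikit-learn. The random placeholder features, five-command label set, and single 20-unit hidden layer are assumptions; the paper's actual features and network topology are not reproduced here.

```python
# Minimal sketch of an MLP over 13 speech features (8 frequency-domain
# + 5 time-domain, per the excerpt). Features, labels, and layer sizes
# are placeholder assumptions, not the authors' configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))    # placeholder 13-feature vectors
y = rng.integers(0, 5, size=200)  # e.g. five hand-gesture commands

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print("held-out accuracy:", mlp.score(X_test, y_test))
```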
“…To control a hand exoskeleton, developers used a combination of discrete wavelet transforms and hidden Markov models [28]. Developers used a multilayer perceptron to control a robotic hand with 13 speech features, comprising five time-domain and eight frequency-domain features [29]. To reduce the influence of external noise, they performed training and testing in the same room.…”
Section: Introduction (mentioning)
confidence: 99%
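For the wavelet front end mentioned in [28], a hedged sketch of multi-level DWT feature extraction with PyWavelets follows; the 'db4' wavelet, decomposition depth, and sub-band energy features are assumptions rather than the cited work's exact choices, and the HMM stage is omitted.

```python
# Hedged sketch of a DWT front end: decompose a speech frame into
# sub-bands and keep one energy feature per band, suitable as input
# to a downstream classifier such as an HMM. Wavelet family and depth
# are assumptions, not the cited work's parameters.
import numpy as np
import pywt

def dwt_features(frame: np.ndarray,
                 wavelet: str = "db4", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])  # sub-band energies

frame = np.random.default_rng(0).normal(size=1024)  # placeholder speech frame
print(dwt_features(frame))                          # one feature per sub-band
```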