Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006.
DOI: 10.1109/robot.2006.1642253
A decision fusion classification architecture for mapping of tongue movements based on aural flow monitoring

Abstract: A complete signal processing strategy is presented to detect and precisely recognize tongue movement by monitoring changes in airflow that occur in the ear canal. Tongue movements within the human oral cavity create unique, subtle pressure signals in the ear that can be processed to produce command signals in response to that movement. The strategy developed for the human machine interface architecture includes energy-based signal detection and segmentation to extract ear pressure signals due to tongue movemen…
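The abstract mentions energy-based signal detection and segmentation as the first stage of the pipeline. The paper's exact algorithm is not given here, so the following is only a minimal sketch of the general technique: short-time frame energy compared against a noise-floor estimate (here, a multiple of the median frame energy, an assumed stand-in), with consecutive above-threshold frames merged into candidate event segments.

```python
import numpy as np

def detect_segments(signal, fs, win_ms=25.0, threshold_ratio=4.0):
    """Energy-based detection/segmentation sketch.

    Splits `signal` into non-overlapping frames of `win_ms` milliseconds,
    computes each frame's energy, and marks frames whose energy exceeds
    `threshold_ratio` times the median frame energy (a simple noise-floor
    proxy). Returns (start, end) sample indices of contiguous active runs.
    """
    win = max(1, int(fs * win_ms / 1000.0))
    n_frames = len(signal) // win
    frames = np.reshape(signal[: n_frames * win], (n_frames, win))
    energy = np.sum(frames ** 2, axis=1)
    threshold = threshold_ratio * np.median(energy)
    active = energy > threshold

    # Collapse consecutive active frames into (start, end) sample spans.
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i * win
        elif not flag and start is not None:
            segments.append((start, i * win))
            start = None
    if start is not None:
        segments.append((start, n_frames * win))
    return segments
```

For example, a low-amplitude noise recording containing one sinusoidal burst yields a single segment covering the burst; the extracted segments would then be passed to the classification stage.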


Cited by 5 publications (5 citation statements)
References 8 publications
“…Vaidyanathan et al. have developed an approach based on airflow pressure changes in the ear canal caused by tongue movements. Although a classification accuracy of 97% was achieved with the decision fusion classification algorithm in that study, the patients' listening performance and comfort may be degraded by the microphone attached in the ear canal [17][18][19]. In our study, however, the glossokinetic-potential-based approach relies on simple tongue contacts with the buccal walls and does not affect listening.…”
Section: Introduction (mentioning)
confidence: 62%
“…In our study, however, glossokinetic potential responses on a tongue-machine interface may offer handicapped people natural and easy-to-use control of assistive devices. The other TMI design approach, by Vaidyanathan et al., is based on airflow pressure changes created by tongue movements and therefore attaches a microphone to the ear canal [18][19][20][21][22]. A GKP-based TMI, by contrast, can operate an AT without affecting listening performance, since the signals are acquired over the scalp.…”
Section: Introduction (mentioning)
confidence: 99%
“…Specifically, we have introduced a non-intrusive tongue-movement HMI concept [1][2][3][4][5][6], and shown that tongue movements within the oral cavity create unique pressure signals in the ear (dubbed tongue-movement ear-pressure (TMEP) signals). We have further developed and implemented new pattern classification strategies that have recognized TMEP signals with over 97% accuracy across a range of users [3], hence providing an unobtrusive, completely noninvasive method of controlling peripheral or assist mechanisms through tongue movement.…”
Section: Introduction (mentioning)
confidence: 99%
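The statements above credit the 97% figure to a decision fusion classification architecture. The paper's specific fusion rule is not reproduced on this page, so the following is only an illustrative sketch of one of the simplest fusion schemes, majority voting over the labels produced by independent per-classifier decisions; the function name and tie-breaking behavior are assumptions, not the authors' method.

```python
from collections import Counter

def fuse_decisions(votes):
    """Majority-vote decision fusion sketch.

    Each element of `votes` is one classifier's predicted label for the
    same tongue-movement event (e.g. "left", "right", "up", "down").
    The fused decision is the most common label; ties fall back to the
    first label seen, per Counter.most_common's ordering.
    """
    if not votes:
        raise ValueError("need at least one classifier vote")
    return Counter(votes).most_common(1)[0][0]
```

For instance, if three classifiers vote ("left", "left", "up"), the fused output is "left". Real fusion architectures often weight each classifier's vote by its estimated reliability or combine posterior probabilities instead of hard labels.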