2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC)
DOI: 10.1109/dasc.2016.7778024
Reducing controller workload with automatic speech recognition

Cited by 35 publications (23 citation statements)
References 5 publications
“…Multimodal HMIs combine different interaction modalities, aiming to support a natural [1] and efficient way of human communication [2,3]. Recent research has revealed that reasonable interaction technologies [4] for a CWP should recognize touch, speech, and gaze [5][6][7][8]. In accordance with these findings, the German Aerospace Center (DLR) has developed the multimodal CWP TriControl concept, which combines automatic speech recognition, multitouch gestures with one or multiple fingers on a touch input device, and eye-tracking via infrared sensors located at the bottom of the monitor.…”
Section: Of 26
confidence: 99%
“…Input is predominantly provided manually by mouse, as well as by keyboard. As shown in [30], ASR could be a solution to significantly reduce controllers' workload. The input needed for the radar labels could be extracted directly from the controller-pilot communication.…”
Section: Other Benefitting Applications
confidence: 99%
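The statement above describes extracting radar-label inputs directly from transcribed controller-pilot communication. A minimal sketch of what such an extraction step might look like, assuming a transcribed utterance is already available; the clearance grammar, callsign pattern, and function name here are illustrative assumptions, not the actual ABSR implementation:

```python
# Hypothetical sketch: turning a transcribed controller utterance into a
# structured command suitable for prefilling a radar label.
# Only two clearance types are modeled; real ATC phraseology is far richer.
import re

# Minimal illustrative grammar (assumption, not the AcListant grammar)
PATTERNS = {
    "DESCEND": re.compile(
        r"(?P<callsign>[A-Z]{3}\d+)\s+descend\s+flight\s+level\s+(?P<value>\d+)",
        re.IGNORECASE,
    ),
    "REDUCE": re.compile(
        r"(?P<callsign>[A-Z]{3}\d+)\s+reduce\s+speed\s+(?P<value>\d+)\s+knots",
        re.IGNORECASE,
    ),
}

def extract_command(transcript: str):
    """Return (callsign, command, value) for the first recognized clearance, else None."""
    for command, pattern in PATTERNS.items():
        match = pattern.search(transcript)
        if match:
            return match.group("callsign").upper(), command, int(match.group("value"))
    return None

print(extract_command("DLH123 descend flight level 80"))
# → ('DLH123', 'DESCEND', 80)
```

An utterance matching neither pattern yields `None`, which corresponds to the case where the controller must still enter the label value manually.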
“…DLR and Saarland University have shown in the AcListant® project that command recognition rates of 95% with command recognition error rates below 2% are possible [29]. Its follow-up project, AcListant®-Strips, validated that Assistant Based Speech Recognition (ABSR) can reduce controllers' workload for radar label maintenance by a factor of three [30] and that fuel savings of 60 liters of kerosene per flight are possible [11]. In this exercise, Air Navigation Services of the Czech Republic (ANS CR), DLR, the aviation consultant Integra, and Thales ATM Group concentrated on the safety aspects of ASR when used as an input device for radar label maintenance instead of a mouse.…”
Section: B. Safety Assessment of ASR for Radar Label Maintenance
confidence: 99%
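The two figures quoted above (recognition rate of 95%, error rate below 2%) are distinct metrics rather than complements of one another: a command can also be rejected by the recognizer instead of being misrecognized, which is why the two need not sum to 100%. A minimal sketch of how such rates could be computed from raw counts; the function and parameter names are assumptions for illustration, not taken from [29]:

```python
def command_rates(recognized_correctly: int, recognized_wrongly: int, rejected: int):
    """Compute (recognition_rate, error_rate) over all issued commands.

    Rejected commands count against the recognition rate but not the
    error rate, so the gap between the two is the rejection share.
    """
    total = recognized_correctly + recognized_wrongly + rejected
    recognition_rate = recognized_correctly / total
    error_rate = recognized_wrongly / total
    return recognition_rate, error_rate

# Illustrative counts consistent with the quoted figures:
# 95 correct, 2 wrong, 3 rejected out of 100 commands.
rec, err = command_rates(95, 2, 3)
print(f"recognition {rec:.0%}, error {err:.0%}")
# → recognition 95%, error 2%
```

Under this reading, the remaining 3% of commands are rejections the controller enters manually, which is distinct from the safety-critical case of a wrong value being silently written into the label.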
“…The ABSR system developed by Saarland University (USAAR) and DLR analyses the controller-pilot communication and shows the recognized commands in the radar label directly to the ATCo [2]. As command recognition rates better than 95% were achieved for the Düsseldorf approach area, the controller only needs to manually correct the output of the speech recognizer in fewer than one in twenty cases [3]. This frees additional cognitive resources for the controller, which increases safety.…”
Section: A. Problem
confidence: 99%