2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC)
DOI: 10.1109/dasc.2018.8569879
Semi-supervised Adaptation of Assistant Based Speech Recognition Models for different Approach Areas

Abstract: Air Navigation Service Providers (ANSPs) are replacing paper flight strips with different digital solutions. The commands instructed by air traffic controllers (ATCos) are then available in computer-readable form. However, those systems require manual controller inputs, i.e. ATCos' workload increases. The Active Listening Assistant (AcListant®) project has shown that Assistant Based Speech Recognition (ABSR) is a potential solution to reduce this additional workload. However, the development of an ABSR applic…

Cited by 19 publications (19 citation statements)
References 17 publications
“…Meanwhile, spoken ATC communications from LiveATC.net were collected and transcribed; LiveATC uses VHF receivers similar to the ones that are intended for use. Currently, the call sign detection performance is approximately six times worse than what MALORCA has demonstrated [7,8] (although the contextual information is so far not used). We attribute this to the low signal-to-noise ratio of the collected speech segments.…”
Section: Motivation
confidence: 88%
“…In addition to the well-known language assistants from Apple (Siri), Amazon (Alexa), Samsung (S Voice) and the Google Assistant, less common assistants also exist, such as Cortana from Microsoft and Facebook's M [14]. IVA platforms apply speech recognition as the input interface of their speech assistants [15]. This involves direct human-machine communication, which triggers a defined action by sending a command to the speech assistant [16].…”
Section: Integrated Voice Assistants
confidence: 99%
“…every few seconds) of the airspace and hence can be used to provide a situational context for the ASR engine. Given a dataset of airspace situations (encoded in the radar data) and the corresponding ground-truth commands issued by controllers, we build a Command Prediction Model (CPM) [22,23] that provides a list of plausible commands for a given airspace situation. This list of commands is called the dynamic context and is used for iterative semi-supervised learning.…”
Section: Command Prediction Model
confidence: 99%
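The excerpt above describes a Command Prediction Model (CPM) that maps a radar-derived airspace situation to a list of plausible controller commands, which then serves as the "dynamic context" for the recognizer. The following is a minimal, purely illustrative sketch of that idea; the rule thresholds, field names (`callsign`, `altitude_ft`, `distance_nm`), and command phrasing are assumptions for demonstration, not the model described in [22,23].

```python
# Hypothetical sketch of a Command Prediction Model (CPM): given a coarse
# airspace situation, return a list of plausible controller commands that
# could be used as a dynamic context for an ASR engine. All rules and field
# names here are illustrative assumptions, not the cited implementation.

def predict_commands(situation):
    """Return plausible commands for each aircraft in the situation.

    `situation` is a list of dicts with hypothetical keys:
    callsign, altitude_ft (current altitude), distance_nm (to threshold).
    """
    commands = []
    for ac in situation:
        cs = ac["callsign"]
        # Toy rule: far-out, high aircraft get a descent clearance;
        # close-in aircraft get an approach clearance.
        if ac["distance_nm"] > 20 and ac["altitude_ft"] > 7000:
            commands.append(f"{cs} DESCEND 7000 ft")
        elif ac["distance_nm"] <= 20:
            commands.append(f"{cs} CLEARED ILS APPROACH")
        # A speed reduction is plausible for any aircraft being sequenced.
        commands.append(f"{cs} REDUCE 180 kt")
    return commands

# Example radar snapshot (fabricated values for illustration only).
radar_snapshot = [
    {"callsign": "DLH123", "altitude_ft": 12000, "distance_nm": 35},
    {"callsign": "AUA45",  "altitude_ft": 5000,  "distance_nm": 12},
]
dynamic_context = predict_commands(radar_snapshot)
```

In the semi-supervised setting sketched in the excerpt, such a command list would rescore or constrain recognition hypotheses at each radar update, rather than being a fixed grammar.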