Proceedings of the 25th International Conference on Intelligent User Interfaces 2020
DOI: 10.1145/3377325.3377490
User-in-the-loop adaptive intent detection for instructable digital assistant

Abstract: People are becoming increasingly comfortable using Digital Assistants (DAs) to interact with services or connected objects. However, for non-programming users, the available possibilities for customizing their DA are limited and do not include the possibility of teaching the assistant new tasks. To make the most of the potential of DAs, users should be able to customize assistants by instructing them through Natural Language (NL). To provide such functionalities, NL interpretation in traditional assistants sho…

Cited by 2 publications (2 citation statements)
References 34 publications (44 reference statements)
“…The main SDS components involve speech recognition, language understanding, dialogue management, communication with external systems, response generation, and speech output [32]. Similar to SDSs, spoken language understanding (SLU) involves automatic speech recognition, natural language processing, understanding, and synthesis [6,25]. Voice interfaces have followed a more commercial deployment approach [31], like those present in smart speakers, acting as voice agents or virtual assistants and being ever more present in everyday life [41].…”
Section: Challenges in Natural Language Interaction for HRI
confidence: 99%
“…For studies exploring the contribution of the visual modality, we will refer to Alishahi and Fazly (2010) for models operating on image/caption pairs, or Nikolaus et al. (2022) for models operating on videos; see also Chrupała (2022) for a recent review. Similarly, embodied or socially grounded language learning agents have been proposed in Yu and Ballard (2003), Hermann et al. (2017), Lair et al. (2019), and Oudeyer et al. (2019).…”
Section: The Environment Model: From What Is Language Learned?
confidence: 99%