Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-3021

LIDA: Lightweight Interactive Dialogue Annotator

Abstract: Dialogue systems have the potential to change how people interact with machines but are highly dependent on the quality of the data used to train them. It is therefore important to develop good dialogue annotation tools which can improve the speed and quality of dialogue data annotation. With this in mind, we introduce LIDA, an annotation tool designed specifically for conversation data. As far as we know, LIDA is the first dialogue annotation system that handles the entire dialogue annotation pipeline from ra…

Cited by 7 publications (8 citation statements) · References 9 publications

Citation statements (ordered by relevance):
“…Previous work required a minimum of 100 positive examples for each code [47], while participants in our evaluation, on average, only created 133 (MAXQDA) or 182 (Cody) positive examples overall. Our participant Kelly reported the most interaction with ML suggestions, while others barely noticed them. We believe that the barriers we set for Cody to providing ML suggestions, namely defining cut-off values for prediction confidence and requiring labels to be predicted correctly for all test instances, helped filter out many wrong suggestions.…”
Section: Working With Automated Suggestions (mentioning)
confidence: 75%
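The statement above describes a two-part gate before machine-generated label suggestions are surfaced: a confidence cut-off and a requirement that the label be predicted correctly on all of its test instances. The sketch below is only an illustration of that idea under assumptions; the names Suggestion, filter_suggestions, and predict are hypothetical and are not taken from Cody, MAXQDA, or LIDA.

```python
# Minimal sketch (hypothetical, not the cited tool's implementation) of gating
# ML suggestions by a confidence cut-off plus an all-correct test-instance check.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Suggestion:
    segment_id: int
    label: str
    confidence: float  # model's predicted probability for this label


def passes_test_set(label: str,
                    test_texts: List[str],
                    test_labels: List[str],
                    predict: Callable[[str], str]) -> bool:
    """True only if `predict` assigns `label` to every test instance
    annotated with that label (the all-or-nothing rule)."""
    relevant = [t for t, y in zip(test_texts, test_labels) if y == label]
    return all(predict(t) == label for t in relevant)


def filter_suggestions(suggestions: List[Suggestion],
                       cutoff: float,
                       test_texts: List[str],
                       test_labels: List[str],
                       predict: Callable[[str], str]) -> List[Suggestion]:
    """Keep a suggestion only if its confidence clears the cut-off and its
    label passes the held-out test instances."""
    return [s for s in suggestions
            if s.confidence >= cutoff
            and passes_test_set(s.label, test_texts, test_labels, predict)]
```

Both checks are deliberately conservative: a suggestion that is confident but whose label has ever been misclassified on the held-out instances is withheld, which matches the quoted rationale that such barriers filter out many wrong suggestions.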
“…Another dialogue annotation tool is called LIDA (Collins et al 2019). The authors argue that the quality of a dataset has a significant effect on the quality of a dialogue system, hence, a good dialogue annotation tool is essential to create the best annotated dialogue dataset.…”
Section: Datasets For Task-oriented Dialogue Systems (mentioning)
confidence: 99%
“…TWIST (Pluss, 2012) and LIDA (Collins et al, 2019) are intended for dialogue annotation, which has been mostly focused on task-oriented dialogue for dialogue systems. Task-oriented dialogues already suppose a predefined topic and predefined roles (e.g., customer support tasks) and little noise.…”
Section: Annotation Tools (mentioning)
confidence: 99%