Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL), 1999
DOI: 10.3115/977035.977085

μ-TBL lite

Abstract: This short paper describes, and in fact gives the complete source for, a tiny Prolog program implementing a flexible and fairly efficient Transformation-Based Learning (TBL) system.
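The abstract names Transformation-Based Learning as the technique the Prolog program implements. A minimal sketch of the TBL idea, written in Python rather than the paper's Prolog: start from a baseline tagging, then greedily learn rewrite rules that each fix more errors than they introduce. The single rule template (retag a token based on the previous tag) and the toy data are invented for illustration and are not the paper's code.

```python
def apply_rule(rule, tags):
    """Apply one rule (from_tag, to_tag, prev_tag): retag every token
    whose current tag is from_tag and whose left neighbour carries
    prev_tag. Contexts are read from the pre-application sequence."""
    frm, to, prev = rule
    return [to if i > 0 and tags[i - 1] == prev and t == frm else t
            for i, t in enumerate(tags)]

def gain(rule, tags, gold):
    """Net errors fixed by the rule (fixed minus newly introduced)."""
    new = apply_rule(rule, tags)
    return (sum(n == g for n, g in zip(new, gold))
            - sum(t == g for t, g in zip(tags, gold)))

def tbl_learn(tags, gold, min_gain=1):
    """Greedy TBL loop: repeatedly pick the candidate rule with the
    highest training-set gain and apply it, until no rule improves
    accuracy by at least min_gain."""
    tags, rules = list(tags), []
    while True:
        # Candidate rules are instantiated from the current errors.
        candidates = {(tags[i], gold[i], tags[i - 1])
                      for i in range(1, len(tags)) if tags[i] != gold[i]}
        scored = [(gain(r, tags, gold), r) for r in candidates]
        if not scored:
            break
        best_gain, best = max(scored)
        if best_gain < min_gain:
            break
        tags = apply_rule(best, tags)
        rules.append(best)
    return rules, tags

baseline = ["DT", "NN", "NN", "DT", "NN"]   # the verb is mistagged "NN"
gold     = ["DT", "NN", "VB", "DT", "NN"]
rules, final = tbl_learn(baseline, gold)
# learns the rule ('NN', 'VB', 'NN') and reaches the gold tagging
```

The learned rules form an ordered list, so at tagging time they are replayed in the order they were acquired; that ordering is what lets later rules correct the over-applications of earlier ones.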

Cited by 4 publications (5 citation statements), published 2011–2023. References 0 publications.
“…Because of their flexibility, machine learning approaches allow for multiple different cues. While word sequences, encoded through a bag‐of‐words approach, are dominant (e.g., Grau et al., 2004; Ribeiro et al., 2015; Sridhar et al., 2009; Surendran & Levow, 2006), syntactic cues (Di Eugenio et al., 2010; Lager & Zinovjeva, 1999; Novielli & Strapparava, 2009; Verbree et al., 2006) and semantic cues, through a latent semantic analysis (Di Eugenio et al., 2010; Novielli & Strapparava, 2009), have also been used. Some machine learning studies did not consider context (Ang et al., 2005; Grau et al., 2004; Novielli & Strapparava, 2009), but most others used contextual cues, such as surface information of previous utterances (Ribeiro et al., 2015; Sridhar et al., 2009), cues on the speakers of the utterances (Di Eugenio et al., 2010; Lager & Zinovjeva, 1999; Sridhar et al., 2009), and cues related to the organization of the discourse, through encoding turns with a hierarchical structure, such as subdialogs (Di Eugenio et al., 2010).…”
Section: Identifying Cues in Existing Dialog Act Classification Studies (confidence: 99%)
“…In both these studies, the models were evaluated on the Map Task corpus where the instruction giver and instruction follower might use words differently. In (other) machine learning and deep learning approaches, speaker cues have typically been encoded using a feature that tells the model who the speaker is (Bothe, Weber et al., 2018 ; Cerisara et al., 2018; Di Eugenio et al., 2010) or whether the previous utterance is uttered by the same or different speaker from the current utterance (e.g., Lager & Zinovjeva, 1999; Liu et al., 2017; Yano et al., 2021; Zhao & Kawahara, 2019). Interestingly, many deep learning models, despite encoding contextual cues such as the structure of turns directly into the model, do not take into account who uttered which utterance, and still achieve a good performance.…”
Section: Identifying Cues in Existing Dialog Act Classification Studies (confidence: 99%)
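The citation statement above describes encoding speaker cues as a feature marking whether the previous utterance came from the same speaker. A minimal sketch of such a feature extractor, assuming a dialog given as (speaker, utterance) pairs; the feature names are invented for illustration:

```python
def speaker_features(dialog):
    """dialog: list of (speaker, utterance) pairs.
    Returns one feature dict per utterance, including a binary
    speaker_change cue comparing against the previous turn."""
    feats, prev = [], None
    for speaker, utterance in dialog:
        feats.append({
            "speaker": speaker,                               # who is talking
            "speaker_change": prev is not None and speaker != prev,
            "words": utterance.lower().split(),               # bag-of-words cue
        })
        prev = speaker
    return feats
```

Such dicts would typically be vectorized and fed to a classifier alongside the lexical features; the point of the cue is that dialog acts like answers and acknowledgements correlate with speaker alternation.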