Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2019

L2F/INESC-ID at SemEval-2019 Task 2: Unsupervised Lexical Semantic Frame Induction using Contextualized Word Representations

Abstract: Building large datasets annotated with semantic information, such as FrameNet, is an expensive process. Consequently, such resources are unavailable for many languages and specific domains. This problem can be alleviated by using unsupervised approaches to induce the frames evoked by a collection of documents. That is the objective of the second task of SemEval 2019, which comprises three subtasks: clustering of verbs that evoke the same frame and clustering of arguments into both frame-specific slots and semantic roles. […]
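
As a rough illustration of the kind of pipeline the abstract alludes to, and not the authors' actual system, the Python sketch below pulls a contextualized vector for a target verb occurrence from a BERT-style model via the Hugging Face transformers API; such vectors could then be grouped by a clustering algorithm so that verbs evoking the same frame end up together. The model name, the verb_embedding helper, and the mean-pooling choice are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the authors' submitted system): obtain a
# contextualized vector for a target verb occurrence from a BERT-style model.
# Occurrences that evoke the same frame could then be clustered over such
# vectors. Model name, helper name, and mean-pooling are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def verb_embedding(sentence: str, verb: str) -> torch.Tensor:
    """Mean-pool the hidden states of the word pieces covering `verb`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    pieces = tokenizer.tokenize(verb)
    # naive search for the verb's word pieces inside the tokenized sentence
    for i in range(len(tokens) - len(pieces) + 1):
        if tokens[i:i + len(pieces)] == pieces:
            return hidden[i:i + len(pieces)].mean(dim=0)
    raise ValueError(f"verb {verb!r} not found in {sentence!r}")

vec = verb_embedding("The company acquired a small startup.", "acquired")
print(vec.shape)  # torch.Size([768]) for bert-base models
```

Mean-pooling over word pieces is only one plausible way to get a single vector per verb occurrence; the papers discussed below describe their own choices.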

Cited by 13 publications (17 citation statements); references 32 publications.

“…Task B.2 employed hand-crafted features, a method to encode syntactic information, and again an agglomerative clustering method. Ribeiro et al. (2019) also reported results for all subtasks, using techniques similar to those reported in the other two submitted papers. Ribeiro et al. (2019) also used the bidirectional neural language model BERT.…”
Section: System Descriptions (mentioning)
confidence: 62%
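
For concreteness, here is a minimal, hypothetical sketch of the agglomerative-clustering step mentioned in this snippet, using scikit-learn; the hand-crafted argument features shown are invented placeholders, not the features of any submitted system.

```python
# Hypothetical sketch of the agglomerative-clustering step described above,
# using scikit-learn. The "hand-crafted" argument features below are invented
# placeholders, not the features used by any of the submitted systems.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# one row per argument occurrence, e.g.
# [dependency-relation id, position relative to the verb, head POS id]
X = np.array([
    [0, -1, 1],   # nsubj, before the verb, noun head
    [0, -2, 1],
    [1,  1, 1],   # dobj, after the verb, noun head
    [1,  2, 1],
    [2,  4, 2],   # oblique, after the verb, proper-noun head
])

clustering = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=2.0,   # cut the dendrogram instead of fixing k
    linkage="average",
)
labels = clustering.fit_predict(X)
print(labels)   # one cluster id per argument occurrence
```
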
“…Ribeiro et al. (2019) also reported results for all subtasks, using techniques similar to those reported in the other two submitted papers. Ribeiro et al. (2019) also used the bidirectional neural language model BERT. Task A employed the contextualized word representations proposed in (Ustalov et al., 2018) and Biemann's clustering algorithm (Biemann, 2006).…”
Section: System Descriptions (mentioning)
confidence: 62%
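
Biemann's clustering algorithm cited above is Chinese Whispers (Biemann, 2006). The sketch below is a generic, unoptimized implementation over a toy weighted similarity graph; the edge weights and node names are assumptions for illustration and do not reproduce the submitted system.

```python
# Generic sketch of Chinese Whispers (Biemann, 2006): each node repeatedly
# adopts the label with the highest total edge weight among its neighbours.
# The toy graph, weights, and node names are invented for illustration.
import random
from collections import defaultdict

def chinese_whispers(edges, iterations=20, seed=0):
    """edges: iterable of (node_a, node_b, weight); returns {node: label}."""
    rng = random.Random(seed)
    graph = defaultdict(dict)
    for a, b, w in edges:
        graph[a][b] = w
        graph[b][a] = w
    labels = {node: node for node in graph}   # every node starts in its own class
    nodes = list(graph)
    for _ in range(iterations):
        rng.shuffle(nodes)                    # randomized update order
        for node in nodes:
            scores = defaultdict(float)
            for neighbour, weight in graph[node].items():
                scores[labels[neighbour]] += weight
            if scores:
                labels[node] = max(scores, key=scores.get)
    return labels

# toy similarity graph over verb occurrences
edges = [("buy#1", "purchase#1", 0.9), ("buy#1", "acquire#1", 0.8),
         ("purchase#1", "acquire#1", 0.7), ("run#1", "sprint#1", 0.85)]
print(chinese_whispers(edges))
```
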
“…While this allows comparing the average displacement due to semantic change across words, it does not give us a good sense of the overall structure of word vectors. In addition, the embedding space produced by BERT has been analyzed in terms of syntactic features, such as parse trees (Coenen et al., 2019; Jawahar et al., 2019), part of speech, verbs, and arguments (Shi et al., 2019; Ribeiro et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
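
As a loose illustration of the "average displacement" measure this snippet refers to, the sketch below averages cosine distances between per-word vectors drawn from two hypothetical corpora or time slices; the vectors are random stand-ins, not real embeddings, and the function names are assumptions.

```python
# Loose sketch of the "average displacement" idea: mean cosine distance between
# a word's vectors taken from two corpora or time slices. The vectors below are
# random stand-ins, not real embeddings.
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def average_displacement(vecs_t1: dict, vecs_t2: dict) -> float:
    shared = vecs_t1.keys() & vecs_t2.keys()
    return float(np.mean([cosine_distance(vecs_t1[w], vecs_t2[w]) for w in shared]))

rng = np.random.default_rng(0)
vocab = ["cell", "record", "gay"]
slice_a = {w: rng.normal(size=768) for w in vocab}
slice_b = {w: rng.normal(size=768) for w in vocab}
print(average_displacement(slice_a, slice_b))
```
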