Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.83

A Frame-based Sentence Representation for Machine Reading Comprehension

Abstract: Sentence representation (SR) is the most crucial and challenging task in Machine Reading Comprehension (MRC). MRC systems typically utilize only the information contained in the sentence itself, while human beings can also leverage their semantic knowledge. To bridge this gap, we propose a novel Frame-based Sentence Representation (FSR) method, which employs frame semantic knowledge to facilitate sentence modelling. Specifically, different from existing methods that only model lexical units (LUs), Frame Representat…
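The abstract describes building a sentence representation from frame semantic structure rather than from lexical units alone. The following is a minimal sketch of that idea, not the paper's actual architecture: the function names, the use of a frame-semantic parse as (target, frame-element) index lists, and the mean-pooling aggregation are all assumptions made for illustration.

```python
import numpy as np

def frame_representation(token_vecs, target_idx, element_idx):
    """Represent one evoked frame from its lexical-unit (target) tokens and its
    frame-element tokens. Mean pooling here is an assumed aggregation choice."""
    span = target_idx + element_idx
    return np.mean([token_vecs[i] for i in span], axis=0)

def frame_based_sentence_repr(token_vecs, frames):
    """Pool all frame representations in a sentence into one vector.
    `frames` is a list of (target_idx, element_idx) pairs as produced by a
    FrameNet-style frame-semantic parser (hypothetical input format)."""
    frame_vecs = [frame_representation(token_vecs, t, e) for t, e in frames]
    if not frame_vecs:                      # no frame evoked: fall back to tokens
        return np.mean(token_vecs, axis=0)
    return np.mean(frame_vecs, axis=0)

# Toy usage: 6 tokens with 8-dim embeddings, one frame whose target is token 2
# and whose frame elements cover tokens 0-1 and 3-5.
vecs = np.random.rand(6, 8)
sent_vec = frame_based_sentence_repr(vecs, frames=[([2], [0, 1, 3, 4, 5])])
```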

Cited by 22 publications (18 citation statements)
References 21 publications
“…Thus, instead of simply mixing them up at the sentence level, like the above two methods, we design a novel Location-wise Fusion Method (LFM) to coherently integrate both syntax and frame semantic information at the token level, obtaining a better sentence representation, shown in Figure 4.…”

Results table spilled into the quote (two evaluation columns, headers not recoverable):
(Hermann et al., 2015)                          46.3   41.9
Neural Reasoner (Peng et al., 2015)             47.6   45.6
Parallel-Hierarchical (Trischler et al., 2016)  74.58  71.00
Reading Strategies (Sun et al., 2018)           81.7   82.0
BERT+DCMN+ (Zhang et al., 2019)                 85.0   86.5
XLNet+DCMN+ (Zhang et al., 2019)                86.2   86.6
FSR (Guo et al., 2020)                          …
Section: Location-wise Fusion Methods (LFM)
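The statement above only says that syntax and frame-semantic information are integrated position by position rather than as whole-sentence vectors. A minimal sketch of such token-level fusion is given below; the gating mechanism, module name, and dimensions are assumptions, not the citing paper's actual LFM design.

```python
import torch
import torch.nn as nn

class LocationWiseFusion(nn.Module):
    """Hedged sketch of token-level ("location-wise") fusion: at every position,
    combine the contextual, syntactic, and frame-semantic vectors with a learned
    gate instead of mixing sentence-level representations."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(3 * dim, dim)
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, h_ctx, h_syn, h_frame):
        # All inputs: (batch, seq_len, dim), aligned position by position.
        cat = torch.cat([h_ctx, h_syn, h_frame], dim=-1)
        g = torch.sigmoid(self.gate(cat))            # per-position gate
        return g * self.proj(cat) + (1 - g) * h_ctx  # fused token representations

# Usage: fuse three aligned 768-dim streams for a batch of 4 sequences of 32 tokens.
fusion = LocationWiseFusion(768)
out = fusion(torch.randn(4, 32, 768), torch.randn(4, 32, 768), torch.randn(4, 32, 768))
```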
“…The input module takes in the source context x and the external feature text x_sf, i.e., the syntactic context x_s and the frame semantic context x_f. In particular, the syntactic context x_s is produced by replacing words with their dependency labels, while the frame semantic context x_f is produced by replacing words with frames and frame elements (Guo et al., 2020). Then BERT (Devlin et al., 2018) is employed to encode the source context x into a vector g_x.…”
Section: Syntax and Frame Semantics Labeling
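A minimal sketch of the labeling-and-encoding step described in this statement follows. It assumes the HuggingFace transformers library; the toy sentence, the label inventories, the fallback for words outside any frame, and the [CLS]-pooling choice are all illustrative assumptions rather than the citing paper's exact pipeline.

```python
from transformers import AutoTokenizer, AutoModel  # assumes HuggingFace transformers
import torch

def build_contexts(tokens, dep_labels, frame_labels):
    """Build the two external feature texts described above:
    x_s replaces every word with its dependency label, and
    x_f replaces every word with its frame / frame-element label
    (words outside any frame are kept as-is; that fallback is an assumption)."""
    x_s = " ".join(dep_labels)
    x_f = " ".join(f if f else w for w, f in zip(tokens, frame_labels))
    return x_s, x_f

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(text):
    """Encode a context string into a vector (here: the [CLS] hidden state)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state[:, 0]

# Hypothetical example sentence and labels.
tokens = ["John", "bought", "French", "fries"]
deps = ["nsubj", "root", "amod", "obj"]
frames = ["Buyer", "COMMERCE_BUY", None, "Goods"]
x_s, x_f = build_contexts(tokens, deps, frames)
g_x = encode(" ".join(tokens))  # source-context vector g_x, as in the quote
```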
“…a buyer, French fries are a kind of food. This gives us a clue that semantic knowledge can help machine readers understand the meaning of a sentence [13].…”
Section: Introduction
“…To our knowledge, some studies have incorporated external semantic information to help their models better understand natural language text. Guo et al. [13] employ semantic information by modeling lexical units based on the FrameNet [16] knowledge base, whose semantic label form is completely different from the PropBank frame [17]. We adopt the PropBank-style semantic frame because it can cover every verb in a sentence.…”
Section: Introduction
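To make the contrast in that statement concrete, the snippet below shows the two label forms side by side for one sentence. The example sentence and the exact annotations are illustrative and not drawn from either paper's data; FrameNet uses named frames with named frame elements, while PropBank anchors numbered arguments on a verb sense.

```python
sentence = "John bought French fries yesterday."

# FrameNet-style annotation: a named frame evoked by the target word,
# with named frame elements.
framenet_style = {
    "frame": "Commerce_buy",
    "target": "bought",
    "elements": {"Buyer": "John", "Goods": "French fries", "Time": "yesterday"},
}

# PropBank-style annotation: a verb sense with numbered arguments,
# available for every verb in the sentence.
propbank_style = {
    "predicate": "buy.01",
    "arguments": {"ARG0": "John", "ARG1": "French fries", "ARGM-TMP": "yesterday"},
}
```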