Proceedings of the 2nd Workshop on Machine Reading for Question Answering 2019
DOI: 10.18653/v1/d19-5825

Question Answering Using Hierarchical Attention on Top of BERT Features

Abstract: Machine Comprehension (MC) tests the ability of the machine to answer a question about a given passage. It requires modeling complex interactions between the passage and the question. Recently, attention mechanisms have been successfully extended to machine comprehension. In this work, the question and passage are encoded using BERT language embeddings to better capture the respective representations at a semantic level. Then, attention and fusion are conducted horizontally and vertically across layers at diff…
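The abstract describes a pipeline in which BERT supplies contextual features for the question and the passage, and trainable attention and fusion layers on top of those features predict the answer span. The sketch below illustrates that idea in PyTorch under stated assumptions: it uses HuggingFace Transformers, a single cross-attention step instead of the paper's horizontal/vertical multi-layer fusion, and an illustrative class name (HierAttQA); none of this is the authors' implementation.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast  # assumed: HuggingFace Transformers

class HierAttQA(nn.Module):
    # Illustrative span-prediction head on top of frozen BERT-Base features.
    def __init__(self, model_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        for p in self.bert.parameters():            # language-model weights stay frozen
            p.requires_grad = False
        # One cross-attention step: passage tokens attend to question tokens.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)   # fuse original and attended features
        self.span = nn.Linear(hidden, 2)            # start / end logits per passage token

    def forward(self, p_ids, p_mask, q_ids, q_mask):
        with torch.no_grad():                       # BERT acts as a frozen feature extractor
            p = self.bert(p_ids, attention_mask=p_mask).last_hidden_state
            q = self.bert(q_ids, attention_mask=q_mask).last_hidden_state
        attended, _ = self.cross_attn(p, q, q, key_padding_mask=(q_mask == 0))
        fused = torch.tanh(self.fuse(torch.cat([p, attended], dim=-1)))
        start_logits, end_logits = self.span(fused).unbind(dim=-1)
        return start_logits, end_logits

A possible usage, again purely illustrative:

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
q = tok("Who proposed hierarchical attention?", return_tensors="pt")
p = tok("The model was proposed by the Alexandria University team.", return_tensors="pt")
start, end = HierAttQA()(p["input_ids"], p["attention_mask"], q["input_ids"], q["attention_mask"])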

Cited by 3 publications (5 citation statements)
References 7 publications
“…The submission is built based on the provided BERT baselines. (Osama et al, 2019) The submission from Alexandria University uses the BERT-Base model to provide feature representations. Unlike other models which allowed finetuning of the language model parameters during training, this submission only trains model parameters associated with the question answering task, while keeping language model parameters frozen.…”
Section: CLER (mentioning)
confidence: 99%
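The quoted statement notes that only the parameters tied to the question answering task are trained, while the BERT language-model parameters stay frozen. Below is a minimal sketch of that training setup, reusing the illustrative HierAttQA class from the sketch above; the optimizer choice and learning rate are assumptions, not details reported by the authors.

model = HierAttQA()                                    # illustrative class sketched earlier
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)       # assumed optimizer and learning rate
# Gradients flow only through the attention, fusion, and span layers; BERT stays fixed.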
“…Harbin Institute of Technology; HLTC (Su et al., 2019), Hong Kong University of Science & Technology; BERT-cased-whole-word, Aristo @ AI2; CLER (Takahashi et al., 2019), Fuji Xerox Co., Ltd.; Adv. Train (Lee et al., 2019), 42Maru and Samsung Research; BERT-Multi-Finetune, Beijing Language and Culture University; PAL IN DOMAIN, University of California Irvine; HierAtt (Osama et al., 2019), Alexandria University.
(Longpre et al., 2019): 82.3 68.5 66.9 74.6 70.8
FT XLNet: 82.9 68.0 66.7 74.4 70.5
HLTC (Su et al., 2019): 81.0 65.9 65.0 72.9 69.0
BERT-cased-whole-word: 79.4 61.1 61.4 71.2 66.3
CLER (Takahashi et al., 2019): 80.2 62.7 62.5 69.7 66.1
Adv. Train (Lee et al., 2019): 76.8 57.1 57.9 66.5 62.…”
Section: FT XLNet (mentioning)
confidence: 99%
“…Beijing Language and Culture University; PAL IN DOMAIN, University of California Irvine; HierAtt (Osama et al., 2019), Alexandria University … which is a 10.7 point absolute improvement over our baseline, and 11.5 and 10.0 point improvements, respectively, on Split II (with the development portions provided) and Split III datasets (completely hidden to the participants).…”
Section: FT XLNet (mentioning)
confidence: 67%