Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1099

Dynamic Entity Representation with Max-pooling Improves Machine Reading

Abstract: We propose a novel neural network model for machine reading, DER Network, which explicitly implements a reader building dynamic meaning representations for entities by gathering and accumulating information around the entities as it reads a document. Evaluated on a recent large-scale dataset (Hermann et al., 2015), our model exhibits better results than previous research, and we find that max-pooling is suited for modeling the accumulation of information on entities. Further analysis suggests that our model ca…
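The core mechanism the abstract describes, accumulating per-entity evidence via max-pooling, can be sketched in a few lines of Python. This is a toy illustration under assumed names and vectors, not the paper's actual architecture (which derives mention contexts from a learned encoder):

import numpy as np

def update_entity_state(state, context_vec):
    # Accumulate new contextual evidence about an entity by element-wise
    # max-pooling: the entity representation keeps the strongest feature
    # value seen at any of its mentions so far.
    return np.maximum(state, context_vec)

# Toy walk-through: an entity mentioned three times in a document.
d = 4
state = np.full(d, -np.inf)  # no evidence yet
for mention_context in [np.array([0.2, -1.0, 0.5, 0.0]),
                        np.array([0.9,  0.1, -0.3, 0.4]),
                        np.array([-0.5, 0.7, 0.2, 0.1])]:
    state = update_entity_state(state, mention_context)

print(state)  # [0.9 0.7 0.5 0.4] -- the per-dimension maxima across mentions

The design intuition is that max-pooling lets the representation retain the most salient information from each mention regardless of the order in which mentions appear.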

Cited by 35 publications (64 citation statements); references 9 publications. Citing publications span 2016 to 2022.
“…Other Entity-Centric Study. There are several studies that consider the notion of an entity in other areas: text comprehension (Kobayashi et al., 2016; Henaff et al., 2016) and language modeling (Ji et al., 2017).…”
Section: Related Work
confidence: 99%
“…What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. Results of models marked with † are taken from (Hermann et al., 2015), results of models marked with ‡ are taken from (Hill et al., 2015), and results marked with ⋄ are taken from (Kobayashi et al., 2016). Performance of the ‡ and ⋄ models was evaluated only on the CNN dataset.…”
Section: Evaluation Methods
confidence: 99%
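The ensemble scheme quoted above is simply element-wise averaging of the members' answer distributions followed by an argmax. A minimal sketch, with a hypothetical array of per-model probabilities:

import numpy as np

def ensemble_answer(member_probs):
    # Average the answer probabilities predicted by the ensemble members
    # and return the index of the candidate with the highest mean probability.
    # `member_probs` is a hypothetical (n_members, n_candidates) array.
    avg = np.mean(member_probs, axis=0)
    return int(np.argmax(avg))

probs = np.array([[0.6, 0.3, 0.1],   # model 1
                  [0.4, 0.5, 0.1],   # model 2
                  [0.7, 0.2, 0.1]])  # model 3
print(ensemble_answer(probs))  # 0 -- highest mean probability (0.567)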
“…Character-level embeddings are a typical way deep learning models handle this issue, whether used on their own [14], or in conjunction with word-level embedding Recursive Neural Networks (RNNs) [22], or in conjunction with an n-gram model [6]. Another approach is to learn new word embeddings on-the-fly from context [16].…”
Section: Representing Code As a Graph
confidence: 99%
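As a toy illustration of the character-level fallback mentioned in this quote: one simple composition is to pool character vectors so that any spelling, including an out-of-vocabulary token, maps to some representation. The embeddings and pooling choice here are assumptions for illustration; the cited systems learn richer compositions (e.g., character CNNs or RNNs) instead.

import numpy as np

# Hypothetical 8-dimensional character embeddings (learned in practice,
# randomly initialized here for the sketch).
rng = np.random.default_rng(0)
char_emb = {c: rng.standard_normal(8) for c in "abcdefghijklmnopqrstuvwxyz"}

def word_vector_from_chars(word):
    # Compose a word vector from its characters by element-wise max-pooling,
    # so even a word never seen during training gets a representation.
    return np.max([char_emb[c] for c in word.lower()], axis=0)

print(word_vector_from_chars("unseenword").shape)  # (8,)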