Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1062
A Convolutional Neural Network for Modelling Sentences

Abstract: The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does n…
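The pooling operation named in the abstract can be made concrete. The following is a minimal NumPy sketch, not the paper's implementation: k-max pooling keeps the k largest activations per feature row while preserving their original sequence order, and the dynamic schedule shrinks k with network depth as a fraction of the sentence length, floored at a fixed top-level value k_top. Function names are illustrative.

```python
import math
import numpy as np

def k_max_pooling(x, k):
    """k-max pooling over the time axis.

    Keeps the k largest activations in each row of x (shape:
    n_features x seq_len), preserving their original order.
    """
    idx = np.argsort(x, axis=1)[:, -k:]  # indices of the k largest values
    idx = np.sort(idx, axis=1)           # restore left-to-right order
    return np.take_along_axis(x, idx, axis=1)

def dynamic_k(layer, total_layers, seq_len, k_top):
    """Dynamic k schedule: a fraction of the sentence length that
    decreases with depth, never falling below k_top."""
    return max(k_top, math.ceil((total_layers - layer) / total_layers * seq_len))
```

For example, with three convolutional layers, a sentence of length 18, and k_top = 3, the first layer pools to k = 12 and the second to k = 6; the final layer always pools to k_top.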

Cited by 2,892 publications (1,745 citation statements)
References 26 publications
“…They further extend the method to a syntactic treebank annotated with sentiment labels (Socher et al., 2013). More recently, Kalchbrenner et al. (2014) use a dynamic pooling network to include the structure…”
Section: Related Work
confidence: 99%
“…Deep neural networks have shown great promise at capturing salient features for these complex tasks (Mikolov et al., 2013b; Severyn and Moschitti, 2015a). Particularly successful for sentiment classification were Convolutional Neural Networks (CNN) (Kim, 2014; Kalchbrenner et al., 2014; Severyn and Moschitti, 2015a; Severyn and Moschitti, 2015b; Johnson and Zhang, 2015), on which our work builds. These networks typically have a large number of parameters and are especially effective when trained on large amounts of data.…”
Section: Introduction
confidence: 99%
“…Although the paragraph vector did not work efficiently, our model is a tentative model that does not have interaction between relational, paragraph, and word embeddings as in (Denil et al., 2015), which is one immediate challenge. Another challenge is the replacement of the paragraph vector model with a convolutional sentence vector model (Kalchbrenner et al., 2014) or an RNN-LSTM model (Le and Zuidema, 2015). The former approach is related to supervised rather than unsupervised learning.…”
Section: Results
confidence: 99%