2020
DOI: 10.1007/978-3-030-45439-5_37

Recognizing Semantic Relations: Attention-Based Transformers vs. Recurrent Models

Abstract: Automatically recognizing an existing semantic relation (such as "is a", "part of", "property of", "opposite of", etc.) between two arbitrary words (phrases, concepts, etc.) is an important task affecting many information retrieval and artificial intelligence tasks including query expansion, common-sense reasoning, question answering, and database federation. Currently, two classes of approaches exist to classify a relation between words (concepts) X and Y: (1) path-based and (2) distributional. While the path-…

Cited by 1 publication (1 citation statement) · References 19 publications
“…There has been some research on different ways of integrating LSTMs and CNNs, since the two methods for building neural networks are somewhat complementary. Roussinov et al. (2020) studied the use of LSTMs (or pre-trained transformers) with convolution filters to predict the lexical relations for pairs of concepts, for example, Tom Cruise is an actor or a rat has a tail. Most similar to our work is the study by Zhou et al. (2016), which also used a stacked architecture with an LSTM followed by two-dimensional pooling to obtain a fixed-length representation for text classification tasks.…”
Section: Related Work
confidence: 99%
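To make the cited architecture concrete, below is a minimal PyTorch sketch of the kind of stacked model the citation statement describes: an LSTM encoder over the token sequence, convolution filters applied to the LSTM hidden states, and pooling to a fixed-length vector fed to a relation classifier. All class names, layer sizes, and the relation label set are illustrative assumptions, not the actual configuration used by Roussinov et al. (2020) or Zhou et al. (2016).

```python
import torch
import torch.nn as nn

class RelationClassifierSketch(nn.Module):
    """Illustrative LSTM -> convolution -> pooling relation classifier."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128,
                 num_filters=64, kernel_size=3, num_relations=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM contextualizes the tokens of the sentence.
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        # Convolution filters slide over the LSTM hidden states.
        self.conv = nn.Conv1d(2 * hidden, num_filters, kernel_size,
                              padding=kernel_size // 2)
        # Linear layer maps the pooled, fixed-length vector to relation labels
        # (e.g. "is a", "part of", "property of", "opposite of").
        self.out = nn.Linear(num_filters, num_relations)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                       # (batch, seq_len, 2*hidden)
        h = h.transpose(1, 2)                     # (batch, 2*hidden, seq_len)
        c = torch.relu(self.conv(h))              # (batch, num_filters, seq_len)
        pooled = c.max(dim=2).values              # max-pool over time -> fixed length
        return self.out(pooled)                   # (batch, num_relations)

# Example: score a tokenized sentence such as "a rat has a tail".
model = RelationClassifierSketch(vocab_size=10000)
logits = model(torch.randint(0, 10000, (1, 12)))  # one sentence of 12 token ids
```

Max-pooling over the time dimension is used here as a simple stand-in for the pooling step; the cited work by Zhou et al. (2016) uses two-dimensional pooling over the stacked LSTM outputs to obtain the fixed-length representation.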