2020
DOI: 10.1016/j.patcog.2019.107159
Encoding features robust to unseen modes of variation with attentive long short-term memory

Cited by 2 publications (2 citation statements)
References 13 publications
“…LSTM+LR. It is the combination of two typical methods: long short-term memory (LSTM) model [46] and logistic regression (LoR) model [47]. The LSTM is a sequential modelling method, and its role is to extract semantic features for posts.…”
Section: Experimental Settings
Confidence: 99%
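The LSTM+LR baseline quoted above pairs an LSTM encoder (which reads a post token by token and emits a fixed-length semantic feature vector) with a logistic regression classifier on top of that vector. A minimal sketch of that pipeline follows; the dimensions, random weights, and function names are hypothetical illustrations, not the cited papers' actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(xs, Wx, Wh, b, hidden):
    """Run a single-layer LSTM over a sequence of input vectors and
    return the final hidden state as the sequence's feature vector."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = Wx @ x + Wh @ h + b               # all four gate pre-activations at once
        i = sigmoid(z[0 * hidden:1 * hidden])  # input gate
        f = sigmoid(z[1 * hidden:2 * hidden])  # forget gate
        o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
        g = np.tanh(z[3 * hidden:4 * hidden])  # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def logistic_regression(h, w, b):
    """Binary classifier applied to the LSTM feature vector."""
    return sigmoid(w @ h + b)

# Toy example: encode a 5-step "post" of 8-dim embeddings, then classify.
rng = np.random.default_rng(0)
d_in, d_hid = 8, 16                        # hypothetical sizes
seq = rng.normal(size=(5, d_in))
Wx = rng.normal(scale=0.1, size=(4 * d_hid, d_in))
Wh = rng.normal(scale=0.1, size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
feat = lstm_encode(seq, Wx, Wh, b, d_hid)  # semantic feature vector
prob = logistic_regression(feat, rng.normal(size=d_hid), 0.0)
```

In practice both stages would be trained jointly by gradient descent on labeled posts; the sketch only shows the forward pass that the quoted description implies.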
“…Recurrent neural networks (RNNs), as one of the most widespread neural network architectures, mainly focus on a wide variety of applications where data is sequential, such as text classification [38], language modeling [60], speech recognition [64], machine translation [56]. On the other hand, RNNs have also shown their notable success in image processing [17], including but not limited to text recognition in scenes [55], facial expression recognition [6], visual question answering [4], handwriting recognition [52]. As a well-known architecture of RNNs, LSTM [27] has been widely utilized to encode various input (e.g., image, text, audio and video) to improve the recognition performance [60].…”
Section: Introduction
Confidence: 99%