2017
DOI: 10.1007/978-3-319-55753-3_28
Multi-Granularity Neural Sentence Model for Measuring Short Text Similarity

Cited by 12 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting). References 20 publications.
“…Models in this category typically enhance the inputs by extending them from words to phrases/n-grams or sentences, apply a single-granularity architecture over each input form, and aggregate all granularities for the final relevance output. For example, in [94], a CNN and an LSTM are applied to obtain character-level, word-level, and sentence-level representations of the inputs; the representations at each level are then interacted and aggregated by the evaluation function g to produce the final relevance score. Similar ideas can be found in Conv-KNRM [84] and MIX [95].…”
Section: Single-granularity (mentioning)
Confidence: 99%
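The statement above describes the generic multi-granularity recipe: encode each input at several granularities, interact the two sides level by level, and let an evaluation function g fuse the per-level signals into one relevance score. The PyTorch sketch below is a minimal illustration of that scheme; the layer sizes, the cosine interaction, and the linear form of g are assumptions for exposition, not the actual architecture of [94].

```python
# Hypothetical sketch of the multi-granularity matching scheme quoted
# above (not the code of [94]): a CNN encodes character- and word-level
# views, an LSTM encodes the sentence-level view, and an evaluation
# function g aggregates the per-level interactions into one score.
import torch
import torch.nn as nn


class MultiGranularityMatcher(nn.Module):
    def __init__(self, word_vocab, char_vocab, dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, dim)
        self.word_emb = nn.Embedding(word_vocab, dim)
        # CNN encoders for the character- and word-level views
        self.char_cnn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.word_cnn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        # LSTM encoder for the sentence-level view
        self.sent_lstm = nn.LSTM(dim, dim, batch_first=True)
        # Evaluation function g: fuses the three per-level interaction
        # signals into a single relevance score (assumed linear here)
        self.g = nn.Linear(3, 1)

    def encode(self, chars, words):
        # Max-pool CNN feature maps over time to fixed-size vectors
        c = self.char_cnn(self.char_emb(chars).transpose(1, 2)).max(dim=2).values
        w = self.word_cnn(self.word_emb(words).transpose(1, 2)).max(dim=2).values
        s, _ = self.sent_lstm(self.word_emb(words))
        return c, w, s[:, -1]  # character, word, sentence vectors

    def forward(self, chars1, words1, chars2, words2):
        lhs = self.encode(chars1, words1)
        rhs = self.encode(chars2, words2)
        # Interact the two inputs level by level (cosine similarity is
        # one simple choice), then aggregate all granularities with g
        sims = [nn.functional.cosine_similarity(a, b) for a, b in zip(lhs, rhs)]
        return self.g(torch.stack(sims, dim=1)).squeeze(-1)
```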
“…The hybrid model can capture multiple layers of feature information for short text representation. Huang et al. 62 proposed a multi-granularity neural sentence model that uses a CNN to extract character-level and word-level features and an LSTM to model sentence-level semantic representations, obtaining fine-grained features, semantic representations, and important contextual and grammatical features. Zheng et al. 26 introduced a hybrid bidirectional recurrent convolutional neural network, which captures context and long-text information with a BiLSTM.…”
Section: Semantic Similarity Measures (mentioning)
Confidence: 99%
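As a rough illustration of the hybrid recurrent-convolutional design attributed to Zheng et al. above, the sketch below runs a BiLSTM first to contextualize tokens and then a CNN over the contextual states; all dimensions, the pooling choice, and the cosine scoring are assumptions, not the authors' implementation.

```python
# Assumed-shape sketch of a hybrid bidirectional recurrent convolutional
# encoder: the BiLSTM captures long-range context, and the CNN then
# extracts local features from the contextualized states.
import torch
import torch.nn as nn


class BiLSTMConvEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # BiLSTM: contextualizes each token with left and right context
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        # CNN over BiLSTM states: local n-gram features taken in context
        self.conv = nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        ctx, _ = self.bilstm(self.emb(tokens))  # (batch, seq_len, 2*hidden)
        feats = self.conv(ctx.transpose(1, 2))  # (batch, hidden, seq_len)
        return feats.max(dim=2).values          # max-pool to one vector


# Usage: encode two short texts and score them by cosine similarity
enc = BiLSTMConvEncoder(vocab_size=10000)
x1 = torch.randint(0, 10000, (2, 12))
x2 = torch.randint(0, 10000, (2, 15))
score = nn.functional.cosine_similarity(enc(x1), enc(x2))
```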
“…In addition, following the mainstream DL methods, we summarize the DL similarity measures. These DL similarity measures are divided into three categories: general models, 21,22 attention models, 23,24 and hybrid models. 25,26 The main contributions of this article are summarized as follows.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Based on the experiments, they concluded that with word/n-gram overlap, word alignment by METEOR, and BLEU and edit-distance scores, one can extract semantic information from a Twitter dataset at low cost. In [20], the authors presented a novel deep learning model for detecting paraphrases and semantic similarity among short texts such as tweets. They also studied how features at different levels (character, word, and sentence) could be extracted to model sentence representations.…”
Section: Short Text Similarity (mentioning)
Confidence: 99%
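The low-cost lexical signals mentioned in the statement above, word/n-gram overlap and edit distance, are straightforward to compute; a toy version follows. METEOR alignment and BLEU scoring would normally come from external toolkits and are omitted here.

```python
# Toy illustration of low-cost lexical similarity features: n-gram
# overlap (Jaccard) and a classic Levenshtein edit distance.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def ngram_overlap(a, b, n=2):
    """Jaccard overlap between the n-gram sets of two token lists."""
    x, y = ngrams(a, n), ngrams(b, n)
    return len(x & y) / len(x | y) if x | y else 0.0


def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


s1 = "the cat sat on the mat".split()
s2 = "a cat sat on a mat".split()
print(ngram_overlap(s1, s2))                      # bigram Jaccard overlap
print(edit_distance(" ".join(s1), " ".join(s2)))  # character edit distance
```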
“…Their measured cross-entropy loss was 0.29568. Likewise, a deep learning model was presented in [20] for detecting paraphrases and semantic similarity among short texts such as tweets. To evaluate their model they used a Twitter dataset, and their results showed that character-level features play an important role in finding similarity between tweets.…”
Section: Figure (mentioning)
Confidence: 99%