2019
DOI: 10.1162/tacl_a_00278

Multiattentive Recurrent Neural Network Architecture for Multilingual Readability Assessment

Abstract: We present a multiattentive recurrent neural network architecture for automatic multilingual readability assessment. This architecture considers raw words as its main input, but internally captures text structure and informs its word attention process using other syntax- and morphology-related datapoints, known to be of great importance to readability. This is achieved by a multiattentive strategy that allows the neural network to focus on specific parts of a text for predicting its reading level. We conducted…
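As a rough illustration of the word-attention strategy described in the abstract, the sketch below (a minimal PyTorch-style example; the module name, tensor shapes, and layer sizes are assumptions, not taken from the paper) shows how attention scores over raw word representations can be informed by embedded POS/morphology tags.

```python
# Hedged sketch: word-level attention whose scores are computed from each
# word's recurrent hidden state together with an embedding of its
# POS/morphology tags. All names and dimensions are illustrative.
import torch
import torch.nn as nn


class TagInformedWordAttention(nn.Module):
    def __init__(self, hidden_dim: int, tag_dim: int, attn_dim: int = 64):
        super().__init__()
        # Score each word from its hidden state plus its tag embedding.
        self.proj = nn.Linear(hidden_dim + tag_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, word_states, tag_embs):
        # word_states: (batch, seq_len, hidden_dim) from a recurrent encoder
        # tag_embs:    (batch, seq_len, tag_dim) embedded POS/morphology tags
        u = torch.tanh(self.proj(torch.cat([word_states, tag_embs], dim=-1)))
        alpha = torch.softmax(self.score(u).squeeze(-1), dim=-1)  # (batch, seq_len)
        # Weighted sum of word states -> one sentence vector per batch item.
        return torch.bmm(alpha.unsqueeze(1), word_states).squeeze(1)


# Toy usage with random tensors.
attn = TagInformedWordAttention(hidden_dim=128, tag_dim=16)
sent_vec = attn(torch.randn(2, 20, 128), torch.randn(2, 20, 16))
print(sent_vec.shape)  # torch.Size([2, 128])
```

Feeding the tag embeddings only into the scoring function keeps raw words as the main input while letting syntactic and morphological signals steer where the model attends.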

Cited by 22 publications (13 citation statements). References 31 publications.
“…Another version of a hierarchical RNN with the attention mechanism was proposed by Azpiazu and Pera (2019). Their system, named Vec2Read, is a multi-attentive RNN capable of leveraging hierarchical text structures with the help of word and sentence level attention mechanisms and a custom-built aggregation mechanism.…”
Section: Neural Classification Approaches
Mentioning, confidence: 99%
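The hierarchical side of such a system can be sketched in the same spirit: sentence vectors (for example, the outputs of a word-level attention like the one above) are re-encoded, attended over, and aggregated into a single document vector for reading-level classification. This is a hedged sketch under those assumptions, not the exact Vec2Read aggregation mechanism; layer names and sizes are illustrative only.

```python
# Hedged sketch of hierarchical aggregation: sentence vectors are re-encoded
# with a recurrent layer, attended over, and pooled into one document vector
# that feeds a reading-level classifier.
import torch
import torch.nn as nn


class SentenceLevelAggregator(nn.Module):
    def __init__(self, sent_dim: int, num_levels: int, attn_dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(sent_dim, sent_dim, batch_first=True)
        self.proj = nn.Linear(sent_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1, bias=False)
        self.classifier = nn.Linear(sent_dim, num_levels)

    def forward(self, sent_vecs):
        # sent_vecs: (batch, num_sentences, sent_dim)
        states, _ = self.encoder(sent_vecs)
        alpha = torch.softmax(self.score(torch.tanh(self.proj(states))).squeeze(-1), dim=-1)
        doc_vec = torch.bmm(alpha.unsqueeze(1), states).squeeze(1)  # (batch, sent_dim)
        return self.classifier(doc_vec)                             # reading-level logits


logits = SentenceLevelAggregator(sent_dim=128, num_levels=5)(torch.randn(2, 12, 128))
print(logits.shape)  # torch.Size([2, 5])
```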
“…Thus, R. Balyan et al. [3] showed that applying machine learning methods increased accuracy by more than 10% as compared to classic readability metrics (e.g., the Flesch-Kincaid formula). To date, a number of studies have confirmed the effectiveness of various machine learning techniques for text difficulty estimation, such as support vector machines (SVM) [36,39], random forests [26], and neural networks [2,7,35].…”
Section: Related Work
Mentioning, confidence: 99%
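For comparison, the "classic readability metrics" mentioned in this excerpt are fixed formulas over surface counts. The Flesch-Kincaid grade level, for instance, depends only on word, sentence, and syllable counts; the example numbers below are made up.

```python
# Flesch-Kincaid grade-level formula: a classic readability metric that
# machine-learning approaches (SVM, random forests, neural networks) replace
# with learned models over richer features.
def flesch_kincaid_grade(total_words: int, total_sentences: int, total_syllables: int) -> float:
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)


# Example: a 100-word passage with 8 sentences and 140 syllables.
print(round(flesch_kincaid_grade(100, 8, 140), 2))  # about 5.8 (roughly sixth-grade level)
```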
“…Mohammadi and Khasteh (2019) simplified the process of feature extraction with the GloVe model for word embedding and reinforcement learning for English and Persian readability assessment. Azpiazu and Pera (2019) presented a multiattentive recurrent neural network model that considers raw words as input and incorporates an attention mechanism with POS and morphological tags. Deutsch et al. (2020) proposed a fusion model by adding the numerical output from a transformer to the linguistic features as input into SVM classifiers for readability prediction.…”
Section: Automatic Readability Assessment
Mentioning, confidence: 99%
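A minimal sketch of the fusion idea summarized above, assuming a pooled transformer embedding per document and a small hand-crafted feature vector; the array shapes, feature choices, and kernel are illustrative, not the exact setup of Deutsch et al. (2020).

```python
# Hedged sketch: concatenate a dense transformer-based text representation
# with hand-crafted linguistic features and train an SVM classifier on the
# combined vector. Inputs here are random placeholders.
import numpy as np
from sklearn.svm import SVC

# Assume these come from elsewhere: one row per document.
transformer_embeddings = np.random.rand(200, 768)   # e.g. pooled transformer output
linguistic_features = np.random.rand(200, 25)       # e.g. POS ratios, sentence length
reading_levels = np.random.randint(0, 5, size=200)  # gold reading-level labels

X = np.hstack([transformer_embeddings, linguistic_features])
clf = SVC(kernel="rbf").fit(X, reading_levels)
print(clf.predict(X[:3]))
```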
“…While neural network models take word embeddings as input, they in general discard the linguistic features traditionally used in machine learning models (Deutsch et al., 2020). If ever incorporated, linguistic features such as POS and morphological tags are only used to guide the attention mechanism for the embedding representation of the text (Azpiazu and Pera, 2019). Pre-trained models such as BERT (Devlin et al., 2019) learn dense representations of text by informing the models with semantically neighboring words, sentences, or context.…”
Section: Introduction
Mentioning, confidence: 99%