2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC)
DOI: 10.1109/dsc.2019.00069
A Simple but Effective Way to Improve the Performance of RNN-Based Encoder in Neural Machine Translation Task

Cited by 5 publications (3 citation statements) · References 16 publications
“…RNN is a method that is often used in sequential data processing such as text processing and others [28]. An RNN represents a type of neural machine translation (NMT) system with three layers: an initial layer that assigns each word to a vector, like a word embedding or a one-hot word index; a looping hidden layer that continually computes and modifies the hidden state as it processes each word; and a final layer that predicts the likelihood of upcoming words while retaining the current hidden state [29].…”
Section: Training Corpus Using RNN-GRU (mentioning)
confidence: 99%
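The three-layer structure described in this excerpt (an embedding layer mapping each word to a vector, a recurrent hidden layer that updates its hidden state word by word, and an output layer predicting the likelihood of upcoming words) can be sketched compactly. The snippet below is a minimal, illustrative PyTorch sketch, not the cited paper's implementation; the class name `RNNEncoderLM` and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class RNNEncoderLM(nn.Module):
    """Illustrative three-layer RNN: embedding -> recurrent hidden layer -> next-word softmax."""
    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)        # maps each word index to a vector
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # recomputes the hidden state at every word
        self.output = nn.Linear(hidden_dim, vocab_size)             # scores the likelihood of the next word

    def forward(self, token_ids: torch.Tensor):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        outputs, hidden = self.rnn(embedded)      # per-step hidden states and the final hidden state
        logits = self.output(outputs)             # (batch, seq_len, vocab_size) next-word scores
        return logits, hidden                     # the final hidden state is retained for later use

# Hypothetical usage with a tiny vocabulary
model = RNNEncoderLM(vocab_size=1000)
dummy_batch = torch.randint(0, 1000, (2, 7))      # two sentences of seven word indices each
logits, hidden = model(dummy_batch)
```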
“…The Encoder-Decoder layer consists of a learning network that uses the Recurrent Neural Network (RNN) method [11]. RNN is a method often used for processing sequential data, such as text [12]. One variant developed from the RNN is the Long Short-Term Memory (LSTM), which has a relatively complex structure and uses a memory cell to connect each input, from the first one onward.…”
Section: Introduction (unclassified)
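As a companion to the excerpt above, the sketch below shows a minimal LSTM-based encoder-decoder in PyTorch, where the LSTM's memory cell carries information forward across time steps. It is an illustrative assumption, not the architecture of the cited work; the class name `Seq2Seq` and the dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal LSTM encoder-decoder sketch; all sizes are illustrative assumptions."""
    def __init__(self, src_vocab: int, tgt_vocab: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_dim)
        # The LSTM keeps both a hidden state and a memory cell that carries
        # information across time steps, linking the first input to later ones.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.generator = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor):
        _, (h, c) = self.encoder(self.src_embed(src_ids))            # final hidden and cell states
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), (h, c))   # decoder initialized from the encoder
        return self.generator(dec_out)                               # per-step target-vocabulary logits
```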
“…The BLEU metric is designed to measure how close the generated output is to the reference by matching variable-length phrases of the machine-translation output against the reference translation. The basic metric requires a brevity-penalty calculation, computed with Equations (12) and (13).…”
Section: BLEU Score (unclassified)
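The excerpt refers to the cited paper's Equations (12) and (13), which are not reproduced here. The sketch below instead uses the standard BLEU brevity-penalty definition (1 if the candidate is longer than the reference, otherwise exp(1 - r/c)) together with modified n-gram precisions; the function names and the unsmoothed precision handling are illustrative assumptions.

```python
import math
from collections import Counter

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """Standard BLEU brevity penalty: 1 if the candidate is longer than the
    reference, otherwise exp(1 - r/c) to penalize overly short outputs."""
    if candidate_len > reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

def bleu(candidate: list, reference: list, max_n: int = 4) -> float:
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions, scaled by the brevity penalty."""
    log_precision_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(count, ref_ngrams[ng]) for ng, count in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_precision_sum += math.log(max(overlap, 1e-9) / total) / max_n
    bp = brevity_penalty(len(candidate), len(reference))
    return bp * math.exp(log_precision_sum)

# Hypothetical usage on whitespace-tokenized sentences
score = bleu("the cat sat on the mat".split(), "the cat is on the mat".split())
```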