2018 International Conference on Artificial Intelligence and Data Processing (IDAP) 2018
DOI: 10.1109/idap.2018.8620899
Spelling Correction Using Recurrent Neural Networks and Character Level N-gram

Cited by 7 publications (4 citation statements)
References 14 publications
“…Combined with other methods, they have many applications, such as spell checking (e.g. in search engines) [12], [13], word correction [14], [15], text categorization [16], or word-based sentiment classification [17]. One advantage of the n-gram method is that it is language independent.…”
Section: Related Work
confidence: 99%
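The language-independent character n-gram comparison described in the excerpt above can be sketched as follows. Function names and the choice of Dice similarity are illustrative, not taken from the cited papers:

```python
def char_ngrams(word, n=3, pad="#"):
    """Extract overlapping character n-grams from a word,
    padded so boundary characters are represented too."""
    padded = pad * (n - 1) + word + pad * (n - 1)
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def ngram_similarity(a, b, n=3):
    """Dice coefficient over the two words' n-gram sets:
    a simple, language-independent closeness score in [0, 1]."""
    sa, sb = set(char_ngrams(a, n)), set(char_ngrams(b, n))
    return 2 * len(sa & sb) / (len(sa) + len(sb))
```

Because the method operates on raw character sequences rather than vocabulary or morphology, the same code works unchanged for any language with a segmentable script, which is the advantage the excerpt points to.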
See 1 more Smart Citation
“…Combined together with other methods they have many various applications, like: spell checking (e.g. in search engines) [12], [13], word correction [14], [15], text categorization [16] or word based sentiment classification [17]. One advantage of the n-gram method is that it is language independent.…”
Section: Related Workmentioning
confidence: 99%
“…The algorithm proposed in [21] is a combination of character n-grams with a neural network. N-grams and a recurrent neural network (LSTM) are used for spell checking of the Punjabi language [22] and for the spelling correction process in Turkish [14].…”
Section: Related Work
confidence: 99%
“…The Long Short-Term Memory model (LSTM) proposed in [14] encodes the input word at the character level and also uses word and POS-tag contexts as features for Indonesian text. Similarly, [15] trained a recurrent neural network with dictionary words. For a given misspelled word, they retrieve a candidate list from that dictionary; the list is then expanded using a character-level bi-gram model and the trained model.…”
Section: Literature Review
confidence: 99%
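A minimal sketch of the bi-gram-based candidate retrieval step this excerpt describes, assuming a simple inverted index and shared-bigram ranking (both illustrative; the cited work's exact model is not specified here):

```python
from collections import defaultdict

def bigrams(word):
    """Character bigrams of a word, with '#' boundary markers."""
    w = f"#{word}#"
    return {w[i:i + 2] for i in range(len(w) - 1)}

def build_index(dictionary):
    """Inverted index: bigram -> set of dictionary words containing it."""
    index = defaultdict(set)
    for word in dictionary:
        for bg in bigrams(word):
            index[bg].add(word)
    return index

def candidates(misspelled, index, top_k=3):
    """Rank dictionary words by how many bigrams they share
    with the misspelled word, keeping the top_k best."""
    counts = defaultdict(int)
    for bg in bigrams(misspelled):
        for word in index.get(bg, ()):
            counts[word] += 1
    return sorted(counts, key=lambda w: -counts[w])[:top_k]
```

The inverted index keeps retrieval cheap: only dictionary words sharing at least one bigram with the misspelling are ever scored, rather than the whole vocabulary.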
“…Zhang and Zhang [21] stated that the task of similarity joining is to find all pairs of strings for which similarities are above a predetermined threshold, where the similarity of two strings is measured by a specific distance function. Kernighan et al. [22] proposed a simplification to restrict the candidate list to words that differ by just one edit operation of the Damerau-Levenshtein edit distance: substitution, insertion, deletion, or replacement of succeeding letters [23].…”
Section: Candidate Generation
confidence: 99%
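The one-edit restriction attributed to Kernighan et al. above can be sketched by enumerating every string exactly one Damerau-Levenshtein operation away from the input. This is a standard construction; the function name and alphabet choice are illustrative:

```python
import string

def one_edit_candidates(word, alphabet=string.ascii_lowercase):
    """All strings one Damerau-Levenshtein operation away from `word`:
    deletion, adjacent transposition, substitution, or insertion."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    substitutes = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + substitutes + inserts)
```

Intersecting this set with a dictionary yields the restricted candidate list: for a length-n word over a 26-letter alphabet the set has roughly 54n + 25 members, which is far cheaper than scanning the whole vocabulary with a full edit-distance computation.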