2024
DOI: 10.1109/access.2024.3390048

Exploring Character Trigrams for Robust Arabic Text Classification: A Comparative Analysis in the Face of Vocabulary Expansion and Misspelled Words

Dorieh Alomari,
Irfan Ahmad

Abstract: Tokenization is an important early step in natural language processing (NLP) tasks. The idea is to split the input sentence into smaller units, called tokens, for further processing. Words are the most commonly used tokens in text classification tasks, but other tokenization schemes, such as subword and character tokens, are also popular. The increasing availability of training corpora has posed challenges for the word tokenization technique, primarily due to vocabulary size expansion. This has underscored the impo…
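
A minimal sketch of the character trigram idea discussed in the abstract, assuming a plain Python illustration (the example strings and function names are hypothetical, not taken from the paper): a sentence yields far fewer word tokens than overlapping character trigrams, and a misspelled word still shares most of its trigrams with the correct spelling, which is one intuition behind their robustness to misspellings and vocabulary growth.

    def word_tokens(text: str) -> list[str]:
        # Word tokenization: split the input sentence on whitespace.
        return text.split()

    def char_trigrams(text: str) -> list[str]:
        # Character trigrams: every overlapping window of three characters.
        return [text[i:i + 3] for i in range(len(text) - 2)]

    print(word_tokens("arabic text"))    # ['arabic', 'text']
    print(char_trigrams("arabic text"))  # ['ara', 'rab', 'abi', 'bic', 'ic ', 'c t', ' te', 'tex', 'ext']

    # A typo changes only the trigrams near the edit, so most features survive:
    a = set(char_trigrams("classification"))
    b = set(char_trigrams("clasification"))  # one 's' dropped
    print(len(a & b) / len(a | b))  # Jaccard overlap = 10/13, about 0.77

Because the trigram vocabulary is bounded by the alphabet size cubed rather than by the training corpus, it does not expand the way a word vocabulary does as more data is added.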

Cited by 0 publications
References 41 publications