2020
DOI: 10.21105/joss.01956
WordTokenizers.jl: Basic tools for tokenizing natural language in Julia

Cited by 6 publications (3 citation statements); References 2 publications

“…Tokenizing is the process of separating each word in a sentence [15]. In this case, the question sent to the chatbot will be processed into words/tokens…”
Section: Pre-processing
Mentioning confidence: 99%
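
The preprocessing step described above, splitting a chatbot question into word tokens, maps onto the package's exported tokenize and split_sentences functions. A minimal sketch in Julia; the sample question is illustrative only:

    using WordTokenizers

    # Split an incoming chatbot question into word tokens.
    question = "Where can I find the course schedule for next semester?"
    tokens = tokenize(question)        # e.g. ["Where", "can", "I", "find", ...]

    # Sentence splitting is available for longer inputs.
    sentences = split_sentences("Hi there. I have a question about enrollment.")

    println(tokens)
    println(sentences)
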
“…The BERT multitask model predicts using the [CLS] representation from BERT (Devlin et al., 2019). We also build an LSTM model (Hochreiter and Schmidhuber, 1997) with GloVe embeddings (Pennington et al., 2014) and twitter-tokenization using the WordTokenizers package (Kaushal et al., 2020)…”
Section: Slot-filling
Mentioning confidence: 99%
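
The twitter-tokenization mentioned above can be reproduced with the package's tweet tokenizer. A hedged sketch, assuming the tokenizer is exported as tweet_tokenize and that set_tokenizer swaps the default used by tokenize (both names should be checked against the package documentation); the sample tweet is invented:

    using WordTokenizers

    tweet = "@JuliaLanguage WordTokenizers.jl is great!! :) #NLP"

    # Call the specialised tweet tokenizer directly (name assumed).
    tweet_tokens = tweet_tokenize(tweet)
    println(tweet_tokens)

    # Or make it the package-wide default so later calls to `tokenize`
    # (e.g. inside a slot-filling pipeline) use the same behaviour.
    set_tokenizer(tweet_tokenize)
    println(tokenize(tweet))
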
“…As computers play an increasing role, there is a pressing desire to be able to manipulate them in a natural way. People can interact with machines in the same way that they interact with each other in everyday life (voice, gestures, and expressions) [8][9][10][11][12]. As a result, there has been an increasing amount of research in recent years on interacting with computers through multiple modalities, such as simple voice, gesture, and expression interaction.…”
Section: Introduction
Mentioning confidence: 99%