2020 | DOI: 10.14569/ijacsa.2020.0111216

Enhancing Convolutional Neural Network using Hu’s Moments

Cited by 8 publications (3 citation statements)
References 26 publications

“…Where f(x, y) is the pixel intensity at (x, y), and (x̄, ȳ) is the centroid of the image [15], [16].…”
Section: Image Segmentation: Canny
confidence: 99%
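The excerpt above refers to the image-moment definitions underlying Hu's moments: the raw moments m_pq = Σ_x Σ_y x^p y^q f(x, y) give the centroid (x̄, ȳ) = (m10/m00, m01/m00), from which translation-invariant central moments are formed. A minimal sketch of that computation in Python with NumPy follows; the synthetic image f and the function names are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def raw_moment(f, p, q):
    """Raw image moment m_pq = sum_x sum_y x^p * y^q * f(x, y)."""
    h, w = f.shape
    y, x = np.mgrid[:h, :w]  # row (y) and column (x) coordinate grids
    return np.sum((x ** p) * (y ** q) * f)

def centroid(f):
    """Image centroid (x_bar, y_bar) = (m10 / m00, m01 / m00)."""
    m00 = raw_moment(f, 0, 0)
    return raw_moment(f, 1, 0) / m00, raw_moment(f, 0, 1) / m00

def central_moment(f, p, q):
    """Central moment mu_pq, made translation-invariant via the centroid."""
    x_bar, y_bar = centroid(f)
    h, w = f.shape
    y, x = np.mgrid[:h, :w]
    return np.sum(((x - x_bar) ** p) * ((y - y_bar) ** q) * f)

# Example: a small synthetic intensity image (hypothetical data).
f = np.zeros((8, 8))
f[2:6, 3:7] = 1.0
print(centroid(f))             # centroid of the bright patch -> (4.5, 3.5)
print(central_moment(f, 2, 0)) # a second-order central moment
```

In practice, OpenCV computes the same quantities and the full set of seven Hu invariants directly via cv2.moments followed by cv2.HuMoments.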
“…Consider a medical robot that can correctly classify a health question and give a potentially lifesaving answer, or a virtual tourist assistant that can distinguish questions about food from questions about historical sites. Such benefits are not merely convenient; they are often significant [8], [9], [10]. However, the complexity of human language, with its subtleties of syntax, semantics, and pragmatics, makes highly accurate question classification very difficult [11], [12].…”
Section: Introduction
confidence: 99%
“…Support Vector Machines, Random Forests, and other machine learning models have been used for this task, but recent developments in deep learning and transformer models such as BERT, RoBERTa, and ELECTRA have shown even stronger performance [13]. These models excel at capturing the meanings and contexts of words and sentences, which is key to question classification [1], [14], [15], [16], [17]. Here, we present a new method that combines three strong components: the ELECTRA model for transformer-based contextual embeddings; Global Vectors for Word Representation (GloVe) for semantically rich word vectors; and Long Short-Term Memory (LSTM) networks for capturing sequence dependencies.…”
Section: Introduction
confidence: 99%
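The excerpt describes an architecture that fuses ELECTRA contextual embeddings, GloVe word vectors, and an LSTM, but the citing paper's exact wiring is not reproduced here. Below is a minimal PyTorch sketch of one plausible fusion under stated assumptions: the dimension sizes (256 matches ELECTRA-small's hidden size), the random embedding table standing in for a pretrained GloVe lookup, and the class count are all illustrative, and the ELECTRA hidden states are assumed to come from a separate frozen encoder.

```python
import torch
import torch.nn as nn

class HybridQuestionClassifier(nn.Module):
    """Hypothetical fusion: per-token ELECTRA states concatenated with
    GloVe vectors, run through a BiLSTM, then a linear classifier head."""

    def __init__(self, electra_dim=256, glove_dim=300, hidden=128,
                 vocab_size=20000, num_classes=6):
        super().__init__()
        # Stand-in for a pretrained GloVe table; in practice you would
        # load real GloVe vectors and freeze or fine-tune them.
        self.glove = nn.Embedding(vocab_size, glove_dim)
        self.lstm = nn.LSTM(electra_dim + glove_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, electra_states, token_ids):
        # electra_states: (batch, seq, electra_dim), e.g. the
        # last_hidden_state of a frozen ELECTRA encoder.
        fused = torch.cat([electra_states, self.glove(token_ids)], dim=-1)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1, :])  # classify from the final timestep

# Toy usage with random tensors standing in for real encoder output.
model = HybridQuestionClassifier()
states = torch.randn(2, 12, 256)        # fake ELECTRA hidden states
ids = torch.randint(0, 20000, (2, 12))  # fake GloVe token ids
logits = model(states, ids)             # shape: (2, num_classes)
```

Classifying from the final LSTM timestep is one design choice among several; mean-pooling over timesteps or attention pooling are common alternatives.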