2020
DOI: 10.1016/j.compeleceng.2020.106718
Performance evaluation of optimized and adaptive neuro fuzzy inference system for predictive modeling in agriculture

Cited by 12 publications (6 citation statements)
References 12 publications
“…Among the investigated optimisers, first-order optimisers, especially Adam, have been dominant, appearing in most DL research. Adam is the best-performing optimiser for training very deep DNN architectures, as validated by numerous researchers [80,81]. However, gradient descent-based optimisers and other adaptive optimisers have also performed remarkably well in different application areas.…”
Section: Discussion
confidence: 98%
“…This leads to an increase in the number of neurons and the number of fuzzy rules in the knowledge base, which in turn requires a larger training sample to configure such a system. Neuro-fuzzy systems that rely on the backpropagation algorithm for training are characterized by low training speed, which makes them ineffective for sequential data processing [11]. These disadvantages can be avoided by using hybrid systems that combine the theory of artificial neural networks, which provides universal approximating properties and the ability to learn, with the theory of fuzzy logic, which gives the system linguistic interpretability [12].…”
Section: Methods for Processing Non-stationary Multivariate Time Series
confidence: 99%
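To make the hybrid neuro-fuzzy idea concrete, here is a minimal NumPy sketch of the zero-order Sugeno inference step that ANFIS-style systems use (membership centers, widths, and rule consequents are invented for illustration): Gaussian membership functions carry the linguistic interpretation, while the rule consequents are the parameters a neural-style learner would fit.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two inputs, two fuzzy labels each ("low"/"high"); parameters are invented.
centers = {"x1": [0.2, 0.8], "x2": [0.3, 0.7]}
width = 0.25

def sugeno_forward(x1, x2, consequents):
    """Zero-order Sugeno inference: weighted average of rule outputs.

    consequents holds one constant output per rule -- the trainable part
    that a neuro-fuzzy system would fit by gradient descent or least squares.
    """
    mf1 = [gauss(x1, c, width) for c in centers["x1"]]
    mf2 = [gauss(x2, c, width) for c in centers["x2"]]
    # Rule firing strengths: product T-norm over all label combinations.
    w = np.array([m1 * m2 for m1 in mf1 for m2 in mf2])
    w_norm = w / w.sum()              # normalize firing strengths
    return float(w_norm @ np.asarray(consequents))

print(sugeno_forward(0.25, 0.65, consequents=[0.1, 0.4, 0.5, 0.9]))
```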
“…For this purpose, we choose the Adam optimizer instead of stochastic gradient descent to update the model weights during training [45]. It combines the advantages of the Adagrad and RMSprop optimizers, which handle sparse gradients and non-stationary settings, respectively. It achieves strong results using only first-order gradients, and therefore has modest computational complexity and memory requirements.…”
Section: Transfer Learning and Fine-tuning
confidence: 99%
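The combination this statement describes is visible directly in Adam's update rule: a momentum-like first-moment average plus an RMSprop-like second-moment scaling, each bias-corrected. Below is a self-contained NumPy sketch of one Adam step (the toy quadratic objective and learning rate are invented for illustration):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update, following Kingma & Ba (2015).

    m tracks the mean of gradients (momentum-like); v tracks the mean of
    squared gradients (RMSprop-like). Bias correction compensates for the
    zero initialization of both moment estimates.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)         # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)         # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2 from a random start.
theta = np.random.randn(3)
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 201):
    grad = 2 * theta                  # gradient of the quadratic
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
print(theta)  # approaches the minimizer at zero
```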
“…At the same time, retinal images with different cataract grades correspond to varying noise. For this purpose, we choose the Adam optimizer instead of stochastic gradient descent to update the model weights during training [45]. It combines the advantages of the Adagrad and RMSprop optimizers, which handle sparse gradients and non-stationary settings, respectively.…”
Section: Ensemble Learning Framework for Cataract Severity Grading
confidence: 99%