2017 International Conference on Networking and Network Applications (NaNA)
DOI: 10.1109/nana.2017.22

Cited by 1 publication
References 11 publications
“…SGD is an iterative optimization algorithm that is often used to optimize the model parameters of machine learning algorithms. SGD is a variant of the gradient descent algorithm and has been applied successfully to text classification [33] and to large-scale sparse machine learning problems in natural language processing [34]–[35]. The gradient is obtained by taking the partial derivatives of a multivariate function with respect to its unknown parameters and forming the vector of those partial derivatives.…”
Section: Fault Classification and Prediction Model Based On Stochastic Gradient Descent
confidence: 99%
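The excerpt above describes SGD's per-sample update: move each parameter against the partial derivative of the loss for a single training example. A minimal sketch for a least-squares linear model, where every name and hyperparameter (`w`, `b`, `lr`, `epochs`) is an illustrative assumption and not from the cited paper:

```python
import random

# Minimal SGD sketch for fitting y ≈ w*x + b by least squares.
# Hyperparameters and names are illustrative, not from the cited paper.
def sgd_fit(samples, lr=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    data = list(samples)          # copy so we can shuffle freely
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)         # "stochastic": one random sample per update
        for x, y in data:
            err = (w * x + b) - y # prediction error on this single sample
            w -= lr * err * x     # partial derivative of 0.5*err**2 w.r.t. w
            b -= lr * err         # partial derivative of 0.5*err**2 w.r.t. b
    return w, b

# Usage: recover w=2, b=1 from noiseless samples of y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
w, b = sgd_fit(data)
```

Unlike full-batch gradient descent, each update here uses the gradient of the loss on one sample, which is what makes the method cheap on large sparse problems at the cost of noisier steps.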