Applied Optimal Control 2018
DOI: 10.1201/9781315137667-8

Singular solutions of optimization and control problems

Cited by 106 publications (221 citation statements) · References 0 publications
“…This backpropagation algorithm utilizes a partial derivative approach to refine the AI execution result, which is implemented in a propositional and symbolic way and is designed to improve the AI self-execution algorithm. [ 24 ] Through the concept of machine learning, AI moves from the stage that was used for the Turing test implementation and mathematical and logical verification to the upper level of real-life use.…”
Section: Deep Learning and Cognitive Science
confidence: 99%
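The "partial derivative approach" the excerpt describes is the core of backpropagation: the error is differentiated with respect to each weight via the chain rule, and the weight is nudged against that gradient. A minimal sketch of one such update for a single sigmoid neuron (all names and values here are illustrative, not from the cited work):

```python
import numpy as np

def sigmoid(z):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(w, x, target, lr=0.1):
    """One gradient-descent update on squared error E = 0.5 * (y - target)^2."""
    y = sigmoid(np.dot(w, x))      # forward pass
    dE_dy = y - target             # partial derivative of error w.r.t. output
    dy_dz = y * (1.0 - y)          # derivative of the sigmoid
    grad = dE_dy * dy_dz * x       # chain rule gives dE/dw
    return w - lr * grad           # refine the weights against the error

# Illustrative input and weights
w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
w_new = backprop_step(w, x, target=1.0)
```

After the update, the neuron's output moves closer to the target, which is the "refinement" of the execution result the statement alludes to.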
“…We used a feed-forward neural network with one hidden layer with ten nodes to impute those values. This graph of interconnected nodes (neurons) is capable of learning by adjusting the weights of the paths connecting its inputs to outputs ( Bryson and Ho 1969 ). In the datasets for patients 3 and 9 several observations are missing for both predictors, and since the neural network is trained using backward propagation of errors, we recode all the values to fall between with missing values coded as 0, such that the weights for these inputs are also shrunk to 0 during back-propagation and effectively no learning is done on those branches.…”
Section: Methods
confidence: 99%
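The recoding trick the excerpt describes relies on a simple property of backpropagation: the gradient of a weight feeding out of an input node is proportional to that input's value, so an input coded as 0 contributes zero gradient and its branch is effectively not trained. A hedged sketch of this property, not the authors' code (the network shape and values are illustrative):

```python
import numpy as np

def hidden_weight_grads(x, delta_hidden):
    """Gradient of the loss w.r.t. input-to-hidden weights.

    For a fully connected layer, dE/dW[i, j] = delta_hidden[i] * x[j],
    i.e. the outer product of the back-propagated error and the input.
    """
    return np.outer(delta_hidden, x)

# Ten hidden nodes, three inputs; the second input is "missing", coded as 0.
rng = np.random.default_rng(0)
x = np.array([0.7, 0.0, -1.2])        # missing value recoded to 0
delta = rng.normal(size=10)           # back-propagated error at the hidden layer

grads = hidden_weight_grads(x, delta)
# The column of gradients for the zero-coded input is identically zero,
# so those weights are never updated: no learning happens on that branch.
```

This is why coding missing values as 0 (after rescaling the observed values away from 0) lets the same network train on incomplete records without the missing entries influencing the fit.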
“…The most popular method for learning in multilayer networks is called backpropagation. It was first invented by Bryson and Ho [ 28 ]. But there are some drawbacks to backpropagation.…”
Section: Methods
confidence: 99%