2012
DOI: 10.1007/978-94-007-4902-3_17
Workspace Identification Using Neural Network for an Optimal Designed 2-DOF Orientation Parallel Device

Cited by 3 publications (4 citation statements)
References 7 publications
“…Neurons in the first layer receive external inputs as follows: (24) provides the starting point for (23). The neural network shown in Figure 6 consists of 2 inputs, three hidden layers with 15 neurons per layer, and an output layer of 8 neurons and 8 outputs, according to (22). The hidden-layer neurons use hyperbolic tangent sigmoid transfer functions (also called activation functions) and the output-layer neurons use linear transfer functions; the network thus preserves both the nonlinear and the linear relationships between its input and output data.…”
Section: Neural Network Architecture
confidence: 99%
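The architecture quoted above (2 inputs, three hidden layers of 15 tanh neurons, 8 linear outputs) can be sketched as a plain NumPy forward pass. This is an illustrative skeleton only: the weights below are randomly initialized stand-ins, not the trained parameters from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes as described in the citation statement:
# 2 inputs -> 15 -> 15 -> 15 (tanh hidden layers) -> 8 linear outputs.
sizes = [2, 15, 15, 15, 8]

# Randomly initialized weights/biases stand in for the trained ones.
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """Propagate an input through the network: hyperbolic tangent on
    the hidden layers, identity (linear) on the output layer."""
    a = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = np.tanh(z) if i < len(weights) - 1 else z  # linear last layer
    return a

y = forward([0.3, -0.7])
print(y.shape)  # (8,)
```

The tanh hidden layers give the network its nonlinear capacity, while the linear output layer lets it reproduce unbounded target values, matching the nonlinearity/linearity property the excerpt describes.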
“…Among the algorithms available for training neural networks are several high-performance methods that can converge ten to one hundred times faster than others; the Levenberg-Marquardt algorithm is considered the fastest for training moderately sized feedforward neural networks (up to several hundred weights) [55], and it also has an efficient implementation in the MATLAB® software. For the neural network training, this work used the Levenberg-Marquardt algorithm; the data used are those described in (22). The dataset was not divided into testing and validation sets because the network is intended to fully learn all the data from the considered portion of the area (AT); the target mse that the network must achieve was set to 1e-6 and the maximum number of epochs to 3000.…”
Section: Neural Network Training
confidence: 99%
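The Levenberg-Marquardt update the excerpt refers to interpolates between gradient descent and Gauss-Newton via a damping factor. A minimal sketch, on a toy curve-fitting problem of my own choosing (the model, data, and stopping goal here are illustrative assumptions, not the cited paper's setup):

```python
import numpy as np

# Synthetic data for a toy nonlinear model f(x; w) = w0 * exp(w1 * x).
x = np.linspace(0.0, 1.0, 20)
true_w = np.array([2.0, -1.5])
y = true_w[0] * np.exp(true_w[1] * x)

def residuals(w):
    return y - w[0] * np.exp(w[1] * x)

def jacobian(w):
    # Jacobian of the residuals with respect to the parameters.
    e = np.exp(w[1] * x)
    return np.column_stack([-e, -w[0] * x * e])

w = np.array([1.0, 0.0])   # initial guess
mu = 1e-2                  # Levenberg-Marquardt damping factor
for _ in range(100):
    r, J = residuals(w), jacobian(w)
    # LM step: solve (J^T J + mu*I) dw = -J^T r
    dw = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    if np.sum(residuals(w + dw) ** 2) < np.sum(r ** 2):
        w, mu = w + dw, mu * 0.5   # accept step, reduce damping
    else:
        mu *= 2.0                  # reject step, increase damping
    if np.sum(residuals(w) ** 2) < 1e-12:  # error-goal stopping rule
        break

print(w)  # approaches [2.0, -1.5]
```

Small damping makes the step Gauss-Newton-like (fast near a minimum); large damping makes it a short gradient step (robust far away), which is why LM is well suited to moderately sized networks where forming J^T J is affordable.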
“…The result is repeated for 10 training sessions, each with 10^3 epochs done on the training set. Figure (6) presents the prediction error on an independent test set of 10^3 samples, evaluated on the best-performing session after training. With a filter function of threshold value 0.5, the network is able to predict over 99% of the samples correctly.…”
Section: Experimental Setup: 6-DOF Serial-link Manipulator With Spher...
confidence: 99%
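The "filter function of threshold value 0.5" in this excerpt is a hard threshold on the network's continuous output, turning it into a binary in/out-of-workspace label. A minimal sketch, with made-up outputs and labels (the function name and data are illustrative, not from the cited work):

```python
import numpy as np

def filter_function(outputs, threshold=0.5):
    """Map continuous network outputs to binary labels:
    1 if the output reaches the threshold, else 0."""
    return (np.asarray(outputs) >= threshold).astype(int)

# Hypothetical raw network outputs and ground-truth labels.
raw = np.array([0.91, 0.12, 0.55, 0.49, 0.73])
labels = np.array([1, 0, 1, 0, 1])

pred = filter_function(raw)
accuracy = np.mean(pred == labels)
print(pred, accuracy)  # [1 0 1 0 1] 1.0
```

Accuracy computed this way over an independent test set is exactly the "over 99% of the samples correct" figure the citing paper reports for its best training session.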