2015
DOI: 10.1515/itms-2015-0007
Optimal Training Parameters and Hidden Layer Neuron Number of Two-Layer Perceptron for Generalised Scaled Object Classification Problem

Abstract: The research focuses on optimising a two-layer perceptron for the generalised scaled-object classification problem. The optimisation criterion is minimisation of inaccuracy, which depends on the training parameters and the hidden layer neuron number. After statistics on the inaccuracy are accumulated, the minimisation is carried out by a numerical search. The perceptron is optimised further by extra training. As a result, the classification error percentage does not exceed 3 % under the worst scale distortion.
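The optimisation scheme the abstract describes — accumulating error statistics over candidate hidden-layer sizes and picking the minimiser by numerical search — can be sketched as follows. This is a minimal illustrative stand-in, not the authors' code: the two-blob dataset, the training loop, and the candidate neuron counts are all assumptions made for the example.

```python
# Sketch: search over the hidden-layer neuron number of a two-layer
# perceptron, keeping the count with the lowest held-out error percentage.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400):
    """Two noisy Gaussian blobs as a stand-in two-class problem."""
    X0 = rng.normal(-1.0, 0.7, size=(n // 2, 2))
    X1 = rng.normal(+1.0, 0.7, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

def train_2lp(X, y, hidden, epochs=200, lr=0.1):
    """Train a two-layer perceptron (one hidden layer) by gradient descent."""
    W1 = rng.normal(0, 0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    t = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # output probability
        d2 = (p - t) / len(X)                      # cross-entropy gradient
        d1 = (d2 @ W2.T) * (1.0 - H ** 2)          # backprop through tanh
        W2 -= lr * H.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
    return W1, b1, W2, b2

def error_pct(params, X, y):
    """Classification error percentage of a trained 2LP on (X, y)."""
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
    return 100.0 * np.mean((p.ravel() > 0.5) != y)

X_tr, y_tr = make_data()
X_te, y_te = make_data()

# Numerical search over candidate hidden-layer sizes: accumulate the error
# statistic for each, then take the minimiser.
results = {h: error_pct(train_2lp(X_tr, y_tr, h), X_te, y_te)
           for h in (2, 4, 8, 16)}
best_h = min(results, key=results.get)
print(best_h, results[best_h])
```

The paper additionally re-trains the selected perceptron ("extra training") before final evaluation; in this sketch that would amount to calling `train_2lp` again on the winning `best_h` with more epochs.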

Cited by 9 publications (18 citation statements).
References 38 publications (53 reference statements).
“…For datasets whose entries are tiny and simple images on a monotonous background, the best MPL allocation is "11...100...0" (e.g., see [15]). Versions of MPL allocations with fewer "ones" should be nonetheless tried in the first turn.…”
Section: Discussion
confidence: 99%
“…Thus, this is 3 for CIFAR-10 and 1 for EEACL26. This is 1 for NORB as well due to splitting the stereo-images and averaging (it is not training set expansion [15]). The filter's depth of a successive ConvL is equal to the number of filters of the antecedent ConvL.…”
mentioning
confidence: 99%
“…However, yielding a PSTSSD r of 0.08 on average (see Fig. 2). Before verification, the best one of the 100 2LPs (30) must be trained further until its performance becomes unimprovable [5], [8], [28]. The best 2LP (30) performs at an average CEP of 9.31 %.…”
Section: Models of STSM6080I and STSM6080I NDPD
confidence: 99%
“…Such a CEP is tolerable. Figure 6 visualizes STSOs by maximal SDs in (31), which form STSM6080Is of EEACLs successfully classified (the left-side images) with the best 2LP (30) further-trained [4], [28] with the 438 additional passes (438AP further-trained 2LP). Successful classification under lesser SDs is shown below.…”
Section: Models of STSM6080I and STSM6080I NDPD
confidence: 99%