2003
DOI: 10.1109/tsmcb.2003.808183

Visualization of learning in multilayer perceptron networks using principal component analysis

Abstract: This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning…
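As a minimal sketch of the idea the abstract describes (not the paper's own implementation), the snippet below trains a toy MLP with plain gradient descent, records the flattened weight vector after every epoch, and uses scikit-learn's PCA to project the snapshots onto their first two principal components, giving a 2-D learning trajectory. The network size, data, and learning rate are arbitrary illustrative choices.

    # Sketch only: project per-epoch weight snapshots of a toy MLP onto
    # their first two principal components to trace a learning trajectory.
    # Assumes NumPy and scikit-learn; all hyperparameters are illustrative.
    import numpy as np
    from sklearn.decomposition import PCA

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights

    snapshots = []  # one flattened weight vector per training epoch
    for epoch in range(500):
        h = sigmoid(X @ W1)                   # hidden activations
        out = sigmoid(h @ W2)                 # network output
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h
        snapshots.append(np.concatenate([W1.ravel(), W2.ravel()]))

    # Each row of the result is one 2-D point on the learning trajectory.
    trajectory = PCA(n_components=2).fit_transform(np.array(snapshots))
    print(trajectory[:5])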

Cited by 38 publications (19 citation statements, all classified as mentioning). References 15 publications. Citing publications range from 2004 to 2016.
“…Visualization is an important part of understanding the inner workings of many systems, but particularly those of learning systems [2,6,11,16,17,30]. This paper focuses on visualizing reward properties to aid in both agent reward evaluation and design.…”
Section: Reward Visualization (mentioning; confidence: 99%)
“…The Multilayer Perceptron (MLP) is the most widely applied and researched artificial neural network (ANN) model. MLP networks implement mappings from input space to output space and are normally applied to supervised learning tasks [24]. The Sigmoidal function was selected as the MLP activation function, with a range of values in the interval [0, 1].…”
Section: B. Second Phase of the Mechanism of Classification (mentioning; confidence: 99%)
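To make the quoted remark about the activation's range concrete, here is a tiny self-contained check (toy values, not from the citing paper) that the logistic sigmoid maps any real pre-activation into the open interval (0, 1):

    # The logistic sigmoid squashes arbitrary real inputs into (0, 1),
    # which is why an MLP output built from it stays in that interval.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    print(sigmoid(z))  # approx. [4.5e-05, 0.269, 0.5, 0.731, 0.99995]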
“…Using the same approach to generate the second direction results in several vertices that lie very close after the projection. Optimization of the average spacing leads to W^(2) = (-0.1, 0.2, -0.5, -0.33, 0.75), with all vertices clearly separated (Fig. 1).…”
Section: How To Visualize? (mentioning; confidence: 99%)
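The quoted direction has five components, which suggests (an assumption on our part, since the citing paper's setup is only partially quoted) that it projects the 2^5 = 32 vertices of a 5-D hypercube onto a line. A small sketch that applies the quoted vector and reports how well the projected vertices are spread out:

    # Sketch: project the 32 vertices of a 5-D hypercube onto the direction
    # quoted above and measure the spacing between projected values.
    # The hypercube reading is an assumption based on the vector's length.
    import itertools
    import numpy as np

    vertices = np.array(list(itertools.product([0, 1], repeat=5)), dtype=float)
    w2 = np.array([-0.1, 0.2, -0.5, -0.33, 0.75])  # W^(2) from the quote

    proj = vertices @ w2            # one scalar coordinate per vertex
    gaps = np.diff(np.sort(proj))   # spacing between neighbouring projections
    print(f"min gap {gaps.min():.3f}, mean gap {gaps.mean():.3f}")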
“…They are not helpful to see what type of internal representations have developed for a given data set, or how well the network performs on some data. A few attempts to visualize the training process are restricted to network trajectories in the weight space ([2], Kordos and Duch, in print). The usefulness of visualization of network outputs has been shown recently [3].…”
Section: Introduction (mentioning; confidence: 99%)