2016
DOI: 10.1007/978-3-319-33747-0_16

Łukasiewicz Equivalent Neural Networks

Abstract: In this paper we propose a particular class of multilayer perceptrons, describing possibly non-linear phenomena, that is linked with Łukasiewicz logic; we show how a neural network can be named by a formula and, vice versa, how a class of neural networks can be associated with each formula. Moreover, we introduce the definition of Łukasiewicz Equivalent Neural Networks to stress the strong connection between different neural networks via Łukasiewicz logical objects.
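For a concrete picture of the correspondence the abstract refers to, the sketch below (an illustration, not the authors' construction) shows how the basic Łukasiewicz connectives can be computed by single neurons whose activation is the clipped identity ρ(t) = min(1, max(0, t)); composing such neurons gives small multilayer perceptrons that evaluate Łukasiewicz formulas. Function names and test values are ours.

```python
import numpy as np

def rho(t):
    # Clipped identity: the activation under which a neuron performs
    # truncated (Lukasiewicz) arithmetic on [0, 1].
    return np.minimum(1.0, np.maximum(0.0, t))

def neg(x):
    # Lukasiewicz negation  ~x = 1 - x : one neuron, weight -1, bias 1.
    return rho(-1.0 * x + 1.0)

def oplus(x, y):
    # Strong disjunction  x (+) y = min(1, x + y) : one neuron, weights (1, 1).
    return rho(x + y)

def otimes(x, y):
    # Strong conjunction  x (.) y = max(0, x + y - 1) : one neuron, bias -1.
    return rho(x + y - 1.0)

def implies(x, y):
    # Implication  x -> y = min(1, 1 - x + y) as a two-layer composition:
    # negate x, then strong-disjoin with y.
    return oplus(neg(x), y)

if __name__ == "__main__":
    grid = np.linspace(0.0, 1.0, 5)
    for x in grid:
        for y in grid:
            assert np.isclose(implies(x, y), min(1.0, 1.0 - x + y))
    print("clipped-linear neurons reproduce the Lukasiewicz connectives")
```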

Cited by 8 publications (1 citation statement) · References 4 publications
“…The reduced feature set is fed to the MLP (Multi-Layer Perceptron) classifier and the result analyzed. The MLP is an algorithm that learns in a supervised manner [28]. It learns a function f(·): ℝ^m → ℝ^o, with m ∈ {50, 100, 316, 733, 1523} and o = 14. The model consists of a 3-layer architecture: an input layer with a number of neurons equal to the number of features and a hidden layer with 100 neurons. The optimizer is Stochastic Gradient Descent. Values from the input layer are transformed by each neuron in the hidden layer with a weighted linear summation followed by the ReLU activation function [30], defined as max(0, x).…”
Section: A Neural Network (mentioning, confidence: 99%)
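The configuration described in the citing passage (one hidden layer of 100 neurons, ReLU activation max(0, x), stochastic gradient descent) matches what scikit-learn's MLPClassifier exposes; the sketch below is a minimal illustration under that assumption, with synthetic data standing in for the reduced feature set and the 14-class output being our reading of the garbled passage, not a confirmed detail.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the reduced feature set: 500 samples, 50 features,
# 14 classes (the actual feature matrices come from the citing paper's
# feature-reduction step and are not reproduced here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = rng.integers(0, 14, size=500)

# One hidden layer of 100 neurons, ReLU activation, SGD optimizer,
# as described in the citation statement.
clf = MLPClassifier(hidden_layer_sizes=(100,),
                    activation="relu",
                    solver="sgd",
                    max_iter=300,
                    random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```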