2021
DOI: 10.3390/app11083563

Deep Neural Networks Classification via Binary Error-Detecting Output Codes

Abstract: One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and its interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high-dimensional vector representations when the categorical data's cardinality is high. From the coding-theory perspective, the Hamming distance of the one-hot code is equal to two, which does not allow detection or error correction […]
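The abstract's coding-theory observation is easy to verify directly: any two distinct one-hot codewords differ in exactly two positions. The sketch below (illustrative only, not the paper's implementation) computes the pairwise Hamming distances for a small class set:

```python
# Minimal sketch: one-hot codewords for 4 classes and their pairwise
# Hamming distances. Helper names are illustrative, not from the paper.

def one_hot(k, n):
    """Return the length-n one-hot codeword for class index k."""
    return [1 if i == k else 0 for i in range(n)]

def hamming(a, b):
    """Count the positions in which codewords a and b differ."""
    return sum(x != y for x, y in zip(a, b))

codes = [one_hot(k, 4) for k in range(4)]
# Every pair of distinct one-hot codewords differs in exactly two
# positions, so the code's minimum Hamming distance is 2 -- too small
# to detect or correct bit errors, which motivates the paper's use of
# error-detecting output codes instead.
dists = {hamming(codes[i], codes[j]) for i in range(4) for j in range(i + 1, 4)}
print(dists)  # → {2}
```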

Cited by 12 publications (8 citation statements) | References 32 publications (48 reference statements)
“…The following theorem discusses this property. It is an extension of Theorem 1 in [17] to the ternary codespace. Theorem 1.…”
Section: Classification Fuzzy Explanator
confidence: 80%
“…• Use them directly, as used by the non-linear classifier: y_i = f_i. • Use a feature-relevance measure: y_i = ρ(f_i) = R_i. In both cases, the feature or its relevance needs normalising to the unit interval so that its value can be interpreted as the truth value of a fuzzy-logic statement [17]. For the values, we normalise y_i ∈ ℝ, i ∈ {1, …, M}, using min-max linear normalisation:…”
Section: B. Features' Relevance Measure
confidence: 99%
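The min-max linear normalisation this citing work describes can be sketched as follows. This is an illustrative reading of the quoted excerpt, not the authors' code; the function name is hypothetical:

```python
# Sketch of min-max linear normalisation: map raw feature relevances
# y_i onto the unit interval [0, 1] so each value can be read as a
# fuzzy truth value, as the excerpt above describes.

def min_max_normalise(values):
    """Linearly rescale values so min -> 0.0 and max -> 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # degenerate case: constant input
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

relevances = [-2.0, 0.0, 1.0, 6.0]
print(min_max_normalise(relevances))  # → [0.0, 0.25, 0.375, 1.0]
```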
“…For the temporal and calendric feature categories, we use two different approaches. In the first one, we generate one-hot encoded values [24] of the current hour (h_i, i ∈ {0, 1, ..., 23}), weekday (d_i, i ∈ {1, 2, ..., 7}) and month (m_i, i ∈ {1, 2, ..., 12}). In addition to that, we generate two cyclic features by applying trigonometric functions to the hour of the day and the month of the year.…”
Section: Methods
confidence: 99%
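The two temporal encodings mentioned in that excerpt can be sketched as below. This is an assumed reading of the description (one-hot hour plus sin/cos cyclic features), not the cited study's actual code:

```python
import math

# Illustrative sketch of the two hour-of-day encodings described above:
# a 24-dimensional one-hot vector, and a pair of cyclic sin/cos features
# that keep hour 23 numerically adjacent to hour 0.

def one_hot_hour(hour):
    """One-hot encode an hour in {0, ..., 23} as a 24-element vector."""
    return [1 if i == hour else 0 for i in range(24)]

def cyclic_hour(hour):
    """Encode the hour as a point on the unit circle: (sin, cos)."""
    angle = 2 * math.pi * hour / 24
    return (math.sin(angle), math.cos(angle))

print(one_hot_hour(3)[:5])  # → [0, 0, 0, 1, 0]
print(cyclic_hour(0))       # → (0.0, 1.0)
```

The same scheme applies to weekday and month by swapping the period (7 or 12) into the denominator; the cyclic pair avoids the artificial discontinuity a plain integer encoding would place between December and January.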