2017
DOI: 10.48550/arxiv.1705.07706
Preprint

An Out-of-the-box Full-network Embedding for Convolutional Neural Networks

Cited by 3 publications (5 citation statements) | References 15 publications
“…Using a single-layer embedding for that purpose would result in a rather poor topology, as fully-connected layers represent but a portion of all patterns learnt by the DNN, and only a small subset of neurons activate for each data instance. To guarantee that the graph representation contains a topology rich enough to empower network analysis algorithms, we use the full-network embedding [7], which produces a vector embedding including all convolutional and fully-connected layers of a CNN. This results in a much larger embedding space (composed of tens of thousands of dimensions), allowing us to generate larger and richer graphs.…”
Section: Graph Representation of Vector Embeddings
confidence: 99%
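
As a rough illustration of how such a full-network embedding can be assembled, the following PyTorch sketch hooks every convolutional and fully-connected layer of a CNN, spatially average-pools the convolutional activations, and concatenates everything into one vector per image. The model choice (VGG16) and pooling details are assumptions for illustration, not the reference implementation of [7]; the FNE additionally standardizes each feature across the dataset before discretization.

import torch
import torch.nn as nn
from torchvision import models

def full_network_embedding(model, images):
    # Collect one pooled activation vector per conv/linear layer.
    feats, hooks = [], []

    def grab(module, inputs, output):
        if output.dim() == 4:                 # conv maps: (N, C, H, W)
            output = output.mean(dim=(2, 3))  # spatial average pooling
        # clone so a following in-place ReLU cannot mutate the stored view
        feats.append(output.flatten(1).clone())

    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            hooks.append(m.register_forward_hook(grab))

    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return torch.cat(feats, dim=1)  # one long vector per image

# For VGG16 this yields ~13,400 dimensions (4,224 conv channels plus
# 4,096 + 4,096 + 1,000 linear units); larger backbones push the
# embedding into the tens of thousands of dimensions quoted above.
vgg = models.vgg16(weights=None).eval()
emb = full_network_embedding(vgg, torch.randn(2, 3, 224, 224))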
“…This process reduces noise and regularizes the embedding space. In the FNE, this discretization is done with a pair of constant thresholds (−0.25, 0.15), which determine whether a feature is relevant by presence (1 implies an abnormally high activation) or by absence (−1 implies an abnormally low activation) for a given input data instance [7]. In our experiments we set more demanding thresholds (−2.0, 2.0) to make sure that the degree of sparsity of the graph is appropriate for network analysis methods.…”
Section: Full-network Embedding
confidence: 99%
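
A minimal NumPy sketch of this three-valued discretization, assuming the embedding has already been standardized to per-feature z-scores as in [7] (the function name and defaults are illustrative):

import numpy as np

def discretize_fne(z, low=-0.25, high=0.15):
    # z: (n_instances, n_features) standardized activations.
    #   > high -> 1  (relevant by presence)
    #   < low  -> -1 (relevant by absence)
    #   else   -> 0  (not relevant)
    out = np.zeros(z.shape, dtype=np.int8)
    out[z > high] = 1
    out[z < low] = -1
    return out

# The citing work above uses the stricter pair (-2.0, 2.0) for sparser graphs:
# sparse = discretize_fne(z, low=-2.0, high=2.0)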
“…The optimal values of these thresholds can be found empirically for a labeled dataset [21]. Instead, we use threshold values shown to perform consistently across several domains [9].…”
Section: Full-network Embedding
confidence: 99%
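
One way to carry out such an empirical search, sketched here as an assumption rather than the actual procedure of [21]: grid-search the threshold pair by cross-validating a linear classifier on the discretized embedding, reusing discretize_fne from the sketch above.

from itertools import product
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def select_thresholds(z, labels,
                      lows=(-2.0, -1.0, -0.25), highs=(0.15, 1.0, 2.0)):
    # Score each (low, high) pair by downstream classification accuracy.
    best = None
    for low, high in product(lows, highs):
        disc = discretize_fne(z, low, high)
        score = cross_val_score(LinearSVC(), disc, labels, cv=3).mean()
        if best is None or score > best[0]:
            best = (score, low, high)
    return best  # (accuracy, low, high)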
“…In fact, most approaches simply use a one-layer CNN embedding [7,8]. In this paper we explore the impact of using a Full-Network embedding (FNE) [9] to generate the required image embedding, replacing the one-layer embedding. We do so by integrating the FNE into the multimodal embedding pipeline defined by Kiros et al. [1], which is based on the use of a Gated Recurrent Unit (GRU) neural network [10] for text encoding and a CNN for image encoding.…”
Section: Introduction
confidence: 99%
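
A condensed sketch of such a pipeline, with the FNE vector swapped in for the one-layer image embedding. All dimensions and names here are illustrative assumptions, and the training objective of [1] (a pairwise ranking loss over image-caption pairs) is only indicated in a comment.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalEmbedding(nn.Module):
    def __init__(self, fne_dim=13416, vocab_size=10000, emb_dim=1024):
        super().__init__()
        self.img_proj = nn.Linear(fne_dim, emb_dim)        # FNE -> joint space
        self.word_emb = nn.Embedding(vocab_size, 300)
        self.gru = nn.GRU(300, emb_dim, batch_first=True)  # text encoder

    def forward(self, fne_vec, token_ids):
        img = F.normalize(self.img_proj(fne_vec), dim=1)
        _, h = self.gru(self.word_emb(token_ids))
        txt = F.normalize(h[-1], dim=1)
        # Both modalities now live in one space; in [1] a pairwise
        # ranking loss pulls matching image-caption pairs together.
        return img, txt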