2019 IEEE International Symposium on Multimedia (ISM)
DOI: 10.1109/ism46123.2019.00060
FWNetAE: Spatial Representation Learning for Full Waveform Data Using Deep Learning

Cited by 3 publications (12 citation statements)
References 26 publications
“…To evaluate the quality of the spatial features extracted by our FWNet, we compared the 1D CNN (shown in Section 3.3), FWNetAE [27], and our FWNet. To compare the power of the feature extraction, we visualized the feature vectors of the test data we extracted from the trained 1D CNN, FWNetAE, and FWNet.…”
Section: Results (confidence: 99%)
“…The feature vectors used in the visualization were the bottleneck layer of the trained encoder. The features were extracted by FWNetAE [27] and trained with the same dataset, but the label information was omitted. FWNetAE shows the tendency to separate each class in a latent vector without supervised learning (Figure 10b).…”
Section: Results (confidence: 99%)