2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
DOI: 10.1109/dicta.2016.7797053
Impact of Automatic Feature Extraction in Deep Learning Architecture

Cited by 93 publications (49 citation statements). References 7 publications.
“…Lately, thanks to the advent of deep learning, feature extraction is done automatically by the chosen architecture. For example, a CNN takes a 2-D matrix as input and automatically extracts hidden features using spatial filters [39].…”
Section: Results (mentioning)
confidence: 99%
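The excerpt above states the core idea in one sentence: the convolutional layers themselves act as the feature extractor. As an illustration only (not code from the cited paper; the layer sizes, input resolution, and output dimension are arbitrary assumptions), a minimal PyTorch sketch of a CNN that maps a raw 2-D matrix to a learned feature vector through stacked spatial filters:

```python
# Minimal sketch (not from the cited work): a small CNN whose convolutional
# layers are learned spatial filters, turning a 2-D input into a feature
# vector without any hand-crafted feature extraction step.
import torch
import torch.nn as nn

class SimpleFeatureExtractor(nn.Module):
    def __init__(self, num_features: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learned spatial filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper, more abstract filters
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # collapse the spatial dimensions
        )
        self.project = nn.Linear(32, num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) -- the raw 2-D matrix is the only input
        h = self.features(x).flatten(1)
        return self.project(h)

# Usage: a batch of 28x28 single-channel matrices -> 64-dim learned features.
feats = SimpleFeatureExtractor()(torch.randn(8, 1, 28, 28))
print(feats.shape)  # torch.Size([8, 64])
```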
“…The randomness of the loadings hinders the use of statistical time-domain and frequency-domain features for distinguishing the healthy case from the delaminated cases, as well as for distinguishing between different delamination cases (e.g., AL1, AL2, AL3, AM1, etc.). Deep learning offers a natural solution to this problem due to its capability to automatically extract discriminative features from the input images [53]. Figure 3 shows a schematic of the deep learning-based methodology for the current problem.…”
Section: Proposed Methodology (mentioning)
confidence: 99%
“…The auxiliary module acts as a feature extractor that extracts CWS and POS representations from raw text inputs, which are expected to provide more information for the main module. Inspired by [16,17], we propose two auxiliary module structures: the dilated CNN (DCNN) and the transformer encoder (TE). The DCNN model consists of 3 dilated CNN layers with kernel size 5, 128 filters, and dilation rates of 1, 2, and 4, respectively.…”
Section: Auxiliary Module (mentioning)
confidence: 99%
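The hyperparameters quoted in this excerpt (3 dilated convolution layers, kernel size 5, 128 filters, dilation rates 1, 2, 4) are concrete enough to sketch. The following is a hedged PyTorch sketch, not the authors' implementation: the embedding dimension, the ReLU activations, and the 'same'-style padding are assumptions not stated in the excerpt.

```python
# Sketch of a dilated-CNN (DCNN) feature extractor with the hyperparameters
# quoted above: 3 dilated 1-D conv layers, kernel size 5, 128 filters,
# dilation rates 1, 2, 4. Embedding size and padding scheme are assumptions.
import torch
import torch.nn as nn

class DilatedCNNAuxiliary(nn.Module):
    def __init__(self, embed_dim: int = 128, channels: int = 128, kernel_size: int = 5):
        super().__init__()
        layers = []
        in_ch = embed_dim
        for dilation in (1, 2, 4):
            # 'same'-style padding so the sequence length is preserved
            padding = dilation * (kernel_size - 1) // 2
            layers += [
                nn.Conv1d(in_ch, channels, kernel_size,
                          padding=padding, dilation=dilation),
                nn.ReLU(),
            ]
            in_ch = channels
        self.dcnn = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) token embeddings of the raw text
        h = self.dcnn(x.transpose(1, 2))   # Conv1d expects (batch, channels, seq_len)
        return h.transpose(1, 2)           # (batch, seq_len, 128) auxiliary representations

# Usage: a batch of 16 sentences, 40 tokens each, 128-dim embeddings.
out = DilatedCNNAuxiliary()(torch.randn(16, 40, 128))
print(out.shape)  # torch.Size([16, 40, 128])
```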