Squeeze-and-Excitation Networks

Year: 2020
DOI: 10.1109/tpami.2019.2913372
Abstract: The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on …
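The abstract describes the squeeze-and-excitation (SE) block: a global-pooling "squeeze" that summarizes each channel, followed by a lightweight gating "excitation" that rescales the channels. Below is a minimal PyTorch sketch of that mechanism; the class and variable names are ours, and the reduction ratio of 16 is the paper's reported default.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze: global average pooling; excitation: a two-layer gating MLP
    whose sigmoid outputs rescale each channel. Names are illustrative."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: H x W down to 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)          # (B, C) channel descriptor
        w = self.fc(s).view(b, c, 1, 1)      # excitation weights
        return x * w                         # recalibrate the feature maps

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)                  # torch.Size([2, 64, 32, 32])
```

The gating is multiplicative, so the block can be dropped between existing layers without changing any feature-map shapes.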

Cited by 9,073 publications (9,452 citation statements)
References 56 publications
“…ResNet, published in 2015, consisted of modules with a shortcut process. Squeeze-and-Excitation Networks, published in 2017, introduced Squeeze-and-Excitation Blocks, building blocks for convolutional neural networks that improve channel interdependencies. The AI used for image recognition is still being developed.…”
Section: Discussion
confidence: 99%
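The statement above pairs ResNet's shortcut modules with SE blocks. A sketch of how the two are commonly combined follows: the SE gate recalibrates the residual branch before it is added back to the identity path. This mirrors the SE-ResNet pattern but is an illustration under assumed channel counts, not the cited paper's exact module.

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    # Residual block with an SE gate on the residual branch (illustrative).
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.se = nn.Sequential(                       # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),                   # squeeze to (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)
        out = out * self.se(out)   # recalibrate channels of the residual branch
        return self.relu(out + x)  # shortcut: add the identity path back

x = torch.randn(2, 64, 32, 32)
print(SEResidualBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```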
“…In this experiment, 27 × 75 × 93 × 81 data were generated via the aforementioned preprocessing and data augmentation steps. In the first layer, we used 1 × 1 × 1 convolutional filters, which have been widely used in recent structural designs of convolutional neural networks (CNNs) because these filters increase nonlinearity without changing the receptive fields of the convolutional layer (Hu, Shen, & Sun, 2017; Iandola et al., 2016; Simonyan & Zisserman, 2014). These filters can generate temporal descriptors for each voxel of the volume of the fMRI, and their weights can be easily learnt by DNNs during training.…”
Section: Preparation of fMRI Time Series for Deep Learning
confidence: 99%
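The statement's point is that 1 × 1 × 1 convolutions add nonlinearity without enlarging the receptive field. A small illustration of that property, assuming the 27 × 75 × 93 × 81 data are laid out as 27 time points (channels) over a 75 × 93 × 81 volume; the output channel count and names are hypothetical.

```python
import torch
import torch.nn as nn

# Pointwise (1 x 1 x 1) 3D convolution: each output voxel depends only on the
# input channels at that same voxel, so the spatial receptive field is
# unchanged while the ReLU adds nonlinearity.
pointwise = nn.Sequential(
    nn.Conv3d(in_channels=27, out_channels=16, kernel_size=1),
    nn.ReLU(inplace=True),
)

# Assumed layout: 27 time points as channels over a 75 x 93 x 81 volume,
# matching the 27 x 75 x 93 x 81 data described in the statement above.
x = torch.randn(1, 27, 75, 93, 81)
y = pointwise(x)
print(y.shape)  # torch.Size([1, 16, 75, 93, 81]); spatial size is unchanged
```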
“…First, we extend the standard U-Net model for 3D HaN image segmentation by incorporating a new feature extraction component, based on squeeze-and-excitation (SE) residual blocks [30]. Second, we propose a new loss function for better segmenting small-volumed structures. Small-volume segmentation suffers from the imbalanced data problem, where the number of voxels inside the small region is much smaller than those outside, leading to the difficulty of training.…”
Section: Introduction
confidence: 99%
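The cited paper's new loss function is not given in the statement, so the sketch below instead shows a standard soft Dice loss, a common baseline for the voxel-imbalance problem described: Dice normalizes overlap by region size, so a tiny structure weighs as much in the loss as a large one. All names and tensor shapes are illustrative.

```python
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss over a binary 3D mask.

    pred:   foreground probabilities in [0, 1], shape (B, D, H, W)
    target: binary ground truth,                shape (B, D, H, W)
    """
    dims = (1, 2, 3)                          # sum over the whole volume
    inter = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)  # per-sample Dice score
    return 1 - dice.mean()

# Hypothetical shapes; the foreground occupies roughly 1% of the voxels.
pred = torch.rand(2, 16, 32, 32)
target = (torch.rand(2, 16, 32, 32) > 0.99).float()
print(soft_dice_loss(pred, target))
```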