Published: 2021
DOI: 10.1109/access.2020.3046108
Real-Time Segmentation Method of Lightweight Network For Finger Vein Using Embedded Terminal Technique

Cited by 8 publications (2 citation statements)
References 29 publications
“…Note that the decoder receives its input in several different ways, such as from the encoder, from skip connections, or after transmission through a bridge network (bottleneck). Based on these fundamental changes, the decoder variations can be categorized into 16 different types, listed as follows: (D1) convolution with dropout [70,76,86,95,101,102,134,138]; (D2) UNet++ type of change [130,144,154]; (D3) UNet+++ (UNet 3+) full-scale deep supervision [157]; (D4) outputs from decoders used to form a loss function [104,140]; (D5) fusion of the decoder outputs for scale adjustment [59,107]; (D6) recurrent residual [118,129,138]; (D7) residual block [75,84,88,105,138,150]; (D8) channel attention and scale attention block [65,113]; (D9) transpose convolution [66,88,94,95,139]; (D10) squeeze-and-excitation (SE) network [103,125]; (D11) cascade convolution [99]; (D12) addition of the original image to each layer [100]; (D13) batch normalization [95,106,155]; (D14) inception block [97]; (D15) dense layer [87,91,…”
Section: B. Decoder Variations
mentioning
confidence: 99%
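Several of the decoder variation categories quoted above are typically combined in a single decoder stage. Below is a minimal sketch, not taken from the cited paper, of a U-Net-style decoder block illustrating four of the listed categories: transpose convolution for upsampling (D9), batch normalization (D13), convolution with dropout (D1), and a squeeze-and-excitation block (D10). All class names, channel widths, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel recalibration (D10): global pool -> bottleneck MLP -> per-channel scaling."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: (B, C, H, W) -> (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # restore channel count
            nn.Sigmoid(),                                   # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                               # excitation: rescale each channel


class DecoderBlock(nn.Module):
    """One decoder stage: upsample, fuse the encoder skip connection, refine."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int, p_drop: float = 0.1):
        super().__init__()
        # (D9) transpose convolution doubles the spatial resolution
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        # (D13) batch normalization and (D1) dropout inside the refinement convolutions
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.se = SqueezeExcitation(out_ch)                 # (D10)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                                      # upsample the deeper feature map
        x = torch.cat([x, skip], dim=1)                     # fuse the encoder skip connection
        return self.se(self.refine(x))


# Usage: fuse a 28x28 decoder feature with a 56x56 encoder skip map.
block = DecoderBlock(in_ch=256, skip_ch=128, out_ch=128)
out = block(torch.randn(1, 256, 28, 28), torch.randn(1, 128, 56, 56))
print(out.shape)  # torch.Size([1, 128, 56, 56])
```

Stacking such blocks, with skip channel counts mirroring the encoder, yields the standard expanding path; the individual variations in the list above swap or augment pieces of exactly this kind of block.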
“…www.irjmets.com @International Research Journal of Modernization in Engineering, Technology and Science [21]…”
mentioning
confidence: 99%