2022
DOI: 10.1109/jbhi.2021.3138024

MSRF-Net: A Multi-Scale Residual Fusion Network for Biomedical Image Segmentation

Abstract: Methods based on convolutional neural networks have improved the performance of biomedical image segmentation. However, most of these methods cannot efficiently segment objects of variable sizes and train on small and biased datasets, which are common for biomedical use cases. While methods exist that incorporate multi-scale fusion approaches to address the challenges arising with variable sizes, they usually use complex models that are more suitable for general semantic segmentation problems. In this paper, w…

Cited by 132 publications (50 citation statements); references 56 publications.
“…Later, Zhang et al 8 proposed a hybrid method combining a transformer-based network and a CNN to capture global dependencies and low-level spatial features for the segmentation task. Inspired by the high-resolution network 51, Srivastava et al 10 proposed the multi-scale residual fusion network (MSRF-Net), which allows information exchange across multiple scales and showed improved generalisability on unseen datasets. All of these encoder-decoder architectures were evaluated only on still images.…”
Section: /26 (mentioning); confidence: 99%
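The cross-scale information exchange this excerpt attributes to MSRF-Net can be illustrated with a minimal sketch. This is not the authors' code: the module below, with hypothetical names such as TwoScaleExchange, only shows the general idea of two feature streams at different resolutions trading information through 1x1 projections, resampling, and residual addition.

```python
# Hypothetical sketch of a two-scale exchange block (not the MSRF-Net code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleExchange(nn.Module):
    """Exchange information between a high-res and a low-res feature stream."""
    def __init__(self, ch_high: int, ch_low: int):
        super().__init__()
        # 1x1 convs project each stream to the other stream's channel width.
        self.high_to_low = nn.Conv2d(ch_high, ch_low, kernel_size=1)
        self.low_to_high = nn.Conv2d(ch_low, ch_high, kernel_size=1)

    def forward(self, x_high: torch.Tensor, x_low: torch.Tensor):
        # Downsample the high-res stream and add it residually to the low-res one.
        low_out = x_low + self.high_to_low(F.avg_pool2d(x_high, kernel_size=2))
        # Upsample the low-res stream and add it residually to the high-res one.
        high_out = x_high + F.interpolate(
            self.low_to_high(x_low), scale_factor=2,
            mode="bilinear", align_corners=False)
        return high_out, low_out

# Usage: feature maps at two resolutions exchange information.
x4 = torch.randn(1, 32, 64, 64)   # high-resolution stream
x8 = torch.randn(1, 64, 32, 32)   # low-resolution stream
block = TwoScaleExchange(ch_high=32, ch_low=64)
y4, y8 = block(x4, x8)
print(y4.shape, y8.shape)  # (1, 32, 64, 64) and (1, 64, 32, 32)
```

Stacking such blocks lets every scale repeatedly see every other scale, which is the property the quote credits for improved generalisability.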
“…Most deep learning-based detection [3]-[5] and segmentation [6]-[9] methods are trained and tested on the same-center dataset and WLE modality only. These supervised deep learning techniques have a major issue in not being able to generalise to unseen data from a different center population 10 or even a different modality from the same center 11. The type of endoscope used also adds to the compromise in robustness.…”
Section: Introduction (mentioning); confidence: 99%
“…To improve the segmentation of polyps in colonoscopy images, a range of deep learning (DL)-based solutions [8,13,14,17,19,22,28,30,32,37] have been proposed. Such solutions are designed to automatically predict segmentation maps for colonoscopy images, in order to assist clinicians performing colonoscopy procedures.…”
Section: Introduction (mentioning); confidence: 99%
“…Such solutions are designed to automatically predict segmentation maps for colonoscopy images, in order to assist clinicians performing colonoscopy procedures. These solutions have traditionally used fully convolutional networks (FCNs) [1,9,10,13-15,17,25,28,39]. However, transformer-based architectures [24,32-34,36] have recently become popular for semantic segmentation and have shown superior performance over FCN-based alternatives.…”
Section: Introduction (mentioning); confidence: 99%
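The FCN idea this excerpt refers to can be sketched in a few lines: a convolutional backbone ends in a 1x1 convolution that produces a dense class-score map, which is then upsampled back to the input resolution. The toy model below (TinyFCN, a made-up name) is a minimal illustration, not any cited paper's architecture.

```python
# Minimal FCN-style segmenter: dense 1x1 classifier over backbone features,
# then bilinear upsampling to full resolution. Hypothetical illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(  # stride-4 feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.classifier(self.backbone(x))   # stride-4 score map
        return F.interpolate(logits, size=(h, w),    # back to input size
                             mode="bilinear", align_corners=False)

model = TinyFCN(num_classes=2)                # e.g. polyp vs. background
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 2, 256, 256])
```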
“…The goal of semantic segmentation is to predict the predefined class (or label) of each pixel, which is fundamental yet challenging in computer vision. Owing to its increasing importance, it is widely adopted in various applications using vision sensors, such as autonomous driving [1,2], 3D reconstruction [3], and medical image analysis [4,5]. In recent years, deep convolutional neural networks (DCNNs) have achieved significant performance improvements and have been the dominant solution for semantic segmentation.…”
Section: Introduction (mentioning); confidence: 99%
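The per-pixel formulation described in this excerpt can be made concrete: the network emits a C-channel logit map, training minimizes pixel-wise cross-entropy, and the predicted label map is the per-pixel argmax. A minimal sketch with made-up tensor sizes:

```python
# Per-pixel classification in one glance: pixel-wise cross-entropy loss and
# argmax decoding. Shapes here are arbitrary toy values.
import torch
import torch.nn.functional as F

C, H, W = 3, 4, 4
logits = torch.randn(1, C, H, W)          # network output: one score per class per pixel
target = torch.randint(0, C, (1, H, W))   # ground-truth class label per pixel

loss = F.cross_entropy(logits, target)    # averaged over all H*W pixels
pred = logits.argmax(dim=1)               # (1, H, W) predicted label map
print(loss.item(), pred.shape)
```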