2020
DOI: 10.1016/j.asoc.2020.106348
A Quantum-Inspired Self-Supervised Network model for automatic segmentation of brain MR images

Cited by 34 publications (62 citation statements)
References 25 publications
“…Table 1 reports the segmentation results of the proposed PQIS-Net alongside the ResNet50 [9] and 3D-UNet [10] models for three different tasks (infection, lung, and infection plus lung). It is evident from the experimental data provided in Table 1, and from the statistical significance test (KS test) [19] conducted on the results, that despite being a self-supervised network, the proposed PQIS-Net attains performance comparable to the pre-trained CNN models (ResNet50 [9] and 3D-UNet [10]) on the data set [30] under the four evaluation parameters (ACC, DS, PPV, SS). Table 2 presents the numerical results obtained using the proposed semi-supervised shallow neural network model, ResNet50 [9], 3D-UNet [10], Kang et al. [15], and Wang et al. [18] for COVID-19 detection on the Brazilian data set. Figure 6: PQIS-Net segmented lung CT slice #171 [30] with the three different masks.…”
Section: Results
confidence: 94%
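The four evaluation parameters named in the statement above (ACC, DS, PPV, SS) are standard overlap measures for binary segmentation masks. A minimal sketch of how they are typically computed from a predicted mask and a ground-truth mask follows; these are the textbook formulas, not the paper's own implementation:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Standard overlap metrics for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy (ACC)
    ds = 2 * tp / (2 * tp + fp + fn)       # Dice score (DS)
    ppv = tp / (tp + fp)                   # positive predictive value (PPV)
    ss = tp / (tp + fn)                    # sensitivity (SS)
    return acc, ds, ppv, ss

# Toy example on 2x3 masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
acc, ds, ppv, ss = segmentation_metrics(pred, truth)
```

The KS test mentioned in the statement would then be run on per-slice distributions of such scores to check whether the differences between models are statistically significant.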
“…The quantum versions of the classical self-supervised neural network architectures [29]–[32] offer a potential route to faster and more efficient image segmentation and can surpass their classical counterparts. Konar et al. recently developed quantum-inspired neural network models, referred to as QIS-Net [4] and QIBDS-Net [5], suitable for brain MR image segmentation. These networks have been found to attain promising outcomes in complete brain tumor segmentation.…”
Section: Related Work
confidence: 99%
“…The generalized form of Vox-QSig is obtained by leveraging the activation function hyper-parameters employed in Equation 19, owing to the wide variations of gray levels. Inspired by the authors' previous works [4], [22], [35], [36], the proposed Vox-QSig activation function employs four different adaptive thresholding schemes suitable for efficient gray-scale segmentation in the 3D-QNet architecture. In addition, a number of optimal thresholds [37] are explored.…”
Section: A Quantum-Inspired Self-Supervised Tensor Network Model
confidence: 99%
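The multi-level sigmoidal idea behind an activation such as Vox-QSig can be illustrated with a generic sum of shifted sigmoids that maps gray levels onto graded output classes. The evenly spaced thresholds and the steepness constant below are illustrative placeholders, not the adaptive thresholding schemes of Equation 19 in the cited work:

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, steepness=10.0):
    """Generic multi-level sigmoid: maps inputs in [0, 1] onto `levels`
    graded output classes via a sum of shifted logistic functions.
    A hypothetical stand-in for an adaptive activation like Vox-QSig."""
    # Evenly spaced thresholds; an adaptive scheme would instead derive
    # these from the local gray-level distribution of the image.
    thresholds = (np.arange(1, levels) - 0.5) / (levels - 1)
    y = np.zeros_like(x, dtype=float)
    for t in thresholds:
        y += 1.0 / (1.0 + np.exp(-steepness * (x - t)))
    return y / (levels - 1)  # normalize back to [0, 1]

# Graded response over the gray-level range
xs = np.linspace(0.0, 1.0, 11)
ys = multilevel_sigmoid(xs)
```

Because each shifted sigmoid is strictly increasing, the composite response is monotone but forms plateaus between thresholds, which is what yields multi-level (rather than binary) thresholding behavior.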
“…Moreover, QINN models fall short of a multi-level activation function and consequently fail to produce optimal thresholding for multi-level images. Recently, quantum-inspired neural networks referred to as QIBDS-Net and QIS-Net were suggested by Konar et al. [16,17] for fully automated brain MR image segmentation. QIBDS-Net [16] and QIS-Net [17] fail to provide optimal outcomes owing to the wide variation of gray scales in brain MR images and often suffer from convergence problems.…”
Section: Introduction
confidence: 99%