2020
DOI: 10.1109/access.2020.2980266

Convolutional Neural Networks With Intermediate Loss for 3D Super-Resolution of CT and MRI Scans

Abstract: Computed Tomography (CT) scanners that are commonly used in hospitals and medical centers nowadays produce low-resolution images, e.g., one voxel in the image corresponds to at most one cubic millimeter of tissue. In order to accurately segment tumors and make treatment plans, radiologists and oncologists need CT scans of higher resolution. The same problem appears in Magnetic Resonance Imaging (MRI). In this paper, we propose an approach for the single-image super-resolution of 3D CT or MRI scans. Our method is…
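
The abstract is cut off before the method details. Purely as an illustration of the general idea named in the title (an auxiliary loss attached to an intermediate output of a super-resolution CNN), here is a minimal sketch assuming PyTorch; the architecture, layer sizes, and loss weight are illustrative placeholders, not the authors' exact design:

```python
# Minimal sketch (PyTorch, assumed): a small SR network whose middle block emits
# an auxiliary prediction that is supervised with an extra "intermediate" loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRWithIntermediateLoss(nn.Module):
    def __init__(self, scale: int = 2, channels: int = 32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Auxiliary branch: predicts the HR image from mid-level features.
        self.aux_out = nn.Conv2d(channels, 1, 3, padding=1)
        self.tail = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
        self.scale = scale

    def forward(self, lr):
        # Upsample first, then refine (pre-upsampling SR).
        up = F.interpolate(lr, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        feats = self.head(up)
        return self.aux_out(feats), self.tail(feats)  # intermediate, final

def training_step(model, lr, hr, aux_weight=0.5):
    inter, final = model(lr)
    # Total loss = final reconstruction loss + weighted intermediate loss.
    return F.l1_loss(final, hr) + aux_weight * F.l1_loss(inter, hr)
```
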

Cited by 51 publications (48 citation statements) | References 51 publications
“…At deeper convolutional layers, we typically observe more complex features, e.g., texture patterns and object parts, which are obtained by convolving lower-level features. Convolutional neural networks have achieved state-of-the-art results on a broad range of computer vision [3, 11, 50, 52, 53, 54, 55] and medical imaging [12, 19, 24] tasks. In many cases, their success is in large part due to the availability of models pre-trained on large-scale datasets, which are highly transferable to other tasks, requiring only some fine-tuning.…”
Section: Methods (mentioning)
confidence: 99%
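
The excerpt above mentions fine-tuning models pre-trained on large-scale datasets. For concreteness, a minimal sketch of that step, assuming PyTorch/torchvision, an ImageNet-pretrained ResNet-18 backbone, and a hypothetical two-class target task:

```python
# Minimal sketch (PyTorch/torchvision, assumed): fine-tuning a CNN that was
# pre-trained on a large-scale dataset (ImageNet) for a new two-class task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the pre-trained feature extractor and train only the head.
for p in model.parameters():
    p.requires_grad = False

# Replace the final classifier with one sized for the target task (2 classes here).
model.fc = nn.Linear(model.fc.in_features, 2)
# The new head's parameters are freshly created, so they remain trainable;
# training then proceeds with a standard classification loss on the target data.
```
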
“…After the initial success of deep learning [10] in object recognition from images [3, 11], deep neural networks have been adopted for a broad range of tasks in medical imaging, ranging from cell segmentation [12] and cancer detection [13, 14, 15, 16, 17] to intracranial hemorrhage detection [5, 8, 18, 19, 20, 21, 22] and CT/MRI super-resolution [23, 24, 25, 26]. Since we address the task of intracranial hemorrhage detection, we consider related works that are focused on the same task as ours [5, 6, 7, 8, 18, 19, 20, 21, 22, 27, 28, 29, 30], as well as works that study intracranial hemorrhage segmentation [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44].…”
Section: Related Work (mentioning)
confidence: 99%
“…Now, deep learning has been widely used in medical image analysis and modeling [113]. Through this type of method, we can easily extract relevant regions from computed tomography or MRI data and reconstruct mesh data [114]. These technologies can be transferred to oral cavity modeling, which can greatly facilitate model construction in a virtual surgery system and quickly establish a personalized oral cavity model.…”
Section: Improvement of the Simulation of Force Feedback by Deep Learning (mentioning)
confidence: 99%
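
The region-extraction-and-mesh-reconstruction workflow mentioned in the excerpt above can be sketched roughly as follows, assuming NumPy and scikit-image; the file name and intensity threshold are hypothetical:

```python
# Minimal sketch (NumPy/scikit-image, assumed): extracting a region from a CT
# volume by thresholding and reconstructing a surface mesh with marching cubes.
import numpy as np
from skimage import measure

volume = np.load("ct_volume.npy")      # hypothetical (D, H, W) CT volume
mask = volume > 300                    # crude threshold for bone-like tissue (HU)

# Run marching cubes on the binary mask to obtain a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5)
# `verts` and `faces` define a surface mesh that can be exported or rendered.
```

In practice the thresholding step would be replaced by a learned segmentation model, as the cited works suggest.
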
“…The second strategy is a post-processing step applied after the reconstruction of the 3D volume. For this strategy, the SR techniques can either be applied on the 2D slices of the volume [59, 53, 24], or on the 3D volume itself [47, 28, 23, 19]. Both of these approaches are pipelined frameworks, where errors can accumulate between the CT reconstruction stage and the super-resolution stage, so that the final volume may actually be inconsistent with the original projection data, which may not be acceptable in some situations.…”
Section: Related Work (mentioning)
confidence: 99%
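
To make the two post-processing options in the excerpt above concrete, a minimal sketch assuming PyTorch; `sr_model_2d` and `sr_model_3d` are hypothetical trained super-resolution networks:

```python
# Minimal sketch (PyTorch, assumed): applying super-resolution as a
# post-processing step, either slice-wise in 2D or directly on the 3D volume.
import torch

def upscale_slicewise(volume, sr_model_2d):
    # volume: (D, H, W) tensor; each axial slice is super-resolved independently,
    # so inter-slice context is ignored.
    slices = [sr_model_2d(s[None, None]) for s in volume]   # (1, 1, H', W') each
    return torch.cat(slices, dim=0).squeeze(1)              # (D, H', W')

def upscale_volumetric(volume, sr_model_3d):
    # The whole volume is super-resolved at once, preserving inter-slice context
    # at the cost of higher memory use.
    return sr_model_3d(volume[None, None]).squeeze(0).squeeze(0)  # (D', H', W')
```

Either way, as the excerpt notes, the super-resolution stage operates on an already reconstructed volume, so its output is not constrained to stay consistent with the original projection data.
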