2021
DOI: 10.1117/1.jmi.8.5.052104
Deep-learning-based direct synthesis of low-energy virtual monoenergetic images with multi-energy CT

Cited by 7 publications (9 citation statements)
References 31 publications
“…This method is based on a blind source separation variant of multivoltage projections with a stepped voltage potential scan. Compared with multienergy CT methods that require photon-counting detectors, such as the methods of references [25] , [26] , [27] , [29] , [30] , [33] , this method requires no hardware changes in traditional CT systems except voltage control. Compared with most multienergy CT methods based on traditional CT systems, the proposed method has four advantages.…”
Section: Discussion
confidence: 99%
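For orientation only: blind source separation of multi-voltage projections is commonly posed as a matrix factorization problem. The sketch below uses a generic nonnegative matrix factorization (NMF) as a stand-in; the citing paper's actual BSS variant, the variable names, and the dimensions are all assumptions, not taken from the excerpt.

```python
# Generic illustration: separating multi-voltage projection data into
# material/source basis components via NMF. Assumed stand-in for the BSS
# variant mentioned in the excerpt; dimensions and component count are arbitrary.
import numpy as np
from sklearn.decomposition import NMF

n_voltages, n_rays = 8, 4096            # stepped tube-voltage scan: one projection row per kVp
P = np.random.rand(n_voltages, n_rays)  # stand-in for measured (nonnegative) projections

# Factor P ~ W @ H: W holds per-voltage mixing weights, H the separated source sinograms.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(P)              # shape (n_voltages, 2)
H = model.components_                   # shape (2, n_rays)
```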
“…Xiaochuan Wu et al. improved fully convolutional DenseNets for multimaterial decomposition [29]. Hao Gong et al. developed a deep-learning method for the direct synthesis of low-energy virtual monoenergetic images [30]. Weiwen Wu et al. developed a U-net approach for image reconstruction with -norm, total variation, residual learning, and anisotropic adaptation [31].…”
Section: Introduction
confidence: 99%
“…$B_{CNN,k,n}$ denotes the pixel-wise binary material-specific mask generated from the classification branch. $L_{IGC}$ denotes the image gradient correlation based regularizer 28,31 that ensured edge consistency in both the material domain and the DECT image domain, $\nabla(\cdot)$ denoted the anisotropic form of the image gradient, $\rho(\cdot)$ denoted the Pearson correlation, and $\varepsilon$ was a small constant (fixed at $1.0 \times 10^{-4}$). $L_{Feat}$ is the feature reconstruction loss 33 that ensured high-level texture consistency between the mixed images $I_{CNN,mix}$ and $I_{DECT,mix}$ by matching the high-level features extracted from the $l$th convolutional layer (empirically fixed at the 15th layer) of a pretrained VGG-19 neural network $\phi(\cdot)$; $I_{CNN,k,mix}$ and $I_{DECT,k,mix}$ were the mixed images from $I_{CNN}$ and $I_{DECT}$…”
Section: Methods
confidence: 99%
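The two loss terms quoted above, an image-gradient-correlation (IGC) regularizer and a VGG-19 feature reconstruction (perceptual) loss, can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the function names and the finite-difference gradient discretization are illustrative and not taken from the paper's code; only the overall structure (Pearson correlation of anisotropic gradients with $\varepsilon = 10^{-4}$, and feature matching against a fixed VGG-19 layer) follows the excerpt.

```python
# Minimal sketch of the two quoted loss terms. Assumptions: PyTorch,
# illustrative function names; not the paper's actual implementation.
import torch
import torch.nn.functional as F

def image_gradients(x: torch.Tensor):
    """Anisotropic image gradient: separate horizontal/vertical finite differences."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def pearson_corr(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Pearson correlation of two tensors; eps matches the excerpt's 1.0e-4 constant."""
    a = a.flatten() - a.mean()
    b = b.flatten() - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def igc_regularizer(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """IGC loss: penalize low gradient correlation so edges in the CNN output
    stay consistent with edges in the reference DECT images."""
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    corr = 0.5 * (pearson_corr(pdx, tdx) + pearson_corr(pdy, tdy))
    return 1.0 - corr

def feature_reconstruction_loss(vgg_features, pred_mix, target_mix) -> torch.Tensor:
    """Perceptual loss: match activations of a fixed VGG-19 layer. vgg_features
    could be built, e.g., as
        torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:16].eval()
    (the exact slice index is an assumption; the excerpt fixes the 15th conv layer)."""
    return F.mse_loss(vgg_features(pred_mix), vgg_features(target_mix))
```

In training, these terms would presumably be weighted and summed with the primary synthesis loss; the weighting factors are not given in the excerpt.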
“…Hyperparameters were all empirically determined from prior studies. Briefly, the setup of the stem CNN was determined using our prior setup in references 28,31, and the setup of the bifurcated branches was determined in our recent study. 35 The hyperparameters (e.g., weighting factors) in the loss function were also set up similarly.…”
Section: CNN Training and Inference
confidence: 99%