2017
DOI: 10.1186/s40679-017-0040-7
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

Abstract: Background: Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific…

Cited by 31 publications (17 citation statements)
References 36 publications
“…As an example, processing a reconstructed image of 1024 × 1024 pixels with a 100-layer MS-D network takes around 200 ms with an NVidia GTX 1080 GPU (NVidia, Santa Clara, CA, USA). Since MS-D networks are able to automatically adapt to each problem, we are able to use the same network hyperparameters in all experiments: each network is 100 layers deep (excluding the input and output layer), and we use equally distributed dilations d_i ∈ [1, 10] by setting the dilation of layer i to d_i = 1 + (i mod 10). The resulting MS-D network has around 46 thousand trainable parameters, which are initialized in the same way as described in [22].…”
Section: Setup
confidence: 99%
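The dilation schedule quoted above can be sketched directly. The helper names below (`msd_dilations`, `approx_param_count`) are hypothetical, and the parameter estimate rests on an assumption not stated in the quote: that each intermediate MS-D layer applies one 3 × 3 kernel (plus a bias) to every preceding feature map, with a final 1 × 1 output layer over all channels.

```python
def msd_dilations(num_layers=100, max_dilation=10):
    """Per-layer dilations d_i = 1 + (i mod 10), cycling through 1..10."""
    return [1 + (i % max_dilation) for i in range(num_layers)]


def approx_param_count(num_layers=100, in_channels=1, out_channels=1):
    """Rough trainable-parameter count under the assumed architecture:
    layer i sees the input plus the i-1 earlier feature maps, and applies
    a 3x3 kernel (9 weights) per input channel, plus one bias."""
    params = 0
    for i in range(1, num_layers + 1):
        params += 9 * (in_channels + i - 1) + 1
    # final 1x1 output layer over the input and all 100 feature maps
    params += out_channels * (in_channels + num_layers) + out_channels
    return params


print(msd_dilations()[:12])  # [1, 2, ..., 10, 1, 2]
print(approx_param_count())  # ~46 thousand, in line with the quote
```

Under these assumptions the count lands near 46 k, consistent with the figure reported in the citing paper, but the exact architecture may differ in detail.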
“…In problems with limited data, direct algorithms often produce reconstructions with insufficient quality for further analysis [9]. Iterative algorithms are typically able to produce more accurate results for limited data [9], but their high computational costs can prohibit their application in practice [10]. Furthermore, the type of prior knowledge that is exploited by an iterative algorithm limits the type of objects the algorithm can be successfully applied to.…”
Section: Introduction
confidence: 99%
“…MemXCT is a highly optimized reconstruction engine for large-scale tomography datasets [10]. In this work, we extended our efficient stream reconstruction data analysis pipeline [8,54,55] with denoising capabilities [6,56].…”
Section: Related Work
confidence: 99%
“…Extensive prior research has shown that MBIR provides higher image quality than other methods [21,26,27]. In addition, MBIR requires only one fifth of the typical X-ray dose, and thus greatly reduces data acquisition time [2]. Because of these benefits, MBIR has great potential to be adopted in next-generation imaging systems [16].…”
Section: Introduction
confidence: 99%
“…However, MBIR's improved image quality requires several orders of magnitude more computation compared to traditional methods [30,33]. Therefore, MBIR is considered impractical for many applications [2,16].…”
Section: Introduction
confidence: 99%