2019 IEEE/ACM 1st Annual Workshop on Large-Scale Experiment-in-the-Loop Computing (XLOOP)
DOI: 10.1109/xloop49562.2019.00007

Scientific Image Restoration Anywhere

Abstract: The use of deep learning models within scientific experimental facilities frequently requires low-latency inference, so that, for example, quality control operations can be performed while data are being collected. Edge computing devices can be useful in this context, as their low cost and compact form factor permit them to be co-located with the experimental apparatus. Can such devices, with their limited resources, perform neural network feed-forward computations efficiently and effectively? We explore t…
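The question the abstract poses, whether a resource-limited edge device can serve feed-forward inference at low latency, comes down in practice to loading a compiled model and timing single-frame invocations. Below is a minimal sketch using the TensorFlow Lite interpreter; the model file name is a hypothetical placeholder, and on a Coral Edge TPU one would additionally load the libedgetpu delegate.

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical model file; any image-restoration network exported to TFLite
# would do. For a Coral Edge TPU, pass
# experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")].
interpreter = tf.lite.Interpreter(model_path="restoration_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One frame shaped to the model's expected input (assumed float32 NHWC).
frame = np.random.rand(*inp["shape"]).astype(np.float32)

start = time.perf_counter()
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
restored = interpreter.get_tensor(out["index"])
print(f"inference took {(time.perf_counter() - start) * 1e3:.1f} ms")
```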

Cited by 15 publications (9 citation statements). References 32 publications.
“…While the brightness increases with these new accelerator upgrades will be significant, these new sources will not reach their full potential if limited to using detectors of the type available today, or those presently seen on the horizon. In addition, application scientists have increasingly turned to AI to analyze data [4,5,6,7,8]. AI-accelerated workflows have been shown not only to be fast enough to keep up with experiments, but also to overcome experimental restrictions of conventional methods.…”
Section: Timeliness or Maturity
confidence: 99%
“…First, TomoGAN uses a GAN architecture, with an adversarial loss and a pre-trained VGG [42] network, to help train the generator; results presented below suggest that the adversarial loss avoids artifacts. Second, our generator, although also based on U-Net, has three U-Net boxes, as shown in Figure 3, instead of the four in FBPConvNet, and no batch normalization layer [20], two factors that reduce computation and memory needs for inference (e.g., TomoGAN can run efficiently on a Google Edge TPU [62]). For example, with one NVIDIA Tesla V100 card and a batch size of eight (minimizing DL framework overheads), TomoGAN and FBPConvNet take an average of 30 ms and 90 ms, respectively, to process one 1024×1024 image: TomoGAN is three times faster.…”
Section: Comparison With Other Solutions on Experimental Datasets
confidence: 99%
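To make the architectural point in that statement concrete, here is a minimal sketch of a three-level U-Net-style generator with skip connections and no batch normalization, in the spirit of the TomoGAN description above; the filter counts, input shape, and layer choices are illustrative assumptions, not TomoGAN's published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions; deliberately no batch normalization,
    # which trims inference-time compute and memory.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_generator(input_shape=(1024, 1024, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    e1 = conv_block(inputs, 32)                       # level 1
    e2 = conv_block(layers.MaxPooling2D()(e1), 64)    # level 2
    b  = conv_block(layers.MaxPooling2D()(e2), 128)   # level 3 (bottleneck)
    # Decoder with skip connections back to the encoder levels.
    d2 = conv_block(layers.Concatenate()([layers.UpSampling2D()(b), e2]), 64)
    d1 = conv_block(layers.Concatenate()([layers.UpSampling2D()(d2), e1]), 32)
    return tf.keras.Model(inputs, layers.Conv2D(1, 1)(d1))

model = build_generator()
model.summary()
```

Using three down-sampling levels rather than four roughly halves the depth of the feature hierarchy, which is what makes the shallower network cheap enough for edge inference.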
“…MemXCT is a highly optimized reconstruction engine for large-scale tomography datasets [10]. In this work, we extended our efficient stream reconstruction data analysis pipeline [8,54,55] with denoising capabilities [6,56].…”
Section: Related Work
confidence: 99%
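As an illustration of what extending a stream reconstruction pipeline with a denoising stage can look like, here is a minimal producer-consumer sketch; the reconstruct and denoise functions are identity stand-ins, not MemXCT or the cited denoiser, and the queue size is an arbitrary choice.

```python
import queue
import threading
import numpy as np

def reconstruct(sinogram):
    # Stand-in for a MemXCT-style reconstruction step; identity here.
    return sinogram

def denoise(image):
    # Stand-in for a learned denoiser (e.g., a TomoGAN-style model).
    return image

def run_pipeline(stream):
    q = queue.Queue(maxsize=4)  # bounded queue applies back-pressure

    def producer():
        for sinogram in stream:
            q.put(reconstruct(sinogram))
        q.put(None)  # sentinel marks end of stream

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while (item := q.get()) is not None:
        results.append(denoise(item))
    return results

frames = (np.random.rand(64, 64).astype(np.float32) for _ in range(8))
print(len(run_pipeline(frames)), "frames denoised")
```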