2019
DOI: 10.1109/tvcg.2018.2796085
Deep-Learning-Assisted Volume Visualization

Abstract: Designing volume visualizations showing various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identification of objects in large image collections. Whereas such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visua…

Cited by 32 publications (26 citation statements)
References 41 publications
“…For IVR, deep learning can help analyze geometric primitives obtained from volume data from multiple perspectives. For example, Cheng et al. (2018) proposed a CNN-based model to derive characteristic feature vectors for voxels and then utilize them to generate a binarized volume for the Marching Cubes algorithm. Han et al. (2018) utilized a 3D CNN-based autoencoder model to learn dense representations of stream surfaces and lines, and then used projection to assist with selection and clustering analysis.…”
Section: Deep Learning For Volume Visualization (mentioning)
confidence: 99%
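As a rough illustration of the pipeline described in the statement above (per-voxel scores from a learned model, binarized, then surfaced with Marching Cubes), the following sketch uses scikit-image. The threshold and the random scores standing in for CNN output are assumptions for illustration, not the setup of Cheng et al. (2018).

import numpy as np
from skimage import measure

def binarize_and_extract_surface(voxel_scores, threshold=0.5):
    # voxel_scores: (D, H, W) array of per-voxel scores from a learned model.
    binary_volume = (voxel_scores > threshold).astype(np.float32)
    # Marching Cubes extracts an isosurface from the binarized volume.
    verts, faces, normals, values = measure.marching_cubes(binary_volume, level=0.5)
    return verts, faces

# Random scores stand in here for the CNN-derived per-voxel output.
scores = np.random.rand(64, 64, 64)
verts, faces = binarize_and_extract_surface(scores)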
“…Convolutional Neural Networks (CNN) [52] and Generative Adversarial Nets (GAN) [53] have been employed to enhance TF manipulations. Cheng et al. [4] used a CNN for the volume visualization of complex high-dimensional information in their TF study, where they divided the data into 65x65 patches and trained the CNN with the ADADELTA solver [54]. 200-dimensional features are extracted, and the volume is visualized with the conventional Marching Cubes algorithm for specific high-dimensional feature values.…”
Section: Related Work (mentioning)
confidence: 99%
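A hedged sketch of what training such a patch-based CNN might look like in PyTorch. Only the 65x65 patch size, the 200-dimensional feature vector, and the Adadelta optimizer come from the statement above; the architecture, labels, and loss are illustrative assumptions.

import torch
import torch.nn as nn

class PatchNet(nn.Module):
    # Maps a 65x65 patch to a 200-dimensional feature vector plus a class score.
    def __init__(self, feature_dim=200, n_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feature_dim),
        )
        self.head = nn.Linear(feature_dim, n_classes)

    def forward(self, x):
        feats = self.backbone(x)          # 200-dimensional features
        return feats, self.head(feats)

model = PatchNet()
optimizer = torch.optim.Adadelta(model.parameters())   # the ADADELTA solver
criterion = nn.CrossEntropyLoss()

patches = torch.randn(16, 1, 65, 65)     # dummy batch of 65x65 patches
labels = torch.randint(0, 2, (16,))      # dummy labels (assumed supervision)
feats, logits = model(patches)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()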
“…Up-to-date deep learning techniques have been applied to many research fields as a means of generating models by combining neural network architectures with data. Several volume rendering techniques employ deep learning, including volume segmentation [4], [5], viewpoint estimation [6], transfer functions [7], lighting [8], and quality improvement [9]. These studies utilize Convolutional Neural Networks (CNN) and Generative Adversarial Nets (GAN).…”
Section: Introduction (mentioning)
confidence: 99%
“…HDA is trained with one configuration A(t) at a time; thus, to address that concern, we generalize the HDA formulation given in Eqns. (7), (8), and (9). Input to the ANN of the HDA propagates through each neuron and an activation function at each layer of the ANN as shown in Fig.…”
Section: Hadamard Deep Autoencoder (mentioning)
confidence: 99%
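For context, the generic layer-wise feed-forward rule the statement alludes to can be written as follows; this is only the standard form, since the specific HDA formulation in Eqns. (7), (8), and (9) of the citing paper is not reproduced here:

a^(l) = f( W^(l) a^(l-1) + b^(l) ),  l = 1, ..., L,

where a^(0) is the input (here, one configuration A(t)), W^(l) and b^(l) are the weights and biases of layer l, and f is the activation function applied at each layer.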
“…We utilize a generalized version of deep autoencoders (DAs) to reconstruct fragmented trajectories. A deep autoencoder [8] is a type of artificial neural network (ANN) that has two main parts: an encoder that maps input data to a compressed version called the code, and a decoder that maps this code to the output [9]. DAs have several intermediate layers and can be customized to data of interest by adjusting features such as the number of layers, layer size, code size, etc.…”
Section: Introduction (mentioning)
confidence: 99%
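A minimal sketch of the encoder/code/decoder structure described above. The layer sizes, code dimension, and MSE reconstruction loss are illustrative assumptions, not the configuration used in the cited work.

import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    # The encoder compresses the input to a low-dimensional "code";
    # the decoder maps the code back to a reconstruction of the input.
    def __init__(self, input_dim=256, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = DeepAutoencoder()
x = torch.randn(4, 256)                           # dummy flattened trajectories
reconstruction, code = model(x)
loss = nn.functional.mse_loss(reconstruction, x)  # reconstruction loss (assumed)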