2004
DOI: 10.1002/cav.1

A framework for fusion methods and rendering techniques of multimodal volume data

Abstract: Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on rendering simultaneously various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine in which the visual integration of multiple modalities allows a better comprehension of the anato…

Cited by 18 publications (11 citation statements)
References 25 publications
“…Ferre et al. discussed strategies for visualizing multi‐modal volume datasets using direct multi‐modal volume rendering (DMVR), determining at which steps of the rendering pipeline data fusion must be performed in order to accomplish the desired visual integration [FPT04]. Furthermore, they stated requirements and proposed five rendering methods that differ in the step of the rendering pipeline at which the fusion is performed: property, property and gradient, material, shading, and colour fusion.…”
Section: Rendering and Interaction Techniques For Multi‐modal Data VI
confidence: 99%
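The distinction between these pipeline steps can be made concrete with a minimal sketch contrasting the two extremes: fusion at the property step (blend raw scalars, then classify once) versus fusion at the colour step (classify each modality separately, then blend the RGBA results). This is an illustrative sketch, not the paper's implementation; the blending weight `w`, the toy grayscale LUT, and the CT/PET sample values are all hypothetical.

```python
import numpy as np

def transfer_function(value, lut):
    """Classify a scalar property into (R, G, B, A) via a lookup table."""
    return lut[int(np.clip(value, 0, len(lut) - 1))]

def property_fusion(v_a, v_b, w=0.5):
    """Fuse at the property step: blend the raw scalar values,
    then classify the single fused value."""
    return w * v_a + (1.0 - w) * v_b

def colour_fusion(rgba_a, rgba_b, w=0.5):
    """Fuse at the colour step: classify each modality separately,
    then blend the resulting RGBA samples."""
    return w * np.asarray(rgba_a) + (1.0 - w) * np.asarray(rgba_b)

# Two co-registered samples, e.g. CT density and PET activity (0..255).
ct, pet = 120.0, 200.0
lut = np.linspace([0, 0, 0, 0], [1, 1, 1, 1], 256)  # toy grayscale ramp

fused_early = transfer_function(property_fusion(ct, pet), lut)  # one classification
fused_late = colour_fusion(transfer_function(ct, lut),
                           transfer_function(pet, lut))         # two classifications
```

With a linear LUT the two orderings coincide, but with a nonlinear transfer function they generally produce different images, which is why the choice of fusion step matters.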
“…In the first case, the property can be selected by a user-defined criterion, as proposed by Burns et al. [5] and Brecheisen et al. [6], or by an automatic method, such as the one introduced by [3]. In the second case, the fusion can occur at different levels of the volume rendering pipeline [1], [7]. Cai and Sakas [1] defined three levels: image-level intermixing, in which two rendered images are merged; accumulation-level intermixing, in which sample values are calculated in each volume along a ray and their visual contributions are mixed; and illumination-model-level intermixing, in which opacity and intensity are computed at each sampling point directly from a multi-volume illumination model.…”
Section: A Multimodal Volume Rendering
confidence: 99%
“…For better differentiation between tissues and blood flow profiles in CF, RGB lookup tables (e.g., Fig. 3) are first utilized to assign grayscale and hue values to B-mode and power Doppler data, respectively, prior to compositing [16,22]. Therefore, the output intensity and opacity for CF (i.e., C_out,RGB,CF and α_out,CF) are given by…”
Section: Multi-volume Rendering Algorithms
confidence: 99%
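The two-lookup-table scheme described in this snippet can be illustrated with a small sketch: a grayscale LUT for B-mode tissue, a hue-based LUT for power Doppler, and a per-sample rule that shows flow colour where Doppler power is significant. The hue ramp, the threshold, and the `fuse_sample` rule are hypothetical stand-ins; clinical systems use vendor-specific tables and compositing equations.

```python
import colorsys
import numpy as np

def bmode_lut(value):
    """B-mode lookup: grayscale ramp for tissue (value in 0..1)."""
    return np.array([value, value, value])

def doppler_lut(value):
    """Power Doppler lookup: map flow power to a red-to-orange hue ramp
    (hypothetical; real colour-flow maps are vendor-specific)."""
    hue = 0.1 * value  # 0.0 = red, shifting toward orange with power
    return np.array(colorsys.hsv_to_rgb(hue, 1.0, value))

def fuse_sample(b, p, threshold=0.2):
    """Colour-flow style fusion at one sample: show the Doppler hue where
    flow power p exceeds a threshold, tissue grayscale elsewhere."""
    return doppler_lut(p) if p > threshold else bmode_lut(b)

tissue = fuse_sample(0.5, 0.0)  # below threshold: grayscale tissue
flow = fuse_sample(0.5, 0.9)    # strong flow: reddish Doppler hue
```

After this per-sample colour assignment, the resulting RGBA samples would be composited along the ray as in any direct volume renderer.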
“…Multi-volume rendering has been used for merging anatomical images (e.g., computed tomography (CT)) with functional images (e.g., positron emission tomography (PET)) [13][14][15][16]. Similar multi-volume rendering techniques can be applied for blending B-mode and power Doppler data.…”
Section: Introduction
confidence: 99%