Note: Also to appear in the Dagstuhl 2012 SciVis book by Springer. Please cite this paper with its arXiv citation information. Abstract Ultrasound is one of the most frequently used imaging modalities in medicine. Its high spatial resolution, interactive nature, and non-invasiveness make it the first choice in many examinations. Image interpretation is one of ultrasound's main challenges, and much training is required to reach a confident skill level in ultrasound-based diagnostics. State-of-the-art graphics techniques are needed to provide meaningful visualizations of ultrasound in real time. In this paper we present the processing pipeline for ultrasound visualization, including an overview of the tasks performed in each of its steps. To provide insight into the trends of ultrasound visualization research, we have selected a set of significant publications and organized them into a technique-based taxonomy covering the topics of pre-processing, segmentation, registration, rendering, and augmented reality. For each technique type, we discuss the differences between ultrasound-specific techniques and techniques developed for other modalities.
Visualization of volumetric multicomponent data sets is a high-dimensional problem, especially for color data. Medical 3D ultrasound (US) technology has rapidly advanced during the last few decades, and scanners can now generate joint 3D scans of tissues (B-mode) and blood flow (power or color Doppler) in real time. Renderings of such data sets have to comprehensively convey both the relevant structures of the tissues that form the context for blood flow and the distribution of blood flow itself. The narrow field of view in US data, often used to make real-time imaging possible, complicates volume exploration since only parts of organs are usually visible; that is, clearly defined anatomical landmarks are scarce. In addition, the noisy nature and low signal-to-contrast ratio of US data make effective visualization a challenge, and there are currently no convincing solutions for rendering combined US B-mode and color Doppler data. Therefore, displaying 2D slices out of the 3D data is still often the preferred visualization method. We present new combinations of photorealistic and nonphotorealistic rendering strategies for combined visualization of B-mode and color Doppler data, which are straightforward to implement, flexible, and suited to a wide range of US applications.
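The core difficulty described above is that tissue context (B-mode) and flow (color Doppler) must share one composited image. As a minimal illustration of this idea only (not the authors' actual renderer), the sketch below composites a single ray front to back, mapping B-mode samples to a semi-transparent gray context and letting Doppler samples override the color with a more opaque red/blue flow encoding; the opacity parameters are hypothetical.

```python
import numpy as np

def composite_ray(bmode, doppler, step_alpha=0.05, doppler_boost=4.0):
    """Front-to-back alpha compositing of one ray through co-registered
    B-mode (echo intensity in [0, 1]) and color Doppler samples
    (signed flow velocity in [-1, 1]).

    B-mode maps to a semi-transparent gray context; where Doppler signal
    is present, the sample is colored red (flow toward the transducer)
    or blue (flow away) and given a boosted opacity so that flow stays
    visible inside the surrounding tissue.
    """
    color = np.zeros(3)
    alpha = 0.0
    for echo, flow in zip(bmode, doppler):
        if abs(flow) > 1e-3:
            # Doppler sample: hue encodes flow direction, opacity its magnitude
            c = np.array([1.0, 0.0, 0.0]) if flow > 0 else np.array([0.0, 0.0, 1.0])
            a = min(1.0, step_alpha * doppler_boost * abs(flow))
        else:
            # B-mode sample: gray tissue context, opacity scaled by echo strength
            c = np.array([echo, echo, echo])
            a = step_alpha * echo
        color += (1.0 - alpha) * a * c   # front-to-back "over" operator
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha
```

A full renderer would march one such ray per pixel through the reconstructed volume; the Doppler opacity boost is the knob that trades tissue context against flow visibility.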
In the present paper we propose a method for fast segmentation of ultrasound data. It is based on setting up a model from user input. We apply a matching scheme to obtain initial contours for 2D segmentation of several cross-sections of the organ by a discrete dynamic contour. Further, we set up an active image which drives the deformation of the dynamic contour. The active image comprises both information derived from the image data and spatial information derived from the initial contour. We design the active image according to user input and image quality to aid the segmentation task.
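To make the contour-deformation step concrete, here is a simplified sketch of a discrete-dynamic-contour relaxation, not the paper's exact formulation: each vertex moves under an internal force (smoothing toward its neighbors' midpoint) and an external force taken from the gradient of an active image; all weights and the iteration count are illustrative assumptions.

```python
import numpy as np

def deform_contour(vertices, active_image, n_iters=50,
                   w_internal=0.3, w_external=0.7, step=1.0):
    """Relax a closed 2D contour toward ridges of an active image.

    vertices:     (N, 2) array of (row, col) contour points
    active_image: 2D array whose high values attract the contour
    """
    gy, gx = np.gradient(active_image.astype(float))
    v = vertices.astype(float).copy()
    h, w = active_image.shape
    for _ in range(n_iters):
        # internal force: pull each vertex toward the midpoint of its neighbors
        internal = 0.5 * (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0)) - v
        # external force: active-image gradient sampled at each vertex
        r = np.clip(v[:, 0].astype(int), 0, h - 1)
        c = np.clip(v[:, 1].astype(int), 0, w - 1)
        external = np.stack([gy[r, c], gx[r, c]], axis=1)
        v += step * (w_internal * internal + w_external * external)
        v[:, 0] = np.clip(v[:, 0], 0, h - 1)   # keep vertices inside the image
        v[:, 1] = np.clip(v[:, 1], 0, w - 1)
    return v
```

In the method described above, the active image blends image-derived and contour-derived spatial terms; here a single 2D array stands in for that combination.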