Background Coronary artery angiography is an indispensable assistive technique for cardiac interventional surgery. Segmentation and extraction of blood vessels from coronary angiographic images or videos are essential prerequisites for physicians to locate, assess, and diagnose plaques and stenoses in blood vessels. Methods This article proposes a novel coronary artery segmentation framework that combines a three-dimensional (3D) convolutional input layer with a two-dimensional (2D) convolutional network. Instead of the single input image used in previous medical image segmentation applications, our framework accepts a sequence of coronary angiographic images as input and outputs the clearest segmentation mask. The 3D input layer leverages the temporal information in the image sequence and fuses the multiple images into more comprehensive 2D feature maps. The 2D convolutional network implements down-sampling encoders, up-sampling decoders, bottleneck modules, and skip connections to accomplish the segmentation task. Results The spatial-temporal model of this article obtains good segmentation results despite the poor quality of coronary angiographic video sequences, and it outperforms state-of-the-art techniques. Conclusions The results show that making full use of the spatial and temporal information in image sequences promotes the analysis and understanding of the images in videos.
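As a rough illustration of this spatial-temporal design, the sketch below fuses a short angiographic frame sequence with a 3D convolutional input layer and segments the resulting 2D feature maps with a small encoder-decoder. The layer sizes, sequence length, and single skip connection are our own illustrative assumptions, not the paper's exact architecture.

```python
# Minimal PyTorch sketch: a 3D convolutional input layer collapses a short
# image sequence into 2D feature maps, which a tiny 2D encoder-decoder then
# turns into a vessel mask. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2MaskNet(nn.Module):
    def __init__(self, seq_len: int = 5, base_ch: int = 16):
        super().__init__()
        # 3D input layer: convolve over (time, height, width), then drop
        # the temporal axis so the rest of the network is purely 2D.
        self.fuse3d = nn.Conv3d(1, base_ch, kernel_size=(seq_len, 3, 3),
                                padding=(0, 1, 1))
        self.enc = nn.Sequential(
            nn.Conv2d(base_ch, 2 * base_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * base_ch, base_ch, 2, stride=2),
            nn.ReLU(inplace=True))
        self.head = nn.Conv2d(2 * base_ch, 1, 1)  # skip-concat -> 1-channel mask

    def forward(self, x):                          # x: (B, 1, T, H, W)
        f = torch.relu(self.fuse3d(x)).squeeze(2)  # -> (B, C, H, W)
        d = self.dec(self.enc(f))
        out = self.head(torch.cat([f, d], dim=1))  # skip connection
        return torch.sigmoid(out)                  # per-pixel vessel probability

# Usage: a batch of two 5-frame angiographic sequences of size 256x256.
net = Seq2MaskNet()
mask = net(torch.randn(2, 1, 5, 256, 256))         # -> (2, 1, 256, 256)
```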
Background Coronary heart disease is one of the diseases with the highest mortality rates. Because of the important position of cardiovascular disease prevention and diagnosis in the medical field, the segmentation of cardiovascular images has gradually become a research hotspot. How to segment accurate blood vessels from coronary angiography videos to assist doctors in making accurate analyses has become the goal of our research. Method Based on the U-net architecture, we use a context-based convolutional network to capture more information about the vessels in the video. The proposed method includes three modules: the sequence encoder module, the sequence decoder module, and the sequence filter module. High-level feature information is extracted in the encoder module. Multi-kernel pooling layers suited to the extraction of blood vessels are added before the decoder module. In the filter block, we add a simple temporal filter to reduce inter-frame flicker. Results The performance comparison with other methods shows that our work achieves a sensitivity (Sen) of 0.8739 and an accuracy (Acc) of 0.9895. These results show that the accuracy of our method is significantly improved. The performance benefits from the algorithm architecture and our enlarged dataset. Conclusion Compared with previous methods that focus only on single-image analysis, our method obtains more coronary information through image sequences. In future work, we will extend the network to a 3D network.
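The sequence-filter idea can be pictured with a simple temporal smoother. The snippet below uses an exponential moving average over per-frame vessel probabilities; this is only one plausible choice of filter and is not claimed to match the paper's exact formulation.

```python
# Illustrative sketch of a temporal filter that suppresses inter-frame
# flicker: each frame's vessel probability map is blended with the running
# estimate before thresholding. The blending weight alpha is an assumption.
import numpy as np

def temporal_filter(prob_frames, alpha: float = 0.6):
    """prob_frames: iterable of (H, W) float arrays in [0, 1]."""
    smoothed, state = [], None
    for p in prob_frames:
        state = p if state is None else alpha * p + (1 - alpha) * state
        smoothed.append(state.copy())
    return smoothed

# Usage: threshold the smoothed probabilities to obtain stable binary masks.
frames = [np.random.rand(256, 256) for _ in range(10)]
masks = [(p > 0.5).astype(np.uint8) for p in temporal_filter(frames)]
```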
We present an automatic and robust technique for creating non-photorealistic rendering (NPR) and animation, starting from a video, that depicts the shape details and follows the motion of the underlying objects. We generate the NPR from the initial frame of the source video using a greedy algorithm for stroke placement and modeling, in combination with a saliency map and a flow-guided difference-of-Gaussian filter. Our stroke model uses a set of triangles whose vertices are particles and whose edges are springs. Using a physics-based framework, the generated and rendered strokes are translated, rotated, and deformed by forces exerted from the sequential frames. External forces acting on strokes are calculated from temporally and spatially smoothed per-pixel optical flow vectors. After simulating each frame, we delete unnecessary strokes and add new strokes for disappearing and appearing objects, but only when necessary, to avoid popping and scintillation. Our framework automatically generates coherent animation of the rendered strokes, preserving the appearance details and animating the strokes along with the underlying objects. This has been difficult to achieve with previous user-guided methods and with automatic but limited transformation methods.
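The particle-and-spring stroke model lends itself to a compact simulation step. The following sketch, with its own choices of spring stiffness, damping, and explicit Euler integration, shows how optical-flow forces and edge springs could move a single triangular stroke; it is an illustration, not the authors' implementation.

```python
# Rough sketch of the stroke simulation: each stroke is a triangle of
# particles connected by springs, optical flow exerts external forces on
# the particles, and an explicit Euler step advances them. The constants
# and the integration scheme are illustrative assumptions.
import numpy as np

def step_stroke(pos, vel, rest_len, flow, k=40.0, damping=0.9, dt=1/30):
    """pos, vel: (3, 2) particle positions/velocities; rest_len: (3,) spring
    rest lengths for edges (0-1, 1-2, 2-0); flow: (3, 2) sampled flow forces."""
    edges = [(0, 1), (1, 2), (2, 0)]
    force = flow.copy()                      # external force from optical flow
    for (i, j), L0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d) + 1e-8
        f = k * (L - L0) * d / L             # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)
    return pos + dt * vel, vel
```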
We present an automatic, efficient, and simple technique to create pencil drawing animation, starting from a video. We generate a pencil drawing from a source frame based on stroke modeling, which specifies the properties of strokes, in combination with layered lines produced by applying a flow-guided difference-of-Gaussian (DoG) filter to several layers. Generated pencil strokes are translated and rotated by forces exerted from the sequential frames using rigid body dynamics. Linear and angular forces acting on strokes are calculated from the temporally filtered per-pixel optical flow vectors. Our framework effectively generates coherent animation of pencil strokes while preserving the structured appearance of charcoal or pastel, which is difficult to achieve with previous abstraction-based non-photorealistic animation. Moreover, our stroke simulation step is suitable for animating different styles of strokes, such as oil painting and watercolor, and can be implemented efficiently to produce animation in a relatively inexpensive manner.
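For the rigid-body variant, a stroke can be advected by averaging the optical flow under it for translation and accumulating a flow-induced torque for rotation. The sketch below assumes a placeholder `flow_at` sampler, a fixed time step, and that the caller re-renders the stroke from the returned center and angle; it is meant only to illustrate the idea.

```python
# Minimal sketch of flow-driven rigid-body advection of a pencil stroke:
# the average flow under the stroke gives its translation, and the
# z-component of r x v (accumulated over sample points) gives its rotation.
# flow_at, the time step, and the stroke representation are assumptions.
import numpy as np

def advect_stroke(center, angle, points, flow_at, dt=1/24):
    """center: (2,) stroke centroid; angle: orientation in radians;
    points: (N, 2) stroke sample points; flow_at(p) -> (2,) flow vector."""
    flows = np.array([flow_at(p) for p in points])
    linear = flows.mean(axis=0)                          # translation term
    rel = points - center
    torque = np.mean(rel[:, 0] * flows[:, 1] - rel[:, 1] * flows[:, 0])
    inertia = np.mean((rel ** 2).sum(axis=1)) + 1e-8     # rotational inertia
    return center + dt * linear, angle + dt * torque / inertia
```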