This work presents a graphics processing unit (GPU)-based implementation of a fully 3-D PET iterative reconstruction code, FIRST (Fast Iterative Reconstruction Software for [PET] Tomography), which was developed by our group. We describe the main steps followed to convert the FIRST code (which can run on several CPUs using the message passing interface [MPI] protocol) into a code where the most time-consuming parts of the reconstruction process (forward and backward projection) are massively parallelized on a GPU. Our objective was to obtain a significant acceleration of the reconstruction without compromising the image quality or the flexibility of the CPU implementation. We therefore implemented a GPU version using an abstraction layer for the GPU, namely CUDA C. The code reconstructs images from sinogram data using the same system response matrix, obtained from Monte Carlo simulations, as the CPU version. Memory usage was optimized to ensure good performance on the GPU. The code was adapted for the VrPET small-animal PET scanner. The CUDA version is more than 70 times faster than the original code running on a single core of a high-end CPU, with no loss of accuracy.
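The forward/backward projection steps accelerated here are the core of iterative PET reconstruction. As an illustrative sketch only (not the FIRST or CUDA code), the following NumPy snippet shows one generic MLEM update with a dense system response matrix on a toy problem; the matrix `A`, sinogram `y`, and iteration count are invented for the example.

```python
import numpy as np

def mlem_update(x, A, y, eps=1e-12):
    """One MLEM update: x <- x / sens * A^T (y / (A x)).

    A is a (detector bins x voxels) system response matrix,
    y the measured sinogram, x the current image estimate.
    """
    proj = A @ x                       # forward projection
    ratio = y / np.maximum(proj, eps)  # measured / estimated sinogram
    sens = A.T @ np.ones_like(y)       # sensitivity image (normalization)
    return x * (A.T @ ratio) / np.maximum(sens, eps)

# Toy example: 3 detector bins, 2 voxels, noiseless data
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
x = np.ones(2)
for _ in range(200):
    x = mlem_update(x, A, y)
# x converges toward x_true
```

In a GPU implementation, the forward and backward projections are the pieces mapped onto thousands of threads; this dense-matrix sketch only illustrates the update rule itself.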
We present RERBEE (robust efficient registration via bifurcations and elongated elements), a novel feature-based registration algorithm able to correct local deformations in high-resolution ultra-wide field-of-view (UWFV) fluorescein angiogram (FA) sequences of the retina. The algorithm is able to cope with peripheral blurring, severe occlusions, the presence of retinal pathologies, and changes in image content as the fluorescein dye perfuses over time. We have used the computational power of a graphics processor to increase the performance of the most computationally expensive parts of the algorithm by a factor of over 1300, enabling the algorithm to register a pair of 3900 × 3072 UWFV FA images in 5-10 min instead of the 5-7 h required using only the CPU. We demonstrate accurate results on real data, with 267 image pairs from a total of 277 (96.4%) graded as correctly registered by a clinician and 10 (3.6%) graded as correctly registered with minor errors but usable for clinical purposes. Quantitative comparison with state-of-the-art intensity-based and feature-based registration methods using synthetic data is also reported. We also show some potential uses of a correctly aligned sequence for vein/artery discrimination and automatic lesion detection.
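A core step in feature-based retinal registration is estimating a geometric transform from matched landmarks such as vessel bifurcations. The sketch below fits a global affine transform by least squares on invented synthetic point matches; it is a simplified illustration only, not RERBEE itself, which additionally corrects local (non-global) deformations.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched feature coordinates
    (e.g. vessel bifurcations). Returns a 2x3 matrix M such
    that dst ~= src @ M[:, :2].T + M[:, 2].
    """
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T                               # shape (2, 3)

# Synthetic check: rotate + translate some points, recover the transform
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(20, 2))
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, -3.0])
moved = pts @ R.T + t
M = fit_affine(pts, moved)
warped = pts @ M[:, :2].T + M[:, 2]
```

With noiseless matches the recovered transform reproduces the target points to machine precision; with real bifurcation matches one would add outlier rejection and a local deformation model.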
The Max-Cut problem consists of finding a partition of the graph nodes into two subsets such that the sum of the weights of the edges having endpoints in different subsets is maximized. This problem, which is NP-hard for non-planar graphs, has applications in areas such as VLSI and ASIC design. This paper proposes an evolutionary hybrid algorithm based on low-level hybridization between Memetic Algorithms and Variable Neighborhood Search. The algorithm is tested and compared with results reported in the literature for other hybrid metaheuristics on the same problem. The experimental results show the suitability of the approach and that the proposed hybrid evolutionary algorithm finds near-optimal solutions. Moreover, on a set of standard test problems, new best known solutions were produced for several instances.
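The cut value and the 1-flip neighborhood typically explored by the local search component of such hybrids are easy to sketch. The following minimal illustration, on an invented toy graph, computes a cut value and greedily flips nodes while the cut improves; it is not the proposed memetic/VNS algorithm, only the basic move it builds on.

```python
def cut_value(weights, side):
    """Sum of the weights of edges crossing the partition."""
    return sum(w for (u, v), w in weights.items() if side[u] != side[v])

def local_search(weights, side):
    """Greedy 1-flip local search: move a node to the other side
    while doing so improves the cut (a basic neighborhood used
    inside memetic / variable neighborhood search hybrids)."""
    nodes = set()
    for (u, v) in weights:
        nodes.update((u, v))
    improved = True
    while improved:
        improved = False
        for u in nodes:
            # gain = change in cut value if u is flipped
            gain = 0
            for (a, b), w in weights.items():
                if u in (a, b):
                    other = b if a == u else a
                    gain += w if side[u] == side[other] else -w
            if gain > 0:
                side[u] = 1 - side[u]
                improved = True
    return side

# Toy triangle with unequal weights: the optimum cuts the two heavy edges
w = {(0, 1): 3, (1, 2): 2, (0, 2): 1}
side = local_search(w, {0: 0, 1: 0, 2: 0})
best = cut_value(w, side)  # optimum for this graph is 5
```

A 1-flip local optimum is not necessarily a global optimum in general; the hybrid metaheuristic escapes such optima by recombination and by changing neighborhoods.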
The linguistic description of a physical phenomenon is a summary of the available information in which certain relevant aspects are highlighted while other, irrelevant aspects remain hidden. This paper deals with the development of computational systems capable of generating linguistic descriptions from images captured by a video camera. The problem of linguistically labeling images in a database is a challenge where much work remains to be done. In this paper, we contribute to this field using a model of the observed phenomenon that allows us to interpret the content of images. We build the model by combining techniques from Computer Vision with ideas from Zadeh's Computational Theory of Perceptions. We include a practical application consisting of a computational system capable of providing a linguistic description of the behavior of traffic in a roundabout.
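One ingredient of systems in the spirit of the Computational Theory of Perceptions is mapping a measured quantity onto fuzzy linguistic labels. Below is a minimal sketch with an invented vocabulary and invented membership functions for a "traffic density" variable; the paper's actual model and vocabulary are not reproduced here.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical vocabulary (vehicles observed per minute)
labels = {
    "light":    lambda x: triangular(x, -1, 0, 10),
    "moderate": lambda x: triangular(x, 5, 15, 25),
    "heavy":    lambda x: triangular(x, 20, 35, 100),
}

def describe(x):
    """Return the label with the highest membership degree for x."""
    return max(labels, key=lambda name: labels[name](x))

desc = describe(18)  # memberships: light 0.0, moderate 0.7, heavy 0.0
```

A full linguistic description system would aggregate such labeled perceptions over time and over several variables into sentences, rather than emitting one label per measurement.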
This paper proposes a new approach to recognizing human actions in 2D sequences, based on real-time visual tracking and simple feature extraction of human activities in video sequences. The proposed method emphasizes the simplicity of the strategies used, in an attempt to describe human actions as precisely as other, more sophisticated (and more computationally demanding) methods in the literature. Specifically, we propose three complementary modules for the following: (a) tracking; (b) feature extraction; and (c) action recognition. The first module is based on the hybridization of a particle filter and a local search procedure and makes use of a reduced integral image to speed up the weight computation. The feature extraction module characterizes the silhouette of the tracked person by dividing it into rectangular boxes. Then, the system computes statistics on the evolution of these rectangular boxes over time. Finally, the action recognition module passes these statistics to a support vector machine to classify the actions. Experimental results show that the proposed method works in real time, and its performance is competitive with other state-of-the-art methods.
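The rectangular-box silhouette features of the second module can be sketched in a few lines. The NumPy snippet below is a hypothetical illustration that divides a binary silhouette's bounding box into a grid and returns per-cell occupancy fractions; the grid size and toy mask are invented, and the paper's method additionally tracks statistics of these boxes over time before classifying with an SVM.

```python
import numpy as np

def box_features(mask, rows=3, cols=2):
    """Divide the bounding box of a binary silhouette into a
    rows x cols grid and return the fraction of silhouette
    pixels in each cell as a flattened feature vector."""
    ys, xs = np.nonzero(mask)
    sub = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h_edges = np.linspace(0, sub.shape[0], rows + 1).astype(int)
    w_edges = np.linspace(0, sub.shape[1], cols + 1).astype(int)
    feats = []
    for i in range(rows):
        for j in range(cols):
            cell = sub[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            feats.append(cell.mean() if cell.size else 0.0)
    return np.array(feats)

# Toy L-shaped silhouette: left half filled, plus the bottom row
mask = np.zeros((6, 4), dtype=bool)
mask[:, :2] = True
mask[5, :] = True
f = box_features(mask, rows=3, cols=2)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.5]
```

Per-frame vectors like `f`, summarized over a temporal window, would form the statistics passed to the classifier.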