This paper introduces a new method for automatic quantification of subcutaneous, visceral, and non-visceral internal fat from MR images of the abdominal region acquired using the two-point Dixon technique. The method comprises (1) three-dimensional phase unwrapping to provide water and fat images, (2) correction of image intensity inhomogeneity, and (3) morphon-based registration and segmentation of the tissue. This is followed by integration of the corrected fat images within the different fat compartments, which avoids the partial volume effects associated with traditional fat segmentation methods. The method was tested on 18 subjects before and after a period of fast-food hyper-alimentation, showing high stability and performance in all analysis steps.
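The two-point Dixon technique mentioned above separates water and fat signals from in-phase and opposed-phase acquisitions. As a minimal illustration (not the paper's full pipeline, which also handles phase errors and intensity inhomogeneity), the basic separation is a sum and a difference:

```python
import numpy as np

def dixon_two_point(ip, op):
    """Basic two-point Dixon water/fat separation.

    ip : in-phase image      (water + fat)
    op : opposed-phase image (water - fat)

    Assumes phase errors have already been corrected, so the
    images can be combined directly. Returns (water, fat).
    """
    water = 0.5 * (ip + op)
    fat = 0.5 * (ip - op)
    return water, fat
```

In practice the opposed-phase image carries a phase error from field inhomogeneity, which is why the unwrapping and inhomogeneity-correction steps in the abstract precede this combination.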
This paper presents a novel phase unwrapping method for phase-sensitive reconstruction in MR imaging. The unwrapped phase is obtained by integrating the phase gradient through the solution of a Poisson equation. An efficient solver, which has been made publicly available, is used to solve the equation. The proposed method is demonstrated on a fat quantification MRI task that is part of a prospective study of fat accumulation, and is compared to a phase unwrapping method based on region growing. Results indicate that the proposed method provides more robust unwrapping. Unlike region growing methods, the proposed method is also straightforward to implement in 3D.
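A standard way to realize this idea in 2D (a generic least-squares unwrapper in the spirit of the abstract, not necessarily the authors' released solver) is to take wrapped finite differences of the measured phase as an estimate of the true phase gradient, form its divergence as the right-hand side of a Poisson equation, and solve with Neumann boundary conditions via the discrete cosine transform:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Map phase values into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_poisson(psi):
    """Least-squares phase unwrapping via a DCT-based Poisson solver.

    psi : 2D wrapped phase image (radians).
    Returns the unwrapped phase, determined up to a constant offset.
    """
    M, N = psi.shape
    # Wrapped forward differences estimate the true phase gradient.
    dx = wrap(np.diff(psi, axis=1))
    dy = wrap(np.diff(psi, axis=0))
    # Divergence of the gradient field = right-hand side of Poisson eq.
    rho = np.zeros_like(psi)
    rho[:, :-1] += dx
    rho[:, 1:] -= dx
    rho[:-1, :] += dy
    rho[1:, :] -= dy
    # Solve laplacian(phi) = rho with Neumann BCs: the DCT-II basis
    # diagonalizes this Laplacian, so the solve is a pointwise division.
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    denom = 2 * (np.cos(np.pi * m / M) + np.cos(np.pi * n / N) - 2)
    denom[0, 0] = 1.0          # DC term is undetermined ...
    phi = dctn(rho, norm='ortho') / denom
    phi[0, 0] = 0.0            # ... so pin the constant offset to zero
    return idctn(phi, norm='ortho')
```

The same construction extends directly to 3D by adding a third difference direction and a third cosine factor in the denominator, which is what makes the Poisson formulation attractive compared to region growing.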
We propose a two-stage method for the detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, when only evidence from one camera is used for detecting the fights, the recognition performance is poor.
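The second-stage fusion of detector outputs can be sketched as a simple late-fusion rule. This is an illustrative example only; the weights, pooling, and threshold here are placeholders, not the scheme evaluated in the paper:

```python
import numpy as np

def late_fusion(audio_score, video_scores, w_audio=0.5, w_video=0.5,
                threshold=0.5):
    """Illustrative late fusion of modality-specific detector scores.

    audio_score  : scalar confidence from the audio event detector in [0, 1]
    video_scores : per-camera confidences from the optical detectors in [0, 1]

    Pools video evidence across cameras, combines it with the audio
    evidence by a weighted sum, and thresholds the result to declare
    a fight event. Returns (decision, fused_score).
    """
    video = float(np.mean(video_scores))       # pool evidence across cameras
    fused = w_audio * audio_score + w_video * video
    return fused >= threshold, fused
```

A rule of this shape makes the abstract's observations plausible: dropping the audio term removes half the evidence, and with a single camera the video term rests on one noisy detector, so performance degrades.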