Background: This study proposes and validates a method for measuring 3D myocardial strain using a 3D cardiovascular magnetic resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM).
Methods: A 3D tagged MR sequence was first developed, and the parameters of the sequence and the 3D OFM were optimized using phantom images with simulated deformation. The method was then validated in vivo and used to quantify normal sheep left ventricular (LV) function.
Results: Optimizing the imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square (RMS) error between the estimated and known displacements in the x (RMSx = 0.62 pixels (0.43 mm)), y (RMSy = 0.64 pixels (0.45 mm)), and z (RMSz = 0.68 pixels (1 mm)) directions, respectively. In-vivo validation demonstrated excellent correlation between displacements measured by manually tracking tag intersections and those generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motion reduced in-plane tracking error by 51% compared with 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns agree with previous studies of LV function.
Conclusion: A novel method was developed to measure 3D LV wall deformation rapidly, with high in-plane and through-plane resolution, from a single 3D cine acquisition.
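The per-axis RMS error used to validate the displacement estimates can be computed as in the following minimal sketch. The synthetic arrays, the fixed random seed, and the 0.7 mm/pixel conversion factor are illustrative assumptions, not values from the study:

```python
import numpy as np

def rms_error(est, true):
    """Root-mean-square error between estimated and known displacement fields."""
    return float(np.sqrt(np.mean((est - true) ** 2)))

# Synthetic example: known and estimated per-voxel displacements (in pixels)
rng = np.random.default_rng(0)
true_dx = rng.uniform(-2, 2, size=(16, 16, 8))
est_dx = true_dx + rng.normal(0, 0.5, size=true_dx.shape)  # simulated estimation error

rms_x_pixels = rms_error(est_dx, true_dx)
# Convert pixels to mm using an assumed in-plane pixel size of 0.7 mm/pixel
rms_x_mm = rms_x_pixels * 0.7
```

The same function would be applied separately to the y and z displacement components, with the z conversion using the (coarser) through-plane voxel size.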
Abstract. We investigate the influence of a shifting environment on the spreading of an invasive species through a model given by the diffusive logistic equation with a free boundary. When the environment is homogeneous and favourable, this model was first studied in Du and Lin [12], where a spreading-vanishing dichotomy was established for the long-time dynamics of the species; when spreading happens, the species was shown to invade the new territory at a uniquely determined asymptotic speed c_0 > 0. Here we consider the situation in which part of such an environment becomes unfavourable, and the unfavourable range of the environment moves into the favourable part with speed c > 0. We prove that when c ≥ c_0, the species always dies out in the long run, but when 0 < c < c_0, the long-time behaviour of the species is determined by a trichotomy described by (a) vanishing, (b) borderline spreading, or (c) spreading. If the initial population is written in the form u_0(x) = σφ(x), with φ fixed and σ > 0 a parameter, then there exists σ_0 > 0 such that vanishing happens when σ ∈ (0, σ_0), borderline spreading happens when σ = σ_0, and spreading happens when σ > σ_0.
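For context, the homogeneous-environment free boundary problem of Du and Lin [12] can be sketched as follows (a standard formulation with diffusion rate d, growth parameters a, b, free boundary coefficient μ, and initial data h_0, u_0; in the shifting-environment variant studied here, the constant growth rate a is replaced by a coefficient depending on x − ct):

```latex
\begin{cases}
u_t - d\,u_{xx} = u\,(a - b u), & t > 0,\ 0 < x < h(t),\\
u_x(t,0) = 0,\quad u(t,h(t)) = 0, & t > 0,\\
h'(t) = -\mu\, u_x(t,h(t)), & t > 0,\\
h(0) = h_0,\quad u(0,x) = u_0(x), & 0 \le x \le h_0,
\end{cases}
```

Here h(t) is the moving invasion front, and the Stefan-type condition on h'(t) couples the front speed to the population gradient at the boundary.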
Foreground/background (fg/bg) classification is an important first step for several video analysis tasks such as people counting, activity recognition, and anomaly detection. As with several other computer vision problems, the advent of deep convolutional neural network (CNN) methods has led to major improvements in this field. However, despite their success, CNN-based methods have difficulty coping with multi-scene videos, where the scene changes multiple times along the time sequence. In this paper, we propose a deep-features fusion network based foreground segmentation method (DFFnetSeg) that is more robust to scene changes and unseen scenes than competitive state-of-the-art methods. At the heart of DFFnetSeg lies a fusion network that takes as input deep features extracted from a current frame, a previous frame, and a reference frame, and produces as output a segmentation mask separating background from foreground objects. We show the advantages of using a fusion network and the three-frame input group in dealing with the unseen-scene and bootstrap challenges. In addition, we show that a simple reference-frame updating strategy makes DFFnetSeg robust to sudden scene changes within video sequences, and we introduce a motion-map-based post-processing method that further reduces false positives. Experimental results on test datasets generated from CDnet2014 and LASIESTA demonstrate the advantages of the DFFnetSeg method. INDEX TERMS Convolutional neural network, foreground segmentation, multi-scene videos.
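The three-frame data flow can be sketched as follows. This is a minimal NumPy illustration, not the paper's network: the feature extractor is stubbed out as a simple intensity map, and a fixed difference-to-reference threshold stands in for the learned fusion network, purely to show how the current, previous, and reference frames are grouped into one input:

```python
import numpy as np

def extract_features(frame):
    """Stand-in for a CNN feature extractor: a normalized per-pixel intensity map."""
    return frame.astype(np.float32) / 255.0

def fuse_and_segment(current, previous, reference, thresh=0.2):
    """Group features from three frames and produce a fg/bg mask.

    The real DFFnetSeg learns the fusion; here an illustrative rule marks
    foreground where the current frame departs from the reference frame.
    """
    stacked = np.stack(
        [extract_features(current),
         extract_features(previous),
         extract_features(reference)],
        axis=0,
    )  # shape (3, H, W): the three-frame input group
    mask = (np.abs(stacked[0] - stacked[2]) > thresh).astype(np.uint8)
    return mask

# Toy frames: static background, with a bright object entering the current frame
reference = np.zeros((8, 8), dtype=np.uint8)
previous = reference.copy()
current = reference.copy()
current[2:4, 2:4] = 255  # the "foreground object"

mask = fuse_and_segment(current, previous, reference)
```

The reference-frame updating strategy described above would correspond to replacing `reference` with a recent frame whenever a sudden scene change is detected, so the difference rule (or learned fusion) compares against the new scene.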