The quadriceps tendon has the anatomic characteristics to yield a graft whose length and volume are both reproducible and predictable, and whose intra-articular volume is significantly greater than that of a patellar tendon graft of similar width.
Computational vision-based flame detection has drawn significant attention in the past decade as camera surveillance systems have become ubiquitous. Whereas many discriminating features, such as color, shape, and texture, have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea is to exploit the difference between the turbulent, fast motion of fire and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are designed specifically for the fire detection task: optimal mass transport models fire as a dynamic texture, while a data-driven optical flow scheme models saturated flames. Characteristic features related to the flow magnitudes and directions are then computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method based on fire simulations is proposed, providing a controlled environment in which to analyze parameter influences such as flame saturation, spatial resolution, frame rate, and random noise.
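As a rough illustration of the magnitude and direction statistics this abstract describes, the sketch below contrasts a rigid (uniform) flow field with a turbulent one using the circular variance of flow direction. This is a toy example, not the paper's feature set: `flow_features` and the synthetic flow fields are invented here for demonstration.

```python
import numpy as np

def flow_features(flow):
    """Summarize a dense flow field (H, W, 2) by magnitude and
    directional statistics. Turbulent fire-like motion shows a high
    directional spread; rigid motion is nearly uniform in direction."""
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)
    ang = np.arctan2(v, u)
    # Circular variance of flow direction: 0 when all vectors point
    # the same way, approaching 1 when directions cover the circle.
    r = np.hypot(np.mean(np.cos(ang)), np.mean(np.sin(ang)))
    return {"mean_mag": mag.mean(),
            "mag_var": mag.var(),
            "dir_circ_var": 1.0 - r}

# Rigid motion: every pixel translates identically.
rigid = np.zeros((32, 32, 2))
rigid[..., 0] = 1.0
# Turbulent motion: random, fire-like flicker.
rng = np.random.default_rng(0)
turbulent = rng.normal(size=(32, 32, 2))

print(flow_features(rigid)["dir_circ_var"])      # exactly 0
print(flow_features(turbulent)["dir_circ_var"])  # close to 1
```

A classifier can then threshold or learn on such per-region statistics to separate fire motion from the motion of rigid objects.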
Currently, much of the manual labor needed to generate as-built building information models (BIMs) of existing facilities is spent converting raw point cloud data sets (PCDs)
Detection of fire and smoke in video is of practical and theoretical interest. In this paper, we propose the use of optimal mass transport (OMT) optical flow as a low-dimensional descriptor of these complex processes. The detection process is posed as a supervised Bayesian classification problem over spatio-temporal neighborhoods of pixels; feature vectors are composed of OMT velocities and the R, G, B color channels. The classifier is implemented as a single-hidden-layer neural network. Sample results show the probability of pixels belonging to fire or smoke. In particular, the classifier successfully distinguishes smoke from a similarly colored white wall, as well as fire from a similarly colored background.
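The pipeline above can be sketched as follows: stack the two OMT velocity components with the three color channels into a 5-D feature vector per pixel and push it through a single-hidden-layer network whose sigmoid output is read as a fire/smoke probability. This is a minimal, untrained sketch under invented names (`pixel_features`, `mlp_forward`) with random stand-in data, not the paper's trained classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def pixel_features(flow, frame):
    """Stack OMT flow velocities (u, v) with R, G, B intensities
    into one 5-D feature vector per pixel."""
    return np.concatenate([flow.reshape(-1, 2),
                           frame.reshape(-1, 3)], axis=1)

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer network: tanh hidden units, sigmoid
    output interpreted as P(pixel belongs to fire/smoke)."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

flow = rng.normal(size=(8, 8, 2))    # stand-in OMT velocity field
frame = rng.uniform(size=(8, 8, 3))  # stand-in RGB frame
x = pixel_features(flow, frame)      # shape (64, 5)

W1 = rng.normal(size=(5, 10)); b1 = np.zeros(10)   # random (untrained)
W2 = rng.normal(size=(10,));   b2 = 0.0
p = mlp_forward(x, W1, b1, W2, b2)   # per-pixel probabilities in (0, 1)
assert p.shape == (64,)
```

In practice the weights would be fit on labeled fire/smoke pixels; only the feature layout and network shape follow the abstract.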
Segmentation of injured or unusual anatomic structures in medical imagery is a problem that has continued to elude fully automated solutions. In this paper, the goal of easy-to-use and consistent interactive segmentation is transformed into a control synthesis problem. A nominal level set PDE is assumed to be given; this open-loop system achieves correct segmentation under ideal conditions, but does not agree with a human expert's ideal boundary for real image data. Perturbing the state and dynamics of a level set PDE via the accumulated user input and an observer-like system leads to desirable closed-loop behavior. The input structure is designed such that a user can stabilize the boundary in some desired state without needing to understand any mathematical parameters. Effectiveness of the technique is illustrated with applications to the challenging segmentations of a patellar tendon in MR and a shattered femur in CT.
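The closed-loop idea in this abstract can be caricatured in a few lines: take a nominal level-set update and add a feedback term that pulls the evolving surface toward a state implied by accumulated user input. This is a toy sketch, not the paper's observer-based design; `levelset_step` and `user_phi` are illustrative names invented here.

```python
import numpy as np

def levelset_step(phi, speed, user_phi, dt=0.1, k=0.5):
    """One explicit step of a toy closed-loop level-set update.
    The nominal (open-loop) term speed * |grad phi| evolves the
    contour as usual; the feedback term k * (user_phi - phi)
    perturbs the state toward the user-implied surface."""
    gy, gx = np.gradient(phi)
    grad_mag = np.hypot(gx, gy)
    return phi + dt * (speed * grad_mag + k * (user_phi - phi))

# With the nominal motion switched off (speed = 0), the feedback
# alone stabilizes the boundary at the user-specified state.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
phi = np.hypot(x, y) - 0.8        # initial circle, radius 0.8
user_phi = np.hypot(x, y) - 0.4   # user pulls toward radius 0.4
for _ in range(200):
    phi = levelset_step(phi, 0.0, user_phi)
assert np.abs(phi - user_phi).max() < 1e-3
```

The point of the construction is the one the abstract makes: the user stabilizes the boundary by supplying corrections, never by tuning PDE parameters.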
Graphics processing units (GPUs) are a powerful tool for numerical computation. The GPU architecture and computational model are uniquely designed for high-resolution, high-speed grid-based calculations. This capability can be utilized to accelerate certain classes of compute-intensive radar signal processing algorithms. Characteristics of a problem well-suited for computation on a GPU include high levels of data parallelism, low control logic, uniform boundary conditions, and well-defined input and output. We describe the implementation of two-dimensional multigrid least-squares weighted phase unwrapping on a GPU and demonstrate a large speedup over C and MATLAB implementations. Details of the GPU computation are provided, and background information on the GPU architecture and its applicability to general-purpose computation is discussed.

The programmability of the most recent generation of GPUs creates the opportunity to develop very powerful, low-cost accelerators for key radar signal processing algorithms. In this paper, we describe an experiment in the application of GPUs to the two-dimensional phase unwrapping problem at the heart of interferometric synthetic aperture radar (IFSAR) processing. While phase unwrapping is relatively simple in one dimension, it becomes quite complex in multiple dimensions. One approach to the problem of recovering the original phase from a measurement of wrapped phase in the presence of noise or other distortions casts it into the mathematical framework of a solution to the discretized Poisson's equation [2]. The resulting least-squares solution for the weighted phase unwrapping problem involves an iterative solution technique requiring only scalar add, subtract, multiply, and divide operations at each step.
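The Poisson formulation can be sketched concretely: build the right-hand side from wrapped phase differences, then relax the discretized Poisson equation with a Jacobi iteration whose update uses only adds, subtracts, multiplies, and divides per grid point, which is exactly the structure that maps well onto a GPU. This is a minimal unweighted, single-grid sketch (the paper uses a weighted multigrid solver); `unwrap_ls` is an invented name.

```python
import numpy as np

def wrap(x):
    """Wrap phase into (-pi, pi]."""
    return (x + np.pi) % (2 * np.pi) - np.pi

def unwrap_ls(psi, iters=3000):
    """Unweighted least-squares phase unwrapping: build the Poisson
    right-hand side rho from wrapped differences of psi, then solve
    lap(phi) = rho by Jacobi iteration with mirror (Neumann)
    boundaries. Each sweep is purely elementwise arithmetic."""
    dx = np.pad(wrap(np.diff(psi, axis=1)), ((0, 0), (0, 1)))
    dy = np.pad(wrap(np.diff(psi, axis=0)), ((0, 1), (0, 0)))
    rho = (dx - np.pad(dx[:, :-1], ((0, 0), (1, 0)))
           + dy - np.pad(dy[:-1, :], ((1, 0), (0, 0))))
    phi = np.zeros_like(psi)
    for _ in range(iters):
        p = np.pad(phi, 1, mode="edge")
        phi = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] - rho) / 4.0
    return phi

# A smooth ramp whose range exceeds 2*pi, then wrapped.
y, x = np.mgrid[0:16, 0:16]
true_phase = 0.4 * x + 0.2 * y
est = unwrap_ls(wrap(true_phase))
# The least-squares solution is defined up to an additive constant.
err = (est - est.mean()) - (true_phase - true_phase.mean())
assert np.abs(err).max() < 1e-3
```

Plain Jacobi converges slowly on large grids, which is precisely why the paper layers a multigrid hierarchy on top of this kernel.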
This paper studies the problem of achieving consistent performance in visual servoing. Given the nonlinearities introduced by the camera projection equations in monocular visual servoing systems, many control algorithms experience non-uniform performance bounds. These variable performance bounds arise from depth dependence in the error rates. To guarantee depth-invariant performance bounds, the depth nonlinearity must be cancelled; however, estimating distance along the optical axis is problematic when faced with an object of unknown geometry. By tracking a planar visual feature on a given target and measuring the area of the planar feature, a distance-invariant, input-to-state stable visual servoing controller is derived. Two approaches are given for achieving the visual tracking, both of which avoid the need to maintain long-term tracks of individual feature points. Realistic image uncertainty is captured in experimental tests that control the camera motion in a 3D renderer using the observed image data for feedback.
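The geometric fact the controller exploits can be stated in a few lines: under perspective projection, the image area of a fronto-parallel planar patch scales as 1/Z^2 with depth Z, so a measured area yields depth without point-wise feature tracks. The sketch below illustrates only this scaling law under idealized assumptions (known patch area, fronto-parallel patch, unit focal length); the function names are invented here.

```python
import numpy as np

def apparent_area(true_area, Z, f=1.0):
    """Perspective projection: a fronto-parallel planar patch of
    physical area A at depth Z images to area A * (f / Z)**2."""
    return true_area * (f / Z) ** 2

def depth_from_area(meas_area, true_area, f=1.0):
    """Invert the area law to recover depth from one scalar
    measurement, no long-term point tracks required."""
    return f * np.sqrt(true_area / meas_area)

A = 0.05  # m^2, assumed known patch area
for Z in (0.5, 1.0, 2.0):
    a = apparent_area(A, Z)
    assert abs(depth_from_area(a, A) - Z) < 1e-12
```

Dividing the image-space error rates by this recovered depth is what removes the depth dependence from the closed-loop bounds.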
Image segmentation is a fundamental problem in computational vision and medical imaging. Designing a generic, automated method that works for various objects and imaging modalities is a formidable task. Instead of proposing a new specific segmentation algorithm, we present a general design principle for integrating user interactions from the perspective of feedback control theory. Impulsive control and Lyapunov stability analysis are employed to design and analyze an interactive segmentation system. Stabilization conditions are then derived to guide algorithm design. Finally, the effectiveness and robustness of the proposed method are demonstrated.

Index Terms: Interactive image segmentation, dynamical system, feedback control, impulsive control, evolutionary process