In this article, we introduce a method that applies ideas from electrostatics to parameterize the open space around an object. By treating the object as a virtually charged conductor, we define an object-centric coordinate system, which we call Electric Coordinates, that parameterizes the space surrounding a reference object in a way analogous to polar coordinates. We also introduce a measure that quantifies the extent to which a surface wraps an object; this measure can be computed as the electric flux through the wrapping surface due to the electric field of the charged conductor. The electrostatic parameters, comprising the Electric Coordinates and the flux, have several applications in computer graphics, including texturing, morphing, meshing, path planning relative to a target object, mesh parameterization, designing deformable objects, and computing coverage. Our method works for objects of arbitrary geometry and topology and is therefore applicable in a wide variety of scenarios.
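The flux-based coverage measure rests on Gauss's law: the total electric flux through any surface enclosing a charge equals q/ε0, independent of the surface's size or shape. The sketch below is an illustrative simplification, not the paper's implementation (which handles charged conductors of arbitrary geometry); it numerically integrates the flux of a single point charge through an enclosing sphere and shows that the result does not depend on the radius:

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity (F/m)

def flux_through_sphere(q, radius, n_theta=200, n_phi=400):
    """Midpoint-rule surface integral of E . n over a sphere of the given
    radius centred on a point charge q at the origin.

    E is radial with magnitude q / (4 pi eps0 r^2), and the sphere's
    outward normal is also radial, so the flux element reduces to
    q / (4 pi eps0) * sin(theta) dtheta dphi: the r^2 factors cancel,
    which is why the result is independent of the radius.
    """
    d_theta = np.pi / n_theta
    d_phi = 2.0 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * d_theta  # midpoints in [0, pi]
    # The integrand does not depend on phi, so the phi sum is a factor n_phi.
    return (q / (4.0 * np.pi * EPS0) * np.sin(theta)).sum() * d_theta * d_phi * n_phi
```

For an enclosing sphere the computed flux matches q/ε0 regardless of radius, mirroring the property that makes flux a well-defined wrapping measure.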
Abstract. We first consider network security services and then review threats, vulnerabilities, and failure modes. This review is based on standard texts and uses well-known concepts, categorizations, and methods, e.g. risk analysis using asset-based threat profiles and vulnerability profiles (attributes). From the review we construct a framework, which we then use to define an extensible ontology for network security attacks. We present a conceptualization of this ontology in figure 1.
Estimating the position of a point light source in a scene enhances the augmented-reality experience. The intensity image and depth information from an RGB-D camera allow the point light source position to be estimated without probe objects or other measuring devices. Our approach uses the Lambertian reflectance model: the RGB-D camera provides the image and the surface normals, and the remaining unknowns are the albedo and the light parameters (light intensity and direction). To determine the light parameters, we assume that segments of similar colour share the same albedo, which allows us to find the point light source that best explains the illumination in the scene. We evaluate the method on multiple scenes, each illuminated by a single light bulb; the average error in the angle between the true light position vector and our estimate is around 10 degrees. This accuracy allows realistic rendering of synthetic objects into the recorded scene, improving the augmented-reality experience.
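Under the Lambertian model with a constant-albedo segment, the fitting step reduces to a linear least-squares problem. The sketch below is an illustrative simplification, not the authors' implementation: it fits a distant directional light (the scaled vector b = albedo * l) to per-pixel unit normals and observed intensities, whereas the paper estimates a point light position, which additionally uses the depth map so the light vector varies per pixel:

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares fit of a Lambertian directional light from per-pixel
    unit normals and intensities within one constant-albedo segment.

    Model: I_i = rho * (n_i . l) for lit pixels, so stacking the normals
    into a matrix N gives the linear system N b = I with b = rho * l.
    Returns the unit light direction b/|b| and the scale |b| = rho*|l|.
    """
    N = np.asarray(normals, dtype=float)      # shape (m, 3), unit rows
    I = np.asarray(intensities, dtype=float)  # shape (m,)
    b, *_ = np.linalg.lstsq(N, I, rcond=None)
    scale = np.linalg.norm(b)
    return b / scale, scale
```

In practice, shadowed pixels (where n · l ≤ 0) would first be masked out, since the clamp max(0, n · l) makes the model nonlinear there.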
The first hybrid CPU‐GPU based method for estimating a point light source position in a scene recorded by an RGB‐D camera is presented. The image and depth information from the Kinect suffice to estimate a light position in a scene, which allows synthetic objects to be rendered into the scene realistically enough for augmented reality purposes. The method does not require a light probe or other physical device. To make it suitable for augmented reality, we developed a hybrid implementation that performs light estimation in under 1 second. This is sufficient for most augmented reality scenarios because both the position of the light source and the position of the Kinect are typically fixed. The method estimates the angle of the light source with an average error of 20°. By rendering synthetic objects into the recorded scene, we illustrate that this accuracy is good enough for the rendered objects to look realistic. Copyright © 2015 John Wiley & Sons, Ltd.
Abstract. We propose a novel approach to transferring reach and grasp movements that is agnostic and invariant to finger kinematics, hand configurations, and relative changes in object dimensions. We exploit a novel representation based on electrostatics to parametrise the salient aspects of the demonstrated grasp. By working in this alternative space, which focuses on the relational aspects of the grasp rather than absolute kinematics, we can use inference-based planning techniques to couple motion in abstract spaces with trajectories in the configuration space of the robot. We demonstrate that our method computes stable grasps that generalise over objects of different shapes and robots of dissimilar kinematics while retaining the qualitative grasp type, all without expensive collision detection or re-optimisation.
Capturing a close interaction between an actor and an object can be difficult because of occlusion and the need to recreate the scene geometry accurately. In this paper, we propose a technique that captures the object's motion and geometry alongside the actor's movements, and optionally the local environment, using a magnetic motion capture system and an RGB-D sensor. This not only provides richer information when placing a character in a scene but also enables us to digitally recreate the scene in motion without significant animator work after capture. Because magnetic sensors do not require a direct line of sight to a camera, they avoid the occlusion and marker confusion that are common in optical techniques for close interactions. The geometry reconstruction ensures that the proportions of the objects and surfaces the character interacts with are accurate and removes the need for an artist to model the object. We validate the results by comparison with an optical system and show a variety of motions, such as using a screwdriver or removing a cap to drink from a bottle, that can be captured with our technique.
This paper introduces a method for finding a dense correspondence between objects of varying topology or connectivity by using a proxy genus-zero mesh together with the technique of Blended Intrinsic Maps. Harmonic-space parameterisation is used to create a closed, genus-zero shape that approximates the geometry of the original object. This allows noisy or topologically different representations of objects to be mapped to one another, with seams in the mapping falling in generally hidden concave areas and tunnels. The paper presents example mappings between objects with and without holes, as well as objects that consist of a number of disconnected segments.