In neural networks, it is often desirable to work with various representations of the same space. For example, 3D rotations can be represented with quaternions or Euler angles. In this paper, we advance a definition of a continuous representation, which can be helpful for training deep neural networks. We relate this to topological concepts such as homeomorphism and embedding. We then investigate which representations of 2D, 3D, and n-dimensional rotations are continuous and which are discontinuous. We demonstrate that for 3D rotations, all representations are discontinuous in real Euclidean spaces of four or fewer dimensions. Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn. We show that 3D rotations have continuous representations in 5D and 6D, which are more suitable for learning. We also present continuous representations for the general case of the n-dimensional rotation group SO(n). While our main focus is on rotations, we also show that our constructions apply to other groups, such as the orthogonal group and similarity transforms. Finally, we present empirical results showing that our continuous rotation representations outperform discontinuous ones on several practical problems in graphics and vision, including a simple autoencoder sanity test, a rotation estimator for 3D point clouds, and an inverse kinematics solver for 3D human poses.
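The 6D representation referenced above maps two generic 3D vectors to a rotation matrix via Gram-Schmidt orthonormalization. A minimal NumPy sketch of that construction (the function name is ours):

```python
import numpy as np

def rotation_from_6d(x):
    """Map a 6D vector to a 3x3 rotation matrix via Gram-Schmidt.

    The 6D input is read as two 3D vectors; orthonormalizing them and
    taking a cross product yields a proper rotation matrix, giving a
    continuous representation of SO(3).
    """
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)            # first basis vector
    a2_proj = a2 - np.dot(b1, a2) * b1      # remove the component along b1
    b2 = a2_proj / np.linalg.norm(a2_proj)  # second basis vector
    b3 = np.cross(b1, b2)                   # third vector completes the frame
    return np.stack([b1, b2, b3], axis=1)   # columns are b1, b2, b3

# Any generic 6D vector maps to a valid rotation:
R = rotation_from_6d(np.array([1.0, 0.2, -0.3, 0.4, 1.0, 0.5]))
assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)
assert np.isclose(np.linalg.det(R), 1.0)
```

Because the map is defined and smooth wherever the two input vectors are linearly independent, a network can regress the six numbers directly without the discontinuities that quaternions or Euler angles introduce.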
Figure 1. A user can sketch and scribble colors to control deep image synthesis. On the left is an image generated from a hand-drawn sketch. On the right, several objects have been deleted from the sketch, a vase has been added, and the color of various scene elements has been constrained by sparse color strokes. (Panels: Sketch; Sketch + Color; Generated results.)
Abstract: Recently, there have been several promising methods to generate realistic imagery with deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate preferred colors for objects. Our network can then generate convincing images that satisfy both the color and sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.
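The conditioning described above, sketched boundaries plus sparse color strokes, amounts to stacking the user controls channel-wise into one generator input. A minimal NumPy illustration (the 4-channel layout is our assumption for the sketch, not the paper's exact architecture):

```python
import numpy as np

def conditioning_input(sketch, color_strokes):
    """Stack user controls into a single network input.

    `sketch` is an (H, W) edge map and `color_strokes` an (H, W, 3) image
    that is zero except where the user scribbled color. Concatenating them
    channel-wise gives a 4-channel tensor a generator can condition on.
    """
    return np.concatenate([sketch[..., None], color_strokes], axis=-1)

# A blank 64x64 sketch with no strokes yields a (64, 64, 4) input:
x = conditioning_input(np.zeros((64, 64)), np.zeros((64, 64, 3)))
```

Keeping the control signals as extra input channels is what lets a single feed-forward pass honor edits, so the user sees results in real time.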
Conference on 'Nutrition and age-related muscle loss, sarcopenia and cachexia'. The first reports of accurate skeletal muscle mass measurement in human subjects appeared at about the same time as the introduction of the sarcopenia concept in the late 1980s. Since then these methods, computed tomography and MRI, have been used to gain insights into older methods (i.e. anthropometry and urinary markers) and more recently developed and refined methods (ultrasound, bioimpedance analysis and dual-energy X-ray absorptiometry) of quantifying regional and total body skeletal muscle mass. The objective of this review is to describe the evolution of these methods and their continued development in the context of sarcopenia evaluation and treatment. Advances in these technologies are described with a focus on additional quantifiable measures that relate to muscle composition and 'quality'. The integration of these collective evaluations with strength and physical performance indices is highlighted, with linkages to the evaluation of sarcopenia and the spectrum of related disorders such as sarcopenic obesity, cachexia and frailty. Our findings show that currently available methods and those in development are capable of non-invasively extending measures from solely 'mass' to quality evaluations that promise to close the gaps now recognised between skeletal muscle mass and muscle function, morbidity and mortality. As the largest tissue compartment in most adults, skeletal muscle mass, along with aspects of muscle composition, can now be evaluated by a wide array of technologies that provide important new research and clinical opportunities aligned with the growing interest in the spectrum of conditions associated with sarcopenia.
The question of what makes a good view of a 3D object has been addressed by numerous researchers in perception, computer vision, and computer graphics. This has led to a large variety of measures for the goodness of views, as well as some special-case viewpoint selection algorithms. In this article, we leverage the results of a large user study to optimize the parameters of a general model for viewpoint goodness, such that the fitted model can predict people's preferred views for a broad range of objects. Our model is represented as a combination of attributes known to be important for view selection, such as projected model area and silhouette length. Moreover, this framework can easily incorporate new attributes in the future, based on the data from our existing study. We demonstrate our combined goodness measure in a number of applications, such as automatically selecting a good set of representative views, optimizing camera orbits to pass through good views and avoid bad views, and trackball controls that gently guide the viewer towards better views.
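The fitted model described above scores a candidate view as a combination of view attributes. A toy Python sketch of such a scoring function (the attribute names and weights here are illustrative placeholders, not the study's fitted values):

```python
def view_goodness(attributes, weights):
    """Score a candidate viewpoint as a weighted combination of attributes.

    `attributes` maps attribute names (e.g. projected area, silhouette
    length) to values measured for one candidate view; `weights` holds
    coefficients fitted to user-study preference data.
    """
    return sum(weights[name] * value for name, value in attributes.items())

# Illustrative weights and two candidate views of the same object:
weights = {"projected_area": 0.6, "silhouette_length": 0.4}
candidates = [
    {"projected_area": 0.35, "silhouette_length": 0.50},  # score 0.41
    {"projected_area": 0.80, "silhouette_length": 0.65},  # score 0.74
]
best = max(candidates, key=lambda a: view_goodness(a, weights))
```

Because the score is a simple combination over named attributes, adding a new attribute later only requires a new entry in each dictionary and a refit of the weights, which is the extensibility the article highlights.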
Figure 1. With TextureGAN, one can generate novel instances of common items from hand-drawn sketches and simple texture patches. You can now be your own fashion guru! Top row: sketch with texture patch overlaid. Bottom row: results from TextureGAN.
Abstract: In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss, in addition to adversarial and content losses, to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database; the results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.
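The local texture loss mentioned above compares a patch of the generated image against the user-supplied texture patch. The full loss in the paper combines several local terms; the NumPy sketch below shows only a Gram-matrix style term of the kind commonly used for texture comparison, with patch feature maps assumed to come from a pretrained network:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map,
    normalized by its size so patches of different sizes are comparable."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def local_texture_loss(gen_patch_feats, ref_patch_feats):
    """Squared Frobenius distance between the Gram matrices of a
    generated-image patch and the reference texture patch."""
    d = gram_matrix(gen_patch_feats) - gram_matrix(ref_patch_feats)
    return float(np.sum(d * d))
```

Gram matrices discard spatial layout and keep channel co-activation statistics, which is why this term encourages the generated patch to match the reference texture's appearance rather than copy it pixel for pixel.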
Redirected walking techniques can enhance the immersion and visual-vestibular comfort of virtual reality (VR) navigation, but are often limited by the size, shape, and content of the physical environments. We propose a redirected walking technique that can apply to small physical environments with static or dynamic obstacles. Via a head- and eye-tracking VR headset, our method detects saccadic suppression and redirects the users during the resulting temporary blindness. Our dynamic path planning runs in real-time on a GPU, and thus can avoid static and dynamic obstacles, including walls, furniture, and other VR users sharing the same physical space. To further enhance saccadic redirection, we propose subtle gaze direction methods tailored for VR perception. We demonstrate that saccades can significantly increase the rotation gains during redirection without introducing visual distortions or simulator sickness. This allows our method to apply to large open virtual spaces and small physical environments for room-scale VR. We evaluate our system via numerical simulations and real user studies.
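The core idea above, injecting larger camera rotations only while a saccade suppresses vision, can be sketched as a per-frame decision. All thresholds and gain caps below are illustrative placeholders, not the paper's tuned values:

```python
def injected_rotation(eye_speed_deg_s, remaining_offset_deg,
                      subtle_cap_deg=0.15, saccade_cap_deg=2.0,
                      saccade_threshold_deg_s=180.0):
    """Choose how much extra camera yaw (degrees) to inject this frame.

    During a detected saccade (eye angular speed above threshold) the user
    is effectively blind, so a larger rotation can be injected; otherwise
    only a small, imperceptible gain is applied. The step is clamped so the
    camera never overshoots the offset the path planner requested.
    """
    saccading = eye_speed_deg_s >= saccade_threshold_deg_s
    cap = saccade_cap_deg if saccading else subtle_cap_deg
    return max(-cap, min(cap, remaining_offset_deg))
```

A real system would run this every frame with eye-tracker velocity estimates, accumulating the injected yaw until the planner's desired redirection is reached; the subtle-gaze-direction methods in the paper would additionally try to trigger more saccades.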