We furthermore propose an efficient implementation that significantly reduces the GPU memory required during training. By employing our method in hierarchical network architectures, we outperform most state-of-the-art networks on established point cloud segmentation, classification, and normal estimation benchmarks. Moreover, in contrast to most existing approaches, we demonstrate the robustness of our method to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
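The Monte Carlo convolution underlying these layers can be illustrated with a minimal sketch: the convolution at a point is estimated as a density-normalized average over the kernel responses of its neighbors. This is an illustration of the general idea, not the paper's implementation; the function name, data layout, and the kernel and density callables are hypothetical placeholders.

```python
import math

def mc_convolution(points, features, center, kernel, radius, density):
    """Monte Carlo estimate of a continuous convolution at `center`:
    each neighbor within `radius` contributes kernel(normalized offset)
    times its feature, divided by the sample density to compensate for
    non-uniform sampling; the sum is averaged over the neighbor count."""
    total, count = 0.0, 0
    for p, f in zip(points, features):
        if math.dist(p, center) <= radius:
            # Offsets are normalized by the radius so the kernel sees [-1, 1].
            offset = tuple((pi - ci) / radius for pi, ci in zip(p, center))
            total += kernel(offset) * f / density(p)
            count += 1
    return total / count if count else 0.0
```

With a constant kernel and uniform density the estimate reduces to the mean feature of the neighborhood, which makes a convenient sanity check.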
In the last decade a new family of methods, namely Image-Based Rendering, has appeared. These techniques rely on precomputed images to totally or partially substitute for the geometric representation of the scene, making realistic renderings attainable even with modest resources. The main problem is the amount of data needed, due mainly to high redundancy and the high computational cost of capture. In this paper we present a new method to automatically determine camera placement positions that yield a minimal set of views for Image-Based Rendering. The input is a 3D polyhedral model including textures; the output is a set of viewpoints that covers every visible polygon and samples each at a sufficient rate. This avoids the excessive data redundancy present in several other approaches and reduces the cost of the capture process, as fewer reference views actually need to be computed. Promising viewpoints are located with the aid of an information-theoretic measure, dubbed viewpoint entropy, which quantifies the amount of information seen from a viewpoint. We then develop a greedy algorithm to minimize the number of images needed to represent the scene. In contrast to other approaches, our system preprocesses textures to avoid artifacts in partially occluded textured polygons, so no visible detail of these images is lost.
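The two ingredients named above, viewpoint entropy and greedy view selection, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the entropy takes the projected area of each visible polygon relative to the total projection area, and the selection loop is the standard greedy set-cover heuristic. Function names and data layouts are hypothetical.

```python
import math

def viewpoint_entropy(projected_areas, total_area):
    """Viewpoint entropy: H(v) = -sum_i (A_i / A_t) * log2(A_i / A_t),
    where A_i is the projected area of polygon i (zero if invisible)
    and A_t is the total area of the projection plane."""
    h = 0.0
    for a in projected_areas:
        if a > 0.0:
            p = a / total_area
            h -= p * math.log2(p)
    return h

def greedy_view_selection(views):
    """Greedy set cover: `views` maps a viewpoint id to the set of
    polygons it samples at an adequate rate. Repeatedly pick the
    viewpoint covering the most not-yet-covered polygons."""
    uncovered = set().union(*views.values()) if views else set()
    selected = []
    while uncovered:
        best = max(views, key=lambda v: len(views[v] & uncovered))
        gain = views[best] & uncovered
        if not gain:
            break  # remaining polygons are invisible from every candidate
        selected.append(best)
        uncovered -= gain
    return selected
```

Four polygons of equal projected area yield an entropy of 2 bits, the maximum for four visible polygons; the greedy loop terminates because each iteration either covers at least one new polygon or breaks out.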
The exploration of complex walkthrough models is often a difficult task due to densely occluded regions that pose a serious challenge to online navigation. In this paper we address the algorithmic generation of exploration paths for complex walkthrough models. We present a characterization of suitable properties for camera paths and discuss an efficient algorithm for computing them with little or no user intervention. Our approach is based on identifying the free-space structure of the scene, represented by a cell-and-portal graph, together with an entropy-based measure of the relevance of a viewpoint. This metric is key to deciding which cells have to be visited and to computing critical way-points inside each cell. Several results on different model categories are presented and discussed.
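A minimal sketch of the idea: walk the cell-and-portal graph and keep as way-points only cells whose viewpoint entropy is high enough. The graph layout, the entropy scores, and the threshold criterion below are hypothetical placeholders for the paper's actual path-computation algorithm.

```python
def exploration_order(cell_graph, cell_entropy, start, threshold):
    """Depth-first walk of a cell-and-portal graph (adjacency dict).
    A cell becomes a way-point only if its best viewpoint entropy
    meets `threshold` (a hypothetical relevance criterion)."""
    visited, order = set(), []

    def dfs(cell):
        visited.add(cell)
        if cell_entropy[cell] >= threshold:
            order.append(cell)          # cell is relevant: visit it
        for nxt in cell_graph.get(cell, []):
            if nxt not in visited:
                dfs(nxt)                # follow the portal to the next cell

    dfs(start)
    return order
```

In a real system the traversal order would additionally be optimized for path smoothness; here it simply follows graph adjacency.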
Fig. 1. By deriving analytic expressions, we can enhance molecular visualizations and realize interreflections in real time. The images (a)-(c) show three time steps of a molecular simulation investigating the interaction between a magenta-colored ligand and a receptor molecule, which receives exaggerated diffuse interreflections. Due to these interreflections it can be seen how the ligand enters the active site.

Abstract—Today, molecular simulations produce complex data sets capturing the interactions of molecules in detail. Due to the complexity of this time-varying data, advanced visualization techniques are required to support its visual analysis. Current molecular visualization techniques utilize ambient occlusion as a global illumination approximation to improve spatial comprehension. Besides these shadow-like effects, interreflections are also known to improve the spatial comprehension of complex geometric structures. Unfortunately, the inherent computational complexity of interreflections would forbid interactive exploration, which is mandatory in many scenarios dealing with static and time-varying data. In this paper, we introduce a novel analytic approach for capturing interreflections of molecular structures in real time. By exploiting knowledge of the underlying space-filling representations, we are able to reduce the required parameters and can thus apply symbolic regression to obtain an analytic expression for interreflections. We show how to obtain the data required for the symbolic regression analysis, and how to exploit our analytic solution to enhance interactive molecular visualizations.
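Symbolic regression searches a space of analytic expressions for one that fits sampled data. The sketch below mimics that search in miniature: two hand-picked expression templates are fitted to samples by grid search, and the template with the lowest residual wins. The ground-truth samples, the templates, and the parameter grid are fabricated purely for illustration and have no connection to the paper's measured interreflection data.

```python
import math

# Fabricated "measured" interreflection samples: radiance as a function
# of a normalized distance d, generated here from 1/(1+d)^2 for the demo.
samples = [(d / 10.0, 1.0 / (1.0 + d / 10.0) ** 2) for d in range(1, 101)]

# Candidate analytic forms: a tiny stand-in for symbolic regression's
# search over expression space. Each maps (a, b, d) -> prediction.
candidates = {
    "a*exp(-b*d)": lambda a, b, d: a * math.exp(-b * d),
    "a/(1+b*d)^2": lambda a, b, d: a / (1.0 + b * d) ** 2,
}

def fit(expr):
    """Grid-search parameters a, b in (0, 2] for one template;
    returns (squared error, (a, b)) of the best fit found."""
    best = (float("inf"), None)
    for ai in range(1, 21):
        for bi in range(1, 21):
            a, b = ai / 10.0, bi / 10.0
            err = sum((expr(a, b, d) - y) ** 2 for d, y in samples)
            best = min(best, (err, (a, b)))
    return best

results = {name: fit(expr) for name, expr in candidates.items()}
best_form = min(results, key=lambda n: results[n][0])
```

Real symbolic regression tools evolve the expression trees themselves rather than choosing among fixed templates, but the fit-and-compare loop captures the essence.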
We propose a new learning-based algorithm that predicts high-quality viewpoints directly on 3D models. The key to learning viewpoints is a novel approach to resolving label ambiguities, in the form of dynamic label generation, which adapts the network target during training and enables our network to learn viewpoints for various viewpoint quality measures. By learning solely from unstructured 3D point information, our approach is robust to mesh quality changes, and the viewpoint prediction is decoupled from the rendering process during evaluation.
Fig. 1. Successive steps during the visual analysis of the binding nature of Aspirin and the Phospholipase A2 protein. We compute and visualize all essential interaction energies, represented by 2D and 3D arrows. The orientation of the depicted arrows encodes the sign of the energy, i.e., attracting vs. repelling force. The width of the arrows as well as the color of the residues' silhouettes support energy quantification. During the visual analysis, energies are computed and depicted on the fly to support interactive hypothesis testing (left), and residues can be filtered based on energy and distance to obtain a more focused view (middle). Additionally, a 2D visualization helps to obtain total energy values in an uncluttered manner (right).

Abstract—Molecular simulations are used in many areas of biotechnology, such as drug design and enzyme engineering. Despite the development of automatic computational protocols, the analysis of molecular interactions remains a major aspect where human comprehension and intuition are key to accelerating, analyzing, and proposing modifications to the molecule of interest. Most visualization algorithms help users by providing an accurate depiction of the spatial arrangement of the atoms involved in inter-molecular contacts. Few tools, however, provide visual information on the forces governing molecular docking. Unfortunately, these tools are commonly restricted to close interactions between atoms, do not consider whole simulation paths or long-range distances, and, importantly, do not provide visual cues for a quick and intuitive comprehension of the energy functions (modeling intermolecular interactions) involved. In this paper, we propose visualizations designed to enable the characterization of interaction forces by taking into account several relevant variables, such as molecule-ligand distance and the energy function, which is essential to understand binding affinities.
We emphasize mapping molecular docking paths obtained from Molecular Dynamics or Monte Carlo simulations, and provide time-dependent visualizations for different energy components and particle resolutions: atoms, groups, or residues. The presented visualizations have the potential to support domain experts in a more efficient drug or enzyme design process.
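Intermolecular energy functions of the kind referenced above are typically sums of Lennard-Jones and Coulomb pair terms. Below is a minimal sketch of how a per-residue interaction energy could be aggregated from such terms; the atom tuple layout and the Lorentz-Berthelot mixing rules are conventional choices, not necessarily what the paper's pipeline uses. Negative totals correspond to attraction, matching the arrow-sign encoding described in the figure caption.

```python
import math

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy; minimum of -epsilon at r = 2^(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def coulomb_energy(r, q1, q2, k=332.0636):
    """Coulomb pair energy; k gives kcal/mol for e charges and Angstrom distances."""
    return k * q1 * q2 / r

def residue_energy(ligand_atoms, residue_atoms):
    """Sum of pairwise LJ + Coulomb terms between a ligand and one residue.
    Atoms are (x, y, z, charge, epsilon, sigma) tuples (hypothetical layout)."""
    total = 0.0
    for x1, y1, z1, q1, e1, s1 in ligand_atoms:
        for x2, y2, z2, q2, e2, s2 in residue_atoms:
            r = math.dist((x1, y1, z1), (x2, y2, z2))
            eps = math.sqrt(e1 * e2)   # Lorentz-Berthelot mixing
            sig = 0.5 * (s1 + s2)
            total += lj_energy(r, eps, sig) + coulomb_energy(r, q1, q2)
    return total
```

Summing these per-residue totals over a simulation trajectory yields exactly the kind of time-dependent, per-residue energy signal the visualizations above would encode as arrow width and color.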