In augmented reality (AR)-based assembly and disassembly guiding systems, the guiding effect depends mainly on the display characteristics of the virtual guiding scene. If the virtual guiding scene is displayed improperly, such as outside the operator's view, overlaying the region of interest on the screen, or rendered from an inappropriate viewpoint of the three-dimensional (3D) guiding scene, it may disturb normal operation instead of guiding it. To display the 3D virtual guiding scene on a suitable screen region from a comfortable viewpoint, an adaptive guiding scene display method was proposed. For adaptive selection of the display region, the screen was divided into a grid of cells. The screen projection coordinates of the vertices of the CAD features' bounding boxes were computed, and the occupancy index of each grid cell was calculated. The maximum connected region of empty cells was chosen as the non-interest region of the screen for displaying the 3D guiding scene. In the optimal viewpoint selection algorithm, a viewpoint information measurement operator was put forward, which took into account the projection area, visible proportion, information entropy, and depth differences of all visible vertices of the CAD features' bounding boxes in the 3D virtual guiding scene. Finally, based on the principle of perspective projection, the 3D virtual guiding scene was positioned in the virtual world and displayed from the selected viewpoint on the selected screen area. All algorithms proposed in this paper use the automatically extracted bounding-box model of a part's CAD features as input data, so they can be applied to online planning of 3D assembly guiding scenes.
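As an illustration of the display-region selection step described above, the sketch below projects bounding-box vertices onto the screen, marks occupied grid cells, and returns the largest connected block of empty cells. The abstract does not give implementation details, so the grid resolution, the 4-connectivity, and all function and parameter names here are our assumptions, not the authors' code.

```python
import numpy as np

def project_vertices(vertices_world, mvp, screen_w, screen_h):
    """Project 3D bounding-box vertices to 2D screen coordinates
    using a combined model-view-projection matrix (perspective projection)."""
    ones = np.ones((vertices_world.shape[0], 1))
    clip = (mvp @ np.hstack([vertices_world, ones]).T).T   # homogeneous clip space
    ndc = clip[:, :3] / clip[:, 3:4]                        # perspective divide
    x = (ndc[:, 0] * 0.5 + 0.5) * screen_w                  # viewport transform
    y = (ndc[:, 1] * 0.5 + 0.5) * screen_h
    return np.stack([x, y], axis=1)

def largest_empty_region(screen_pts, screen_w, screen_h, nx=16, ny=9):
    """Mark occupied grid cells and return the largest 4-connected
    region of empty cells as the candidate non-interest display area."""
    occupied = np.zeros((ny, nx), dtype=bool)
    cw, ch = screen_w / nx, screen_h / ny
    for x, y in screen_pts:
        if 0 <= x < screen_w and 0 <= y < screen_h:
            occupied[int(y // ch), int(x // cw)] = True

    best, visited = [], np.zeros_like(occupied)
    for i in range(ny):
        for j in range(nx):
            if occupied[i, j] or visited[i, j]:
                continue
            # flood-fill one connected region of empty cells
            stack, region = [(i, j)], []
            visited[i, j] = True
            while stack:
                r, c = stack.pop()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < ny and 0 <= nc < nx \
                            and not occupied[nr, nc] and not visited[nr, nc]:
                        visited[nr, nc] = True
                        stack.append((nr, nc))
            if len(region) > len(best):
                best = region
    return best  # list of (row, col) empty cells forming the display region
```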
Caricature is an art form that expresses exaggerated views of people and things through drawing. Face caricature is popular and widely used in various applications. To generate it, the unique and distinctive features of a person's face must be properly extracted. A person's facial features depend not only on his or her natural appearance but also on the associated expression style. Therefore, we would like to extract the neutral facial features and the personal expression style for different applications. In this paper, we represent the 3D neutral face models in the BU-3DFE database by sparse signal decomposition in the training phase. With this decomposition, the sparse training data can be used for robust linear subspace modeling of public faces. For an input 3D face model, we fit the model and decompose its geometry into a neutral face and an expression deformation. The neutral geometry is further decomposed into a public face and individualized facial features. We exaggerate the facial features and the expression by estimating their probabilities on the corresponding manifolds. The public face, the exaggerated facial features, and the exaggerated expression are combined to synthesize a 3D caricature for the input 3D face model. The proposed algorithm is automatic and can effectively extract the individualized facial features from an input 3D face model to create a 3D face caricature.
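The abstract describes exaggeration as estimating probabilities on a manifold, and those details are not given here. As a rough sketch of the simpler underlying idea, amplifying the deviation of an individual's neutral geometry from the public (mean) face within a learned linear subspace, one might write something like the following. The scalar factor alpha, the orthonormal-basis assumption, and all names are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def exaggerate_features(neutral_verts, public_verts, basis, alpha=1.5):
    """Amplify the individualized component of a neutral face.

    neutral_verts, public_verts: (V, 3) vertex arrays of the fitted neutral
    face and the public (mean) face; basis: (3V, K) matrix with orthonormal
    columns spanning a learned subspace; alpha > 1 exaggerates, alpha = 1
    reproduces the input.
    """
    deviation = (neutral_verts - public_verts).reshape(-1)  # individualized features
    coeffs = basis.T @ deviation                            # project onto the subspace
    exaggerated = basis @ (alpha * coeffs)                  # amplify subspace coefficients
    return public_verts + exaggerated.reshape(public_verts.shape)
```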
We propose a self-supervised learning framework for finding the dominant eigenfunction-eigenvalue pairs of linear and self-adjoint operators. We represent target eigenfunctions with coordinate-based neural networks and employ Fourier positional encodings to enable the approximation of high-frequency modes. We formulate a self-supervised training objective for spectral learning and propose a novel regularization mechanism to ensure that the network finds the exact eigenfunctions instead of a space spanned by the eigenfunctions. Furthermore, we investigate the effect of weight normalization as a mechanism to alleviate the risk of recovering linearly dependent modes, allowing us to accurately recover a large number of eigenpairs. The effectiveness of our methods is demonstrated across a collection of representative benchmarks including both local and non-local diffusion operators, as well as high-dimensional time-series data from a video sequence. Our results indicate that the present algorithm can outperform competing approaches in terms of both approximation accuracy and computational cost.
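To make the network ingredients concrete, the sketch below shows one possible coordinate-based network with random Fourier positional encodings whose k outputs approximate k candidate eigenfunctions. The layer widths, the Gaussian encoding scale, and the omitted training objective and regularization are assumptions on our part, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map coordinates x to [sin(2*pi*Bx), cos(2*pi*Bx)] so the downstream
    MLP can represent high-frequency eigenfunctions."""
    def __init__(self, in_dim, n_features=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, x):
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class EigenNet(nn.Module):
    """Coordinate-based network whose k outputs approximate the first k
    eigenfunctions of a linear, self-adjoint operator."""
    def __init__(self, in_dim, n_eig, n_features=64, width=128):
        super().__init__()
        self.encoding = FourierFeatures(in_dim, n_features)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_features, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_eig),
        )

    def forward(self, x):
        return self.mlp(self.encoding(x))

# Example: evaluate candidate eigenfunctions at random points in [0, 1]^2.
net = EigenNet(in_dim=2, n_eig=8)
values = net(torch.rand(1024, 2))   # shape (1024, 8)
```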