Abstract: 3D shape editing is widely used in a range of applications such as movie production, computer games, and computer-aided design. It is also a popular research topic in computer graphics and computer vision. In past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by the optimal transformation and weights for an energy formulation. With the increasing availability of 3D shapes on the Internet…
“…Editing a 3D model means deforming the shape of the model under some controls given by the user. There has been much work on the editing of explicit geometry representations [9,17], for which we refer readers to a recent survey [80]. Traditional mesh deformation methods are based on Laplacian coordinates [35,57,58], the Poisson equation [79], and dual Laplacian coordinates [2].…”
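The Laplacian coordinates mentioned above encode each vertex relative to its one-ring neighborhood, so local surface detail can be preserved while the mesh is deformed. A minimal sketch of computing uniform-weight Laplacian (differential) coordinates follows; the toy mesh and the function name are illustrative, not taken from the cited works.

```python
import numpy as np

# Hypothetical toy mesh: the 4 vertices of a tetrahedron and its adjacency.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

def laplacian_coordinates(verts, neighbors):
    """Uniform-weight Laplacian (differential) coordinates:
    delta_i = v_i - mean of v_i's one-ring neighbor positions."""
    delta = np.zeros_like(verts)
    for i, nbrs in neighbors.items():
        delta[i] = verts[i] - verts[nbrs].mean(axis=0)
    return delta

delta = laplacian_coordinates(verts, neighbors)
```

In Laplacian-based editing, these differential coordinates are held (approximately) fixed while solving for new vertex positions under the user's positional constraints; practical systems use cotangent rather than uniform weights.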
Implicit neural rendering, especially the Neural Radiance Field (NeRF), has shown great potential in novel view synthesis of a scene. However, current NeRF-based methods do not enable users to perform user-controlled shape deformation in the scene. While existing works have proposed some approaches to modify the radiance field according to the user's constraints, the modification is limited to color editing or object translation and rotation. In this paper, we propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene, and synthesizes novel view images of the edited scene without re-training the network. Specifically, we establish a correspondence between the extracted explicit mesh representation and the implicit neural representation of the target scene. Users can first utilize well-developed mesh-based deformation methods to deform the mesh representation of the scene. Our method then utilizes the user edits from the mesh representation to bend the camera rays by introducing a tetrahedral mesh as a proxy, obtaining the rendering results of the edited scene. Extensive experiments demonstrate that our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
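The ray-bending step described above relies on a standard ingredient: barycentric coordinates within a tetrahedron. A sample point on a camera ray in the edited (deformed) space can be located inside a deformed tetrahedron and transported to the corresponding rest-pose tetrahedron, where the trained radiance field is queried. The sketch below shows only this barycentric transport; the tetrahedra and point are illustrative, and the full pipeline (point location, per-sample lookup along each ray) is omitted.

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of 3D point p w.r.t. a tetrahedron
    given as a (4, 3) array of vertex positions."""
    # Solve T @ b = p - v3 for the first three coordinates.
    T = np.column_stack([tet[0] - tet[3], tet[1] - tet[3], tet[2] - tet[3]])
    b = np.linalg.solve(T, p - tet[3])
    return np.append(b, 1.0 - b.sum())

# Hypothetical deformed and rest-pose tetrahedra (names are illustrative).
tet_deformed = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])
tet_rest     = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

# A ray sample point in the deformed (edited) space...
p_deformed = np.array([0.5, 0.5, 0.5])
w = barycentric(p_deformed, tet_deformed)
# ...is mapped to the rest-pose space, where the trained NeRF would be queried.
p_rest = w @ tet_rest
```

Because the same barycentric weights are reused with the rest-pose vertices, the mapping is piecewise linear over the tetrahedral proxy, which is what makes the deformation cheap to evaluate per ray sample.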
“…Recent works have developed deep learning frameworks for 3D voxel grids [Choy et al 2016; Maturana and Scherer 2015], multi-view 2D renderings of 3D data [Kalogerakis et al 2017; Su et al 2018], 3D point clouds [Achlioptas et al 2018; Fan et al 2017; Qi et al 2017a,b], 3D polygonal meshes [Groueix et al 2018], and 3D implicit functions [Chen and Zhang 2019; Mescheder et al 2019]. For more detailed discussion and comparison, we refer the readers to these surveys [Ahmed et al 2018; Ioannidou et al 2017; Jin et al 2020; Xiao et al 2020; Yuan et al 2021].…”
3D shape generation is a fundamental operation in computer graphics. While significant progress has been made, especially with recent deep generative models, it remains challenging to synthesize high-quality shapes with rich geometric details and complex structures in a controllable manner. To tackle this, we introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes, where two key aspects of shapes, geometry and structure, are encoded in a synergistic manner to ensure plausibility of the generated shapes, while also being disentangled as much as possible. This supports a range of novel shape generation applications with disentangled control, such as interpolation of structure (geometry) while keeping geometry (structure) unchanged. To achieve this, we simultaneously learn structure and geometry through variational autoencoders (VAEs) in a hierarchical manner for both, with bijective mappings at each level. In this manner, we effectively encode geometry and structure in separate latent spaces, while ensuring their compatibility: the structure is used to guide the geometry and vice versa. At the leaf level, the part geometry is represented using a conditional part VAE, to encode high-quality geometric details, guided by the structure context as the condition. Our method not only supports controllable generation applications, but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
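The VAEs mentioned in this abstract all rely on the same core sampling mechanism, the reparameterization trick, which keeps the latent sampling step differentiable with respect to the encoder outputs. A minimal, framework-free sketch is shown below; the separate "structure" and "geometry" latents and their dimensions are illustrative assumptions, not details from DSG-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: sample z = mu + sigma * eps with
    eps ~ N(0, I), so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Illustrative encoder outputs for two separate latent spaces, echoing the
# paper's split into a structure latent and a geometry latent (sizes assumed).
mu_struct, log_var_struct = np.zeros(8), np.zeros(8)
mu_geom, log_var_geom = np.zeros(16), np.zeros(16)

z_struct = reparameterize(mu_struct, log_var_struct, rng)
z_geom = reparameterize(mu_geom, log_var_geom, rng)
```

Keeping the two latents in separate spaces is what enables the disentangled control described above, e.g. interpolating `z_struct` between two shapes while holding `z_geom` fixed.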
Section: Related Work
“…The mesh representation, as an explicit representation, is commonly used in shape modeling and rendering. There is a large body of research on mesh deformation and editing [80]. However, it is difficult to obtain an accurate explicit representation of a real-world scene.…”