Input image · Frame field · Final result. Fig. 1. Given a possibly noisy grayscale bitmap image, we compute a frame field aligned with the directions on the image, superimposing multiple directions around sharp corners as well as X- and T-junctions. We then use this frame field to extract the drawing topology and create the final vectorization with the computed topology. Frame field computation (shown for a subset of pixels in the upper zoom and the full field in the lower one) is the key component of the system. The frame field disambiguates X- and T-junctions even in noisy areas, allowing tracing to be straightforward and robust. Input images are from www.easy-drawings-and-sketches.com, ©Ivan Huska.

Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.
a) input line drawing and mask · b) stroke-aligned parametrization · c) output curve network. Figure 1: Starting from an input line drawing (left), we locally parametrize the sketch as a grid aligned with the strokes (middle). Neighboring parallel strokes are automatically snapped to the same isoline of the parametrization, while junctions are snapped to grid nodes. This parametrization facilitates the extraction of a clean network of Bézier curves (right). Using a simple mask, the user can locally specify the desired amount of simplification in the output (purple scribbles: less simplification; orange scribbles: more simplification). See supplemental materials for a result without the mask.
Abstract. We propose a novel, design-driven approach to quadrangulation of closed 3D curves created by sketch-based or other curve modeling systems. Unlike the multitude of approaches for quad-remeshing of existing surfaces, we rely solely on the input curves to both conceive and construct the quad mesh of an artist-imagined surface bounded by them. We observe that viewers complete the intended shape by envisioning a dense network of smooth, gradually changing flow-lines that interpolates the input curves. Components of the network bridge pairs of input curve segments with similar orientation and shape. Our algorithm mimics this behavior. It first segments the input closed curves into pairs of matching segments, defining dominant flow-line sequences across the surface. It then interpolates the input curves by a network of quadrilateral cycles whose iso-lines define the desired flow-line network. We proceed to interpolate these networks with all-quad meshes that convey designer intent. We evaluate our results by showing convincing quadrangulations of complex and diverse curve networks with concave, non-planar cycles, and validate our approach by comparing our results to artist-generated interpolating meshes.
This paper presents a new preconditioning technique for large‐scale geometric optimization problems, inspired by applications in mesh parameterization. Our positive (semi‐)definite preconditioner acts on the gradients of optimization problems whose variables are positions of the vertices of a triangle mesh in ℝ² or of a tetrahedral mesh in ℝ³, converting localized distortion gradients into the velocity of a globally near‐rigid motion via a linear solve. We pose our preconditioning tool in terms of the Killing energy of a deformation field and provide new efficient formulas for constructing Killing operators on triangle and tetrahedral meshes. We demonstrate that our method is competitive with state‐of‐the‐art algorithms for locally injective parameterization using a variety of optimization objectives and show applications to two‐ and three‐dimensional mesh deformation.
Line drawing vectorization is a daily task in graphic design, computer animation, and engineering, necessary to convert raster images to a set of curves for editing and geometry processing. Despite recent progress in the area, automatic vectorization tools often produce spurious branches or incorrect connectivity around curve junctions, or smooth out sharp corners. These issues discourage the use of such tools, both on aesthetic grounds and for the feasibility of downstream applications (e.g., automatic coloring or inbetweening). We address these problems by introducing a novel line drawing vectorization algorithm that splits the task into three components: (1) finding keypoints, i.e., curve endpoints, junctions, and sharp corners; (2) extracting drawing topology, i.e., finding connections between keypoints; and (3) computing the geometry of those connections. We compute the optimal geometry of the connecting curves via a novel geometric flow --- PolyVector Flow --- that aligns the curves to the drawing, disambiguating directions around Y-, X-, and T-junctions. We show that our system robustly infers both the geometry and topology of detailed complex drawings. We validate our system both quantitatively and qualitatively, demonstrating that our method visually outperforms previous work.
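The three-stage decomposition described in this abstract can be illustrated with a toy Python skeleton. All names here (`find_keypoints`, `extract_topology`, `fit_geometry`) are hypothetical, chosen for illustration only; stage 3 in particular is a trivial stand-in for the paper's PolyVector Flow, not an implementation of it.

```python
# Hypothetical sketch of a three-stage vectorization pipeline:
# (1) keypoints, (2) topology, (3) geometry. Not the paper's actual API.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Keypoint:
    x: float
    y: float
    kind: str  # "endpoint" or "junction"

def find_keypoints(strokes):
    """Stage 1 (toy): collect stroke ends; a point shared by three or
    more stroke ends is reclassified as a junction."""
    ends = [p for s in strokes for p in (s[0], s[-1])]
    counts = Counter(ends)
    return {p: Keypoint(p[0], p[1], "junction" if c >= 3 else "endpoint")
            for p, c in counts.items()}

def extract_topology(strokes):
    """Stage 2 (toy): one connection per stroke, between its two ends."""
    return [(s[0], s[-1]) for s in strokes]

def fit_geometry(strokes):
    """Stage 3 (toy): keep each stroke's polyline as the curve geometry.
    The paper instead evolves curves via PolyVector Flow."""
    return {(s[0], s[-1]): list(s) for s in strokes}

# Three strokes meeting at the origin form a Y-junction.
strokes = [
    [(0.0, 0.0), (1.0, 1.0)],
    [(0.0, 0.0), (-1.0, 1.0)],
    [(0.0, 0.0), (0.0, -1.0)],
]
kps = find_keypoints(strokes)
edges = extract_topology(strokes)
curves = fit_geometry(strokes)
```

On this toy input, the shared origin is detected as a junction and the three free ends as endpoints, mirroring the keypoint/topology split the abstract describes.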
We introduce a novel technique for the construction of a 3D character proxy, or canvas , directly from a 2D cartoon drawing and a user-provided correspondingly posed 3D skeleton. Our choice of input is motivated by the observation that traditional cartoon characters are well approximated by a union of generalized surface of revolution body parts, anchored by a skeletal structure. While typical 2D character contour drawings allow ambiguities in 3D interpretation, our use of a 3D skeleton eliminates such ambiguities and enables the construction of believable character canvases from complex drawings. Our canvases conform to the 2D contours of the input drawings, and are consistent with the perceptual principles of Gestalt continuity, simplicity, and contour persistence. We first segment the input 2D contours into individual body-part outlines corresponding to 3D skeletal bones using the Gestalt continuation principle to correctly resolve inter-part occlusions in the drawings. We then use this segmentation to compute the canvas geometry, generating 3D generalized surfaces of revolution around the skeletal bones that conform to the original outlines and balance simplicity against contour persistence. The combined method generates believable canvases for characters drawn in complex poses with numerous inter-part occlusions, variable contour depth, and significant foreshortening. Our canvases serve as 3D geometric proxies for cartoon characters, enabling unconstrained 3D viewing, articulation, and non-photorealistic rendering. We validate our algorithm via a range of user studies and comparisons to ground-truth 3D models and artist-drawn results. We further demonstrate a compelling gallery of 3D character canvases created from a diverse set of cartoon drawings with matching 3D skeletons.
Figure 1: Micrography images created using our system. Closeups of parts of the images shown in the middle. Left: excerpt from Alice in Wonderland, target size 110 × 110 cm. Right: Song of Songs, target size 42 × 60 cm. Please zoom into the images using the digital version to read the fine text. See supplementary material for large images.