Constructing a 3D papercraft model from its unfolding has long been enjoyable for both children and adults, since it allows virtual 3D models to be reproduced in the real world. However, facilitating the papercraft construction process remains a challenging problem, especially when the input model has a complex shape in the sense that its surface curvature varies widely. This paper presents a new heuristic approach to unfolding 3D triangular meshes without any shape distortion, so that 3D papercraft models can be constructed through simple atomic operations that glue boundary edges around the 2D unfoldings. Our approach is inspired by the concept of topological surgery, where the arrangement of boundary edges of the unfolded closed surface can be encoded in a symbolic representation. To fully simplify the papercraft construction process, we developed a genetic algorithm that unfolds a 3D mesh into a single connected patch in general, while optimizing the usage of the paper sheet and the balance of the patch's shape. Several examples, together with user studies, demonstrate that the proposed approach works well for a broad range of 3D triangular meshes.
A potential-field approach, coupled with force-directed methods, is proposed in this paper for drawing graphs with nonuniform nodes in 2-D and 3-D. In our framework, nonuniform nodes are uniformly or nonuniformly charged, while edges are modelled by springs. Using techniques developed in the field of potential-based path planning, we derive analytically tractable procedures for computing the repulsive force and torque on a node in the potential field induced by the remaining nodes. Our experimental results suggest that this new approach is promising, as drawings of good quality for a variety of graphs in 2-D and 3-D can be produced efficiently.
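The charged-nodes-plus-springs model described above can be illustrated with a minimal sketch. This is not the paper's potential-field formulation (which handles nonuniform node shapes and torque analytically); it is a generic 2-D spring embedder with point-charge repulsion and Hookean edge springs, and all parameter values are illustrative assumptions:

```python
import math
import random

def force_directed_layout(nodes, edges, iters=200, k_rep=0.5,
                          k_spring=0.05, rest_len=1.0, step=0.05):
    """Generic 2-D spring embedder: nodes repel like point charges,
    edges pull like springs toward a rest length.

    `nodes` is a list of node ids; `edges` a list of (u, v) pairs.
    Returns a dict mapping node -> (x, y).
    """
    random.seed(0)
    pos = {v: (random.random(), random.random()) for v in nodes}
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in nodes}
        # Pairwise repulsion: inverse-square law, like equal point charges.
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k_rep / (d * d)
                force[u][0] += f * dx / d; force[u][1] += f * dy / d
                force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        # Spring attraction along edges (Hooke's law toward rest_len).
        for u, v in edges:
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k_spring * (d - rest_len)
            force[u][0] += f * dx / d; force[u][1] += f * dy / d
            force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        # Small explicit-Euler step in the force direction.
        pos = {v: (pos[v][0] + step * force[v][0],
                   pos[v][1] + step * force[v][1]) for v in nodes}
    return pos

layout = force_directed_layout(["a", "b", "c", "d"],
                               [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
```

The paper's contribution lies in replacing the naive pairwise point-charge sum with closed-form potential-field integrals over nonuniform node shapes; the iteration structure, however, is the same.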
Extractive speech summarization, which aims to automatically select an indicative set of sentences from a spoken document so as to concisely represent its most important aspects, has become an active area of research and experimentation. An emerging stream of work employs the language modeling (LM) framework along with the Kullback-Leibler (KL) divergence measure for extractive speech summarization, which can perform important sentence selection in an unsupervised manner and has shown preliminary success. This paper presents a continuation of this general line of research, and its main contribution is twofold. First, by virtue of pseudo-relevance feedback, we explore several effective sentence modeling formulations to enhance the sentence models involved in the LM-based summarization framework. Second, the utilities of our summarization methods and several widely used methods are analyzed and compared extensively, which demonstrates the effectiveness of our methods.
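The KL-divergence selection criterion underlying this line of work can be sketched in a few lines: estimate a unigram LM for the whole document and for each candidate sentence, then rank sentences by how little their LM diverges from the document LM. This is a bare-bones illustration, not the paper's enhanced formulation (which refines the sentence models via pseudo-relevance feedback), and the smoothing via a tiny `eps` is an assumption standing in for proper LM smoothing:

```python
import math
from collections import Counter

def unigram_lm(tokens):
    """Maximum-likelihood unigram language model over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_divergence(p, q, vocab, eps=1e-12):
    """KL(P || Q) over a shared vocabulary; eps avoids log-of-zero and
    crudely stands in for smoothing of unseen words."""
    return sum(p.get(w, 0.0) * math.log((p.get(w, 0.0) + eps) /
                                        (q.get(w, 0.0) + eps))
               for w in vocab)

def kl_rank_sentences(sentences):
    """Rank sentence indices by KL(doc-LM || sentence-LM), ascending.
    Lower divergence = the sentence better matches the document's
    overall word distribution, so it ranks higher in the summary."""
    doc_lm = unigram_lm([t for s in sentences for t in s])
    vocab = set(doc_lm)
    scored = [(kl_divergence(doc_lm, unigram_lm(s), vocab), i)
              for i, s in enumerate(sentences)]
    return [i for _, i in sorted(scored)]

ranked = kl_rank_sentences([["cat", "dog", "fish"], ["cat"], ["zebra"]])
```

Here the first sentence ranks best because it covers most of the document's vocabulary mass, which is exactly the intuition the KL criterion formalizes.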
Extractive summarization is a process that selects the most salient sentences from a document (or a set of documents) and assembles them into an informative summary, helping users browse and assimilate the main theme of the document efficiently. Our work in this paper continues this general line of research, and its main contributions are twofold. First, we explore leveraging the recently proposed word mover's distance (WMD) metric, in conjunction with semantic-aware continuous-space representations of words, to authentically capture finer-grained sentence-to-document and/or sentence-to-sentence semantic relatedness for effective use in the summarization process. Second, we investigate combining our proposed approach with several state-of-the-art summarization methods that originally adopted conventional term-overlap or bag-of-words (BOW) approaches for similarity calculation. A series of experiments conducted on a typical broadcast news summarization task suggests the performance merits of our proposed approach in comparison to the mainstream methods.
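Full WMD requires solving an optimal-transport problem between the two bags of word embeddings; a commonly used cheap approximation is the *relaxed* WMD, where each word simply travels to its nearest neighbour in the other sentence. The sketch below uses that relaxation with hand-made toy embeddings (`emb` is purely illustrative; real use would plug in pretrained continuous-space word vectors as the abstract describes):

```python
import math

def relaxed_wmd(sent_a, sent_b, emb):
    """Relaxed word mover's distance: a lower bound on full WMD in which
    every word in one sentence moves to its nearest embedded word in the
    other sentence; taken symmetrically via the max of both directions.

    `sent_a`, `sent_b` are token lists; `emb` maps word -> vector.
    """
    def euclid(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    def one_way(src, dst):
        # Average nearest-neighbour travel cost from src into dst.
        return sum(min(euclid(emb[w], emb[x]) for x in dst)
                   for w in src) / len(src)

    return max(one_way(sent_a, sent_b), one_way(sent_b, sent_a))

# Toy 2-D embeddings: semantically close words sit close together.
emb = {"king": (1.0, 0.0), "queen": (0.9, 0.1),
       "apple": (0.0, 1.0), "fruit": (0.1, 0.9)}
d_related = relaxed_wmd(["king"], ["queen"], emb)
d_unrelated = relaxed_wmd(["king"], ["apple"], emb)
```

Unlike term-overlap or BOW similarity, this distance is small for sentences sharing no surface words as long as their embeddings are close, which is the finer-grained relatedness the abstract refers to.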
The complexities of the rendezvous and lockout problems for propositional concurrent programs are investigated in detail. We develop a unified strategy, based on domino tiling, to show that these two problems are, for a variety of propositional concurrent programs, complete for a broad spectrum of complexity classes, ranging from NLOGSPACE and PTIME through NP and PSPACE to EXPTIME. Our technique is novel in that it demonstrates how two seemingly unrelated models, namely propositional concurrent programs and dominoes, can be linked together in a natural and elegant fashion.
Extractive summarization is intended to automatically select a set of representative sentences from a text or spoken document that can concisely express the most important topics of the document. Language modeling (LM) has proven to be a promising framework for performing extractive summarization in an unsupervised manner. However, there remain two fundamental challenges facing existing LM-based methods. One is how to construct the sentence models involved in the LM framework more accurately without resorting to external information sources. The other is how to additionally take into account the sentence-level structural relationships embedded in a document for important sentence selection. To address these two challenges, in this paper we explore a novel approach that generates overlapped clusters to extract sentence-relatedness information from the document to be summarized, which can be used not only to enhance the estimation of various sentence models but also to exploit sentence-level structural relationships for better summarization performance. Further, the utilities of our proposed methods and several state-of-the-art unsupervised methods are analyzed and compared extensively. A series of experiments conducted on a Mandarin broadcast news summarization task demonstrates the effectiveness and viability of our method.
Extractive summarization, with the intention of automatically selecting a set of representative sentences from a text (or spoken) document so as to concisely express the most important theme of the document, has been an active area of experimentation and development. A recent trend of research is to employ the language modeling (LM) approach for important sentence selection, which has proven effective for performing extractive summarization in an unsupervised fashion. However, one of the major challenges facing the LM approach is how to formulate the sentence models and estimate their parameters more accurately for each text (or spoken) document to be summarized. This paper extends this line of research, and its contributions are threefold. First, we propose a positional language modeling framework that uses different granularities of position-specific information to better estimate the sentence models involved in summarization. Second, we explore integrating the positional cues into relevance modeling through a pseudo-relevance feedback procedure. Third, the utilities of the various methods originating from our proposed framework and several well-established unsupervised methods are analyzed and compared extensively. Empirical evaluations conducted on a broadcast news summarization task demonstrate the performance merits of our summarization methods.
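The core idea of positional language modeling, spreading each word occurrence's count over nearby positions with a proximity kernel so that the LM becomes position-specific, can be sketched as follows. This is a generic Gaussian-kernel illustration in the spirit of the positional-LM literature, not the paper's exact framework, and the kernel choice and `sigma` value are assumptions:

```python
import math
from collections import defaultdict

def positional_lm(tokens, center, sigma=2.0):
    """Position-specific unigram LM: each occurrence of a word at
    position j contributes a Gaussian-kernel weight exp(-(center-j)^2
    / (2 sigma^2)) to the model estimated at position `center`, so
    nearby words dominate the distribution.
    """
    weights = defaultdict(float)
    for j, w in enumerate(tokens):
        weights[w] += math.exp(-((center - j) ** 2) / (2 * sigma ** 2))
    total = sum(weights.values())
    return {w: c / total for w, c in weights.items()}

lm = positional_lm(["summer", "news", "report", "storm"], center=0, sigma=1.0)
```

Estimating one such model per position (or per coarser region) yields the "different granularities of position-specific information" that the abstract contrasts with a single document-wide unigram model.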