Figure 1: Given an input Horse model (a), our method generates a skin-frame structure (b) that approximates the model, minimizing the cost of the material used to print it. The frame structure is designed to meet various constraints via an optimization scheme. In (b) we remove the front part of the skin to show the internal frame structure. (c) is a photo of a printed model with part of its skin removed to reveal the internal struts. (d) is a photo of the printed model generated by our method. A small red drawing pin is placed under the object as a size reference in (c) and (d). The material usage in (d) is only 15.0% of that of a solid object.
3D printers have become popular in recent years and enable the fabrication of custom objects by home users. However, the cost of printing material remains high. In this paper, we present an automatic method for designing a skin-frame structure that reduces the material cost of printing a given 3D object. The frame structure is designed by an optimization scheme which significantly reduces material volume and is guaranteed to be physically stable, geometrically approximate, and printable. Furthermore, the number of struts is minimized by solving an ℓ0 sparsity optimization. We formulate the design as a multi-objective programming problem and develop an iterative extension of the preemptive algorithm to find a compromise solution. We demonstrate the applicability and practicability of our solution by printing various objects using both powder-type and extrusion-type 3D printers. Our method is shown to be more cost-effective than previous works.
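The abstract's ℓ0 sparsity term penalizes the number of nonzero struts directly. A standard way to attack such problems is iterative hard thresholding, sketched below on a toy least-squares model (the matrix `A`, vector `b`, and the linearized fitting objective are hypothetical stand-ins, not the paper's actual stability constraints):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = x.copy()
    if k < x.size:
        out[np.argsort(np.abs(x))[:-k]] = 0.0
    return out

def iht(A, b, k, steps=200):
    """Iterative hard thresholding for min ||A r - b||^2 s.t. ||r||_0 <= k.

    r can be read as a vector of strut radii: entries forced to zero
    correspond to struts removed from the frame.
    """
    lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
    r = np.zeros(A.shape[1])
    for _ in range(steps):
        r = hard_threshold(r - lr * A.T @ (A @ r - b), k)
    return r
```

On a trivial instance (`A = I`), the method simply keeps the `k` dominant entries, which matches the intuition of pruning the least load-bearing struts.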
Many geometry processing applications are sensitive to noise and sharp features. Although there are a number of works on detecting noise and sharp features in the literature, they are heuristic. On one hand, traditional denoising methods use filtering operators to remove noise, but they may blur sharp features and shrink the object. On the other hand, noise makes feature detection, which relies on the computation of differential properties, unreliable and unstable. Therefore, detecting noise and features on discrete surfaces remains challenging. In this article, we present an approach for decoupling noise and features on 3D shapes. Our approach consists of two phases. In the first phase, a base mesh is estimated from the input noisy data by a global Laplacian regularization denoising scheme. The estimated base mesh is guaranteed to asymptotically converge to the true underlying surface with probability one as the sample size goes to infinity. In the second phase, an ℓ1-analysis compressed sensing optimization is proposed to recover sharp features from the residual between the base mesh and the input mesh. This is based on our discovery that sharp features can be sparsely represented in a coherent dictionary constructed from the pseudo-inverse of the Laplacian of the shape. The features are recovered from the residual in a progressive way. Theoretical analysis and experimental results show that our approach can reliably and robustly remove noise and extract sharp features on 3D shapes.
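The second phase solves a sparse ℓ1 recovery problem over the residual. A common solver for such objectives is ISTA (iterative shrinkage-thresholding); the sketch below uses the simpler synthesis form `min_c 0.5||Phi c - d||^2 + lam||c||_1` as a stand-in for the paper's ℓ1-analysis formulation, with `Phi` (the dictionary) and `d` (the residual) as hypothetical inputs:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, d, lam, steps=500):
    """ISTA for min_c 0.5 ||Phi c - d||^2 + lam ||c||_1.

    c holds sparse feature coefficients; Phi c reconstructs the sharp
    features hidden in the residual d.
    """
    L = np.linalg.norm(Phi, 2) ** 2  # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(steps):
        c = soft(c - (Phi.T @ (Phi @ c - d)) / L, lam / L)
    return c
```

With an orthonormal dictionary the solution reduces to soft-thresholding the residual, which illustrates why small-magnitude residual components (noise) are discarded while large ones (features) survive.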
We present an adaptive slicing scheme for reducing the manufacturing time of 3D printing systems. Based on a new saliency-based metric, our method optimizes the thicknesses of slicing layers to save printing time while preserving the visual quality of the printed results. We formulate the problem as a constrained ℓ0 optimization and compute the slicing result via a two-step optimization scheme. To further reduce printing time, we develop a saliency-based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately. We validate our method on a large set of 3D shapes ranging from CAD models to scanned objects. Results show that our method saves printing time by 30-40% and generates 3D objects that are visually similar to the ones printed at the finest resolution possible.
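The core idea of adaptive slicing can be sketched greedily: use thin layers where a saliency signal is high and thick layers elsewhere, subject to the printer's layer-thickness limits. This toy sketch is not the paper's constrained ℓ0 optimizer; the threshold rule and all parameter names are hypothetical, and heights are kept in integer micrometres to avoid float drift:

```python
def adaptive_slices(saliency, height_um, t_min_um=100, t_max_um=300,
                    s_thresh=0.5):
    """Greedy adaptive slicing sketch.

    saliency:  function mapping height (mm) to a score in [0, 1]
    height_um: total object height in micrometres
    Returns the list of layer-boundary heights in micrometres.
    """
    zs = [0]
    while zs[-1] < height_um:
        # Thin layers in salient regions, thick layers elsewhere.
        t = t_min_um if saliency(zs[-1] / 1000.0) > s_thresh else t_max_um
        zs.append(min(zs[-1] + t, height_um))
    return zs
```

For a uniformly non-salient object the slicer emits only thick layers, cutting the layer count (and hence print time) by the ratio `t_max_um / t_min_um`, which mirrors the 30-40% savings regime reported in the abstract.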
Video-based person re-identification (ReID) is a challenging problem, where video tracks of people across non-overlapping cameras are available for matching. Feature aggregation from a video track is a key step in video-based person ReID. Many existing methods tackle this problem with average/maximum temporal pooling or RNNs with attention. However, these methods cannot deal with temporal dependency and spatial misalignment problems at the same time. We are inspired by video action recognition, which involves identifying different actions from video tracks. Firstly, we apply 3D convolutions to the video volume, instead of 2D convolutions across frames, to extract spatial and temporal features simultaneously. Secondly, we use a non-local block to tackle the misalignment problem and capture spatial-temporal long-range dependencies. As a result, the network can learn useful spatial-temporal information as a weighted sum of the features at all spatial and temporal positions in the input feature map. Experimental results on three datasets show that our framework outperforms state-of-the-art approaches by a large margin on multiple metrics.
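The "weighted sum over all space and temporal positions" is exactly what a non-local block computes. A minimal NumPy sketch of the embedded-Gaussian variant is given below; it omits the learned embedding and output projections of the trained block, and treats the input as already flattened to `(positions, channels)`:

```python
import numpy as np

def non_local(x):
    """Minimal non-local operation (embedded-Gaussian form, no learned
    weights): each output position is a softmax-weighted sum of the
    features at ALL positions, so distant space-time locations can
    directly influence each other.

    x: array of shape (positions, channels), positions = T*H*W flattened.
    """
    f = x @ x.T                              # pairwise dot-product similarity
    f = f - f.max(axis=1, keepdims=True)     # stabilize the softmax
    w = np.exp(f)
    w /= w.sum(axis=1, keepdims=True)        # softmax over all positions
    return w @ x                             # aggregate features
```

When all positions carry identical features the attention weights are uniform and the block returns the input unchanged, which is a quick sanity check on the softmax aggregation.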