Deep generative neural networks (DGNNs) have achieved realistic and high-quality data generation. In particular, the adversarial training scheme has been applied to many DGNNs and has exhibited powerful performance. Despite recent advances in generative networks, identifying the image generation mechanism remains challenging. In this paper, we present an explorative sampling algorithm to analyze the generation mechanism of DGNNs. Our method efficiently obtains samples with attributes identical to those of a query image, from the perspective of the trained model. We define generative boundaries, which determine the activation of nodes in an internal layer, and probe the inside of the model with this information. To handle the large number of boundaries, we obtain an essential set of boundaries via optimization. By gathering samples within the region enclosed by the generative boundaries, we can empirically reveal the characteristics of the internal layers of DGNNs. We also demonstrate that our algorithm finds more homogeneous, model-specific samples than variants of the ϵ-based sampling method.
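The contrast between ϵ-ball sampling and boundary-constrained sampling can be illustrated with a toy sketch. The code below is not the paper's algorithm: it uses a hypothetical one-layer "generator" in which the generative boundaries are simply the hyperplanes that flip each hidden unit on or off, samples an ϵ-ball around a query latent code, and then keeps only candidates that share the query's activation pattern (i.e., that stay inside the same region bounded by generative boundaries). All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-hidden-layer generator: the "generative boundaries" are the
# hyperplanes W z + b = 0 that flip each hidden unit's activation on/off.
W = rng.standard_normal((8, 4))   # 8 hidden units, 4-dim latent space
b = rng.standard_normal(8)

def activation_pattern(z):
    """Binary on/off pattern of the hidden units for latent code z."""
    return W @ z + b > 0

def epsilon_sampling(z_query, eps=0.3, n=200):
    """Baseline: sample uniformly inside an eps-ball around the query latent."""
    d = rng.standard_normal((n, z_query.size))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    r = eps * rng.random(n) ** (1 / z_query.size)
    return z_query + d * r[:, None]

def boundary_constrained(z_query, candidates):
    """Keep only candidates on the query's side of every generative boundary,
    i.e., with an identical activation pattern (same linear region)."""
    target = activation_pattern(z_query)
    return [z for z in candidates
            if np.array_equal(activation_pattern(z), target)]

z_q = rng.standard_normal(4)
cands = epsilon_sampling(z_q)
same_region = boundary_constrained(z_q, cands)
# Some eps-ball samples typically cross a boundary and are filtered out.
print(len(cands), len(same_region))
```

The filtered set is, by construction, homogeneous with respect to the model's internal activations, which is the property the abstract contrasts with plain ϵ-based sampling.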
Quality assurance of Additive Manufacturing (AM) products has become an important issue as AM technology extends its applications throughout industry. However, because there is no definitive measure for quantifying product error and monitoring the manufacturing process, many attempts have been made to propose effective monitoring systems for the quality assurance of AM products. In this research, a novel approach for quantifying error in real time is presented through a closed-loop, vision-based tracking method. As conventional AM processes are open-loop, we focus on implementing real-time error quantification of the products through a closed-loop process. Three test models are designed for the experiment, and the tracking data from the camera are compared with the G-code of the product to evaluate geometric errors. The results of the camera analysis are then validated against results obtained from a 3D scanner.
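The comparison between camera tracking data and the commanded G-code can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it parses only `G1 X.. Y..` linear moves, and the "tracked" coordinates are simulated values standing in for real camera output. A real system would interpolate along the commanded toolpath rather than comparing against vertices only.

```python
import math

# Hypothetical minimal G-code: only linear moves "G1 X.. Y.." are considered.
gcode = """G1 X0.0 Y0.0
G1 X10.0 Y0.0
G1 X10.0 Y10.0"""

def parse_xy(text):
    """Extract (X, Y) targets from G1 linear-move commands."""
    pts = []
    for line in text.splitlines():
        if line.startswith("G1"):
            coords = {w[0]: float(w[1:]) for w in line.split()[1:]}
            pts.append((coords.get("X", 0.0), coords.get("Y", 0.0)))
    return pts

def point_errors(tracked, reference):
    """Per-point geometric error: distance from each tracked point to the
    nearest reference (G-code) point."""
    return [min(math.dist(t, r) for r in reference) for t in tracked]

ref = parse_xy(gcode)
tracked = [(0.1, -0.05), (9.8, 0.2), (10.1, 9.9)]   # simulated camera data
errors = point_errors(tracked, ref)
print([round(e, 3) for e in errors])   # → [0.112, 0.283, 0.141]
```

Thresholding these per-point errors in real time is one way a closed-loop monitor could flag out-of-tolerance deposition as it happens.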
Despite significant improvements in the image generation performance of Generative Adversarial Networks (GANs), generations with low visual fidelity are still observed. Because widely used GAN metrics focus on the overall performance of the model, evaluating the quality of individual generations or detecting defective generations is challenging. While recent studies try to detect featuremap units that cause artifacts and to evaluate individual samples, these approaches require additional resources, such as external networks or a large amount of training data, to approximate the real data manifold.
In this work, we propose the concept of local activation and devise a metric based on it to detect artifact generations without additional supervision.
We empirically verify that our approach can detect and correct artifact generations from GANs on various datasets. Finally, we present a geometrical analysis that partially reveals the relation between the proposed concept and low visual fidelity.
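The flavor of such an unsupervised detector can be shown with a toy score. The definition below is hypothetical and is not the paper's metric: it scores a featuremap by its largest mean activation over any small spatial patch, and flags samples whose score is a statistical outlier relative to a batch, with no external network or extra training data involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_activation_score(fmap, k=3):
    """Illustrative score: the largest mean activation over any k x k spatial
    patch, averaged across channels (hypothetical definition). Unusually high
    local activation is taken as a proxy for an artifact-producing region."""
    c, h, w = fmap.shape
    best = -np.inf
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            best = max(best, fmap[:, i:i + k, j:j + k].mean())
    return best

# Simulated featuremaps: normal samples vs. one with a spiked local region.
normal = [rng.standard_normal((4, 8, 8)) for _ in range(20)]
bad = rng.standard_normal((4, 8, 8))
bad[:, 2:5, 2:5] += 5.0                       # localized over-activation

scores = [local_activation_score(f) for f in normal]
thresh = np.mean(scores) + 3 * np.std(scores)
is_artifact = local_activation_score(bad) > thresh
print(is_artifact)
```

The detection here needs only the generator's own internal activations, mirroring the abstract's point that no external network or real-data manifold approximation is required.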