Multi-view clustering aims to leverage information from multiple views to improve clustering. Most previous works assume that each view contains complete data. However, in real-world datasets a view often contains some missing data, giving rise to the incomplete multi-view clustering problem. Previous methods for this problem suffer from at least one of the following drawbacks: (1) employing shallow models, which cannot well handle the dependence and discrepancy among different views; (2) ignoring the hidden information of the missing data; (3) being dedicated to the two-view case. To eliminate all these drawbacks, in this work we present an Adversarial Incomplete Multi-view Clustering (AIMC) method. Unlike most existing methods, which only learn a new representation from the existing views, AIMC seeks the common latent space of multi-view data and performs missing data inference simultaneously. In particular, element-wise reconstruction and a generative adversarial network (GAN) are integrated to infer the missing data; they capture the overall structure and gain a deeper semantic understanding, respectively. Moreover, an aligned clustering loss is designed to obtain a better clustering structure. Experiments conducted on three datasets show that AIMC performs well and outperforms baseline methods.
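One ingredient of the approach described above, an element-wise reconstruction loss that is evaluated only on observed entries, can be sketched as follows. This is a minimal, hypothetical illustration with made-up shapes and names, not the paper's actual model; the "decoder outputs" are stand-ins for what the learned networks would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: 6 samples, views of width 4 and 3 (illustrative sizes).
X1 = rng.normal(size=(6, 4))
X2 = rng.normal(size=(6, 3))

# Missing-data masks: 1 = observed, 0 = missing (view 2 is incomplete).
M1 = np.ones_like(X1)
M2 = np.ones_like(X2)
M2[[1, 4], :] = 0.0  # samples 1 and 4 miss view 2 entirely

def masked_reconstruction_loss(X, X_hat, M):
    """Mean squared reconstruction error computed only on observed entries."""
    observed = M.sum()
    return float(((M * (X - X_hat)) ** 2).sum() / max(observed, 1.0))

# Stand-in "decoder outputs": noisy copies of the data for demonstration.
X1_hat = X1 + 0.1 * rng.normal(size=X1.shape)
X2_hat = X2 + 0.1 * rng.normal(size=X2.shape)

loss = (masked_reconstruction_loss(X1, X1_hat, M1)
        + masked_reconstruction_loss(X2, X2_hat, M2))
print(round(loss, 4))
```

Because missing entries are masked out, the reconstruction term never penalizes the network on data it has never seen; in the paper's framework the adversarial loss, not shown here, is what drives plausible values for those masked entries.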
Texture image decomposition of porcelain fragments based on a convolutional neural network is an energy-minimization algorithm: it maps the image into a suitable space in which its structure, texture, and noise can be effectively separated. This paper conducts a systematic study of image decomposition based on the variational method and compressed-sensing reconstruction with a convolutional neural network. A layered variational decomposition splits the image into structural and textural components, and a compressed-sensing algorithm based on a hybrid basis reconstructs these large-volume components. To further sparsify each feature component in compressed sensing, a tight-frame wavelet-based shearlet transform is constructed and combined with wave atoms to form a joint sparse dictionary. At the same sampling rate, this algorithm retains more image texture detail than comparable algorithms. Producing data that match the characteristics of the background text is essentially an image-based normalization, which is largely insensitive to the relative position, density, spacing, and thickness of the text. A super-resolution model tailored to specific texture features can further improve the restoration of such texture images. The dataset extracted by the classification method used in this paper accounts for 20% of the total dataset, while the PSNR improves by 0.1 dB on average. Therefore, in view of the requirements of future large-scale experimental training, this article mainly uses two standardized database formats (JPG/CSV) after segmentation. This dataset minimizes the differences among base texts of the same type and period, laying a foundation for reliable large-scale recognition in the future.
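The compressed-sensing reconstruction step can be illustrated, under strong simplifying assumptions, by recovering a sparse signal from random linear measurements with iterative soft thresholding (ISTA). This toy sketch substitutes a generic random sensing matrix for the paper's shearlet/wave-atom joint dictionary; all sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 128, 64, 5             # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                              # compressed measurements

def ista(A, y, lam=0.05, steps=1000):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)               # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print(round(rel_err, 3))
```

The same mechanism carries over to structure/texture reconstruction: the more concisely a dictionary represents a component, the fewer measurements suffice, which is the rationale for combining shearlets (structure) with wave atoms (oscillatory texture).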
With the improvement of computer science and technology, modern fashion design has become a joint product of designers, computer applications, and art design. 3D CAD design based on virtual technology has become an established technical means and is widely used in the fashion field. This paper mainly studies how to apply virtual technology to 3D CAD Hanfu design. Starting from the principles of garment CAD, it first introduces the functions and usage of the 3D virtual design software LookStailor X, including the construction of a 3D human model, the construction of a garment model, style-line drawing, and the conversion from 3D garment pieces to 2D garment pieces. It then carries out the virtual design of Hanfu through parametric 3D human modeling, 3D garment generation, and physical simulation. Finally, through an empirical study, the paper compares and analyzes the clothing under the three modes, demonstrates the advantages of 3D virtual clothing design with concrete figures, and forecasts the market prospects and economic contribution of 3D virtual clothing design.
Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, which leads to poor performance of the derived solutions. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusive solutions, in which all the advertisers involved in an auction deliberately bid an extremely low price; (2) they cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling multiple objectives of advertisers, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization; an evolutionary strategy does not need to model the underlying environment explicitly and is therefore well suited to bid optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both single advertisers' objectives and global profit improve significantly compared to state-of-the-art methods.
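The appeal of an evolutionary strategy here is that it optimizes bid parameters from black-box reward evaluations alone, with no explicit model of the auction environment. The following single-agent, single-parameter toy (entirely hypothetical, and far simpler than the multi-agent MACG setup) shows the mechanics: an advertiser bids a multiple theta of its value, and an OpenAI-style evolutionary strategy climbs toward the profit-maximizing multiplier:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy auction log: per-auction value to the advertiser and competing price.
values = rng.uniform(0.5, 2.0, size=200)
prices = rng.uniform(0.2, 1.5, size=200)

def objective(theta):
    """Total profit when bidding theta * value in second-price auctions.

    A win pays the competing price, so per-win profit is value - price
    (negative if we overbid and win an auction worth less than its price).
    """
    wins = theta * values > prices
    return float(np.where(wins, values - prices, 0.0).sum())

def evolve(theta=0.5, pop=40, sigma=0.1, lr=0.05, iters=200):
    """Evolutionary strategy: estimate a search gradient from perturbations."""
    for _ in range(iters):
        eps = rng.normal(size=pop)                      # random perturbations
        rewards = np.array([objective(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += lr / (pop * sigma) * float(eps @ rewards)
    return theta

theta_star = evolve()
print(round(theta_star, 2))
```

In this toy the optimum is theta = 1 (truthful bidding: win exactly the auctions with positive margin), and the strategy recovers it from reward samples alone; MACG applies the same gradient-free principle to a policy network shared across many cooperating agents.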