Abstract. Multi-view clustering, which explores complementary information across multiple distinct feature sets, has received considerable attention. For accurate clustering, all data with the same label should be clustered together regardless of the view in which they are represented. However, this is not guaranteed in existing approaches. To address this issue, we propose Adaptive Multi-View Semi-Supervised Nonnegative Matrix Factorization (AMVNMF), which uses label information as hard constraints to ensure that data with the same label are clustered together, thereby enhancing the discriminating power of the new representations. In addition, AMVNMF provides a viable way to learn the weight of each view adaptively with only a single parameter. By employing the L2,1-norm, AMVNMF is also robust to noise and outliers. We further develop an efficient iterative algorithm for solving the optimization problem. Experiments on five well-known datasets demonstrate the effectiveness of AMVNMF compared with existing state-of-the-art approaches in terms of accuracy and normalized mutual information.
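To make the robustness claim concrete, the sketch below shows why the L2,1-norm tolerates outliers better than the squared Frobenius norm usually used in NMF losses. This is an illustrative toy, not the authors' implementation; the matrix `X` stands in for a per-sample residual matrix.

```python
import numpy as np

def l21_norm(X):
    """L2,1-norm: the sum of the L2 norms of the rows of X.

    Each sample's (row's) error enters the loss linearly rather than
    quadratically, so a single outlying row cannot dominate the
    objective the way it does under the squared Frobenius norm.
    """
    return np.sum(np.linalg.norm(X, axis=1))

# Toy comparison: one outlier row inflates the squared Frobenius
# loss far more than the L2,1 loss.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [100.0, 0.0]])          # third row is an outlier
print(l21_norm(X))                    # 1 + 1 + 100 = 102.0
print(np.linalg.norm(X) ** 2)        # 1 + 1 + 10000 = 10002.0
```

The 100x-larger outlier contributes 100 to the L2,1 loss but 10000 to the squared Frobenius loss, which is the intuition behind the robustness property cited in the abstract.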
We address the problem of disentangled representation learning with independent latent factors in graph convolutional networks (GCNs). Current methods usually learn a node representation by describing its neighborhood as a perceptual whole in a holistic manner, ignoring the entanglement of the latent factors. However, a real-world graph is formed by the complex interaction of many latent factors (e.g., shared hobbies, education, or workplaces in a social network), and little effort has been made toward exploring disentangled representations in GCNs. In this paper, we propose the novel Independence Promoted Graph Disentangled Networks (IPGDN) to learn disentangled node representations while enhancing the independence among them. In particular, we first perform disentangled representation learning via a neighborhood routing mechanism, and then employ the Hilbert-Schmidt Independence Criterion (HSIC) to enforce independence between the latent representations; HSIC is integrated into the graph convolutional framework as a regularizer at the output layer. Experimental studies on real-world graphs validate our model and demonstrate that our algorithms outperform the state-of-the-art by a wide margin in different network applications, including semi-supervised graph classification, graph clustering, and graph visualization.
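The HSIC regularizer mentioned above has a standard empirical estimator that is simple to compute. The sketch below is a minimal illustration of that estimator with linear kernels, not the paper's actual regularizer code; the variable names and the linear-kernel choice are assumptions for the example.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC given two kernel (Gram) matrices K, L.

    HSIC = tr(K H L H) / (n - 1)^2, where H = I - (1/n) 1 1^T is the
    centering matrix. Values near zero indicate (approximate)
    independence of the two sets of representations.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def linear_kernel(Z):
    return Z @ Z.T

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
B = rng.standard_normal((200, 5))            # independent of A
hsic_indep = hsic(linear_kernel(A), linear_kernel(B))
hsic_dep = hsic(linear_kernel(A), linear_kernel(A))  # fully dependent
print(hsic_indep < hsic_dep)  # True: dependence yields a larger score
```

Minimizing such a term between pairs of latent factor representations is what pushes the factors toward independence in the framework described above.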
Lung cancer has one of the highest morbidity and mortality rates in the world. Lung nodules are an early indicator of lung cancer; therefore, accurate detection and image segmentation of lung nodules is of great significance for early diagnosis. This paper proposes a CT (computed tomography) image lung nodule segmentation method based on 3D-UNet and Res2Net, establishing a new convolutional neural network called 3D-Res2UNet. 3D-Res2UNet has a symmetrical hierarchical connection network with strong multi-scale feature extraction capabilities, enabling the network to express multi-scale features at a finer granularity while increasing the receptive field of each layer. This structure also eases the training of deep networks: it is less prone to vanishing and exploding gradients, which improves detection and segmentation accuracy. The U-shaped network preserves feature-map resolution while effectively recovering lost features. The method was tested on the LUNA16 public dataset, where the dice coefficient reached 95.30% and the recall reached 99.1%, indicating good performance in lung nodule image segmentation.
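The dice coefficient reported above is the standard overlap metric for segmentation. A minimal sketch of how it is computed on binary masks (illustrative only; the paper's evaluation operates on 3D CT volumes, and the 2D example here is an assumption for brevity):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks.

    dice = 2 |P ∩ T| / (|P| + |T|); 1.0 means perfect overlap,
    0.0 means no overlap. eps guards against division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D example: prediction covers 2 pixels, ground truth 1,
# and they overlap on 1 pixel.
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(p, t), 3))  # 2*1 / (2+1) ≈ 0.667
```

The same formula applies unchanged to 3D voxel masks, which is how a 95.30% dice score on LUNA16 would be computed.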
Through a comparative analysis, we confirm that the dark channel pixel values of smoke images are higher than those of non-smoke images. This means that the dark channel of a smoke image carries more fine-grained information about the smoke, which greatly benefits detailed smoke feature extraction. Against this background, we propose a dual convolution network using the dark channel prior (DarkC-DCN) for image smoke classification. In DarkC-DCN, starting from AlexNet and through continuous structural improvement and optimization, we build a detail-oriented CNN to extract the detailed features of dark channel images. Similarly, to extract the general features of the image, we design another residual network based on AlexNet, which serves as the main framework of the entire network. To improve the robustness of the network, the two branches are trained separately on their respective inputs. In addition, we perform feature fusion before the shared fully connected layer. In the experiments, we also add non-smoke data that resembles smoke to the public smoke dataset for data expansion. The experimental results indicate that the model performs well overall, reaching an accuracy of 98.56%. INDEX TERMS Dark channel prior, dual convolution network, image smoke classification, AlexNet.
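The dark channel referred to above is computed per pixel as the minimum intensity over the color channels, followed by a minimum filter over a local patch. The sketch below illustrates that standard computation (following He et al.'s dark channel prior); the patch size and edge padding are assumptions, not details taken from this paper.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an H x W x 3 image with values in [0, 1].

    Step 1: per-pixel minimum over the three color channels.
    Step 2: minimum filter over a patch x patch neighborhood
    (edge-padded), giving the local dark channel.
    """
    min_rgb = img.min(axis=2)                 # per-pixel channel minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

For a haze-free outdoor image the dark channel is close to zero almost everywhere; smoke, like haze, lifts these values, which is the statistical cue the abstract's comparative analysis relies on.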
Currently, lung cancer has one of the highest mortality rates because it is often detected too late; early detection is therefore essential to reduce the risk of death. Pulmonary nodules are considered key indicators of primary lung cancer, and developing an efficient, accurate computer-aided diagnosis system for pulmonary nodule detection is an important goal. Typically, such a system consists of two parts: candidate nodule extraction and false-positive reduction of candidate nodules. Reducing false positives (FPs) among candidate nodules remains an important challenge due to the wide morphological variation of nodules and their similarity to other organs. In this study, we propose a novel multi-scale heterogeneous three-dimensional (3D) convolutional neural network (MSH-CNN) based on chest computed tomography (CT) images. The design follows three main strategies: (1) using multi-scale 3D nodule blocks with different levels of contextual information as inputs; (2) using two different 3D CNN branches to extract expression features; (3) using a set of weights determined by backpropagation to fuse the expression features produced by the two branches. To evaluate performance, we trained and tested on the Lung Nodule Analysis 2016 (LUNA16) dataset, achieving an average competitive performance metric (CPM) score of 0.874 and a sensitivity of 91.7% at two FPs/scan. Moreover, our framework is universal and can easily be extended to other candidate false-positive reduction tasks in 3D object detection, as well as 3D object classification.
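Strategy (3) above, fusing branch features with weights learned by backpropagation, can be sketched as a softmax-weighted sum. This is a minimal illustration of the general pattern, not the paper's actual fusion layer; the softmax normalization and the function names are assumptions.

```python
import numpy as np

def fuse_features(feats, logits):
    """Weighted fusion of per-branch feature vectors.

    feats:  list of same-length feature vectors, one per branch.
    logits: trainable fusion parameters (learned by backpropagation
            in the setting described above); a softmax turns them
            into normalized, non-negative fusion weights.
    """
    w = np.exp(logits - logits.max())   # stable softmax
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, feats))

f1 = np.array([1.0, 0.0])               # branch 1 features
f2 = np.array([0.0, 1.0])               # branch 2 features
fused = fuse_features([f1, f2], np.array([0.0, 0.0]))
print(fused)  # equal logits -> equal weights -> [0.5, 0.5]
```

Because the fusion weights are ordinary parameters, gradients flow through them during training, letting the network learn how much each scale's branch should contribute.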