Anomaly detection on attributed networks has attracted considerable research interest due to the wide use of attributed networks in modeling complex systems. Recently, deep learning-based anomaly detection methods have shown promising results over shallow approaches, especially on networks with high-dimensional attributes and complex structures. However, existing approaches, which employ a graph autoencoder as their backbone, do not fully exploit the rich information of the network, resulting in suboptimal performance. Furthermore, these methods do not directly target anomaly detection in their learning objective and fail to scale to large networks due to the full-graph training mechanism. To overcome these limitations, in this paper we present a novel Contrastive self-supervised Learning framework for Anomaly detection on attributed networks (CoLA for abbreviation). Our framework fully exploits the local information in network data by sampling a novel type of contrastive instance pair, which captures the relationship between each node and its neighboring substructure in an unsupervised way. Meanwhile, a well-designed graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure, and to measure the agreement of each instance pair with its output scores. The scores predicted by the contrastive learning model over multiple rounds are further used to estimate the abnormality of each node statistically. In this way, the learning model is trained with an anomaly detection-aware objective. Furthermore, since the input of the graph neural network module consists of batches of instance pairs instead of the full network, our framework can scale to large networks.
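The core idea above (score each node by how well it agrees with its own neighborhood versus a random one, averaged over several sampling rounds) can be illustrated with a deliberately simplified toy sketch. This is not the paper's model: the learned GNN discriminator is replaced here by a sigmoid of a dot product between a node's attributes and a neighborhood mean, purely to show the positive-pair/negative-pair scoring scheme.

```python
import numpy as np

def anomaly_scores(adj, attrs, rounds=16, seed=0):
    """Toy contrastive anomaly scoring (illustrative, not the CoLA model).

    For each node, form a positive pair (node vs. the mean attributes of its
    own neighbors) and a negative pair (node vs. the mean attributes of a
    random node's neighbors). Agreement is a sigmoid of a dot product.
    The anomaly score is the average negative-minus-positive agreement:
    normal nodes agree more with their own neighborhood, anomalies do not.
    """
    rng = np.random.default_rng(seed)
    n = attrs.shape[0]
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def neigh_mean(i):
        nbrs = np.flatnonzero(adj[i])
        return attrs[nbrs].mean(axis=0) if nbrs.size else attrs[i]

    scores = np.zeros(n)
    for _ in range(rounds):
        for i in range(n):
            pos = sigmoid(attrs[i] @ neigh_mean(i))          # own neighborhood
            j = rng.integers(n)                              # random context
            neg = sigmoid(attrs[i] @ neigh_mean(j))
            scores[i] += neg - pos
    return scores / rounds
```

On a small graph where one node's attributes contradict its neighborhood, that node receives the highest score. The multi-round averaging mirrors the statistical estimation step described in the abstract.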
The aim of multi-output learning is to simultaneously predict multiple outputs given an input. It is an important learning problem for decision-making, since making decisions in the real world often involves multiple complex factors and criteria. In recent years, an increasing number of studies have focused on ways to predict multiple outputs at once. Such efforts have taken different forms according to the particular multi-output learning problem under study. Classic cases of multi-output learning include multi-label learning, multi-dimensional learning, multi-target regression, and others. From our survey of the topic, we were struck by a lack of studies that generalize the different forms of multi-output learning into a common framework. This paper fills that gap with a comprehensive review and analysis of the multi-output learning paradigm. In particular, taking inspiration from big data, we characterize the 4 Vs of multi-output learning, i.e., volume, velocity, variety, and veracity, and the ways in which the 4 Vs both benefit and bring challenges to multi-output learning. We analyze the life cycle of output labeling, present the main mathematical definitions of multi-output learning, and examine the field's key challenges and corresponding solutions as found in the literature. Several model evaluation metrics and popular data repositories are also discussed. Last but not least, we highlight some emerging challenges in multi-output learning from the perspective of the 4 Vs as potential research directions worthy of further study.
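The simplest instance of the paradigm described above is multi-label learning reduced to independent single-output problems, often called binary relevance. The sketch below (an illustrative baseline, not a method from the survey) fits one ridge-regularized linear scorer per output and thresholds the scores, assuming a binary label matrix Y with one column per label.

```python
import numpy as np

def binary_relevance_fit(X, Y, ridge=1e-3):
    """Fit one independent linear scorer per output label.

    X: (n, d) input matrix; Y: (n, q) binary label matrix.
    Returns W of shape (d, q); all q outputs share one normal-equations
    solve, but each column of W is an independent single-label model.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ Y)

def binary_relevance_predict(X, W, threshold=0.5):
    """Predict the full label vector for each input by thresholding scores."""
    return (X @ W >= threshold).astype(int)
```

Binary relevance ignores correlations between outputs, which is exactly the limitation that motivates the more sophisticated multi-output methods the survey covers.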
Deep brain stimulation (DBS) is an established treatment for patients with Parkinson’s disease (PD). Sleep disorders are common complications of PD and are affected by subthalamic DBS treatment. To achieve more precise neuromodulation, chronic sleep monitoring and closed-loop DBS adapted to sleep-wake cycles could potentially be utilized. Local field potential (LFP) signals sensed by the DBS electrode could be processed as primary feedback signals. This is the first study to systematically investigate sleep-stage classification based on LFPs in the subthalamic nucleus (STN). With our newly developed recording and transmission system, STN-LFPs were collected from 12 PD patients during wakefulness and nocturnal polysomnography sleep monitoring one month after DBS implantation. Automatic sleep-stage classification models were built with robust and interpretable machine learning methods (support vector machine and decision tree). The accuracy, sensitivity, selectivity, and specificity of the classification reached high values (above 90% on most measures) at both the group and individual levels. Features extracted in the alpha (8–13 Hz), beta (13–35 Hz), and gamma (35–50 Hz) bands were found to contribute the most to the classification. These results will directly guide the engineering development of implantable sleep monitoring and closed-loop DBS and pave the way for a better understanding of STN-LFP sleep patterns.
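The pipeline described above (extract band-power features from an LFP epoch, then classify the sleep stage) can be sketched in a few lines. This is a simplified illustration, not the study's implementation: the periodogram below stands in for whatever spectral estimator the authors used, and a tiny nearest-centroid classifier stands in for their SVM and decision-tree models.

```python
import numpy as np

# Frequency bands reported as most informative in the abstract.
BANDS = {"alpha": (8, 13), "beta": (13, 35), "gamma": (35, 50)}

def band_powers(signal, fs):
    """Power of one LFP epoch in the alpha/beta/gamma bands,
    via a plain FFT periodogram (a simplified feature extractor)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

def nearest_centroid_classify(train_feats, train_labels, feats):
    """Assign the label whose mean feature vector is closest;
    a stand-in for the SVM / decision-tree models in the study."""
    labels = sorted(set(train_labels))
    centroids = {c: train_feats[np.array(train_labels) == c].mean(axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(feats - centroids[c]))
```

On synthetic epochs where wakefulness is simulated as beta-dominated activity and sleep as alpha-dominated activity, the pipeline separates the two classes; real STN-LFP data would of course require the full feature set and classifiers from the study.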