In multi-label learning, each instance is associated with multiple labels, and the crucial task is to leverage label correlations when building models. Deep neural network methods usually embed the feature and label information jointly into a latent space to exploit label correlations. However, the success of these methods depends heavily on a precise choice of model depth. Deep forest is a recent deep learning framework based on tree-model ensembles that does not rely on backpropagation. We argue that the advantages of deep forest models make them well suited to multi-label problems, and we therefore design the Multi-Label Deep Forest (MLDF) method with two mechanisms: measure-aware feature reuse and measure-aware layer growth. The measure-aware feature reuse mechanism reuses good representations from the previous layer, guided by confidence. The measure-aware layer growth mechanism ensures that MLDF gradually increases model complexity according to the performance measure. MLDF handles two challenging problems at the same time: restricting model complexity to ease overfitting, and optimizing the performance measure of the user's choice, since multi-label evaluation involves many different measures. Experiments show that our proposal not only beats the compared methods on six measures over benchmark datasets but also enjoys label-correlation discovery and other desired properties of multi-label learning.
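To make the measure-aware layer growth mechanism concrete, below is a minimal Python sketch of a cascade that keeps adding forest layers while a chosen multi-label measure improves on held-out data and truncates otherwise. It is a simplified illustration, not the paper's implementation: macro-F1 stands in for an arbitrary measure, scikit-learn forests stand in for the ensemble components, and the helper names (`label_probs`, `grow_cascade`) are hypothetical.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import f1_score  # stand-in for any multi-label measure

def label_probs(forest, X):
    # Per-label P(y = 1); assumes every label sees both classes in training.
    return np.hstack([p[:, 1:2] for p in forest.predict_proba(X)])

def grow_cascade(X_tr, y_tr, X_va, y_va, max_layers=10):
    """Measure-aware layer growth: add layers while the chosen measure
    improves on held-out data; otherwise stop and keep the best cascade."""
    feat_tr, feat_va = X_tr, X_va
    best_score, cascade = -np.inf, []
    for _ in range(max_layers):
        forests = [RandomForestClassifier(n_estimators=100).fit(feat_tr, y_tr),
                   ExtraTreesClassifier(n_estimators=100).fit(feat_tr, y_tr)]
        # Layer representation = concatenated per-label probabilities.
        # (Real deep forests use cross-validated predictions here; this
        # in-sample shortcut keeps the sketch short.)
        rep_tr = np.hstack([label_probs(f, feat_tr) for f in forests])
        rep_va = np.hstack([label_probs(f, feat_va) for f in forests])
        pred = np.mean([label_probs(f, feat_va) for f in forests], axis=0) > 0.5
        score = f1_score(y_va, pred.astype(int), average='macro')
        if score <= best_score:  # measure stopped improving: truncate here
            break
        best_score, cascade = score, cascade + [forests]
        # Next layer's input: original features augmented with the representation.
        feat_tr = np.hstack([X_tr, rep_tr])
        feat_va = np.hstack([X_va, rep_va])
    return cascade, best_score
```

Swapping `f1_score` for a different measure function changes the stopping criterion accordingly, which is the point of making the growth measure-aware.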
Given a publicly available pool of machine learning models constructed for various tasks, when a user plans to build a model for her own machine learning application, is it possible to build upon models in the pool so that the previous effort on these existing models can be reused rather than starting from scratch? A grand challenge here is how to find models that are helpful for the current application without accessing the raw training data of the models in the pool. In this paper, we present a two-phase framework. In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as its specification. In the deployment phase, the relatedness between the current task and the pre-trained models is measured based on the RKME specifications. Theoretical results and extensive experiments validate the effectiveness of our approach.
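As a rough illustration of the two phases, the sketch below builds a reduced kernel mean embedding with an RBF kernel and then scores task relatedness by RKHS distance. The assumptions are mine, not the paper's: the reduced-set points Z are chosen by k-means (one simple choice), and the weights beta minimize the RKHS distance to the full empirical embedding (1/n) sum_i k(x_i, .), which has the closed form beta = K_zz^{-1} K_zx 1/n. Function names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def reduced_kme(X, m=16, gamma=1.0):
    """Compress X's kernel mean embedding into m weighted points (Z, beta).
    Only (Z, beta) is published as the model's specification, never X itself."""
    Z = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
    K_zz = rbf_kernel(Z, Z, gamma=gamma)
    K_zx = rbf_kernel(Z, X, gamma=gamma)
    # Least-squares weights; small ridge term for numerical stability.
    beta = np.linalg.solve(K_zz + 1e-8 * np.eye(m), K_zx.mean(axis=1))
    return Z, beta

def rkme_distance(X_task, Z, beta, gamma=1.0):
    """Squared RKHS distance between the current task's empirical embedding
    and a stored RKME; a smaller value suggests a more related model."""
    n = len(X_task)
    self_term = rbf_kernel(X_task, gamma=gamma).sum() / n**2
    spec_term = beta @ rbf_kernel(Z, Z, gamma=gamma) @ beta
    cross_term = beta @ rbf_kernel(Z, X_task, gamma=gamma).mean(axis=1)
    return self_term + spec_term - 2 * cross_term
```

In deployment, one would compute `rkme_distance` against every specification in the pool and favor the models with the smallest distances.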
The life span of a machine learning model is often short, and a large number of models are wasted because they can be applied only to a specific task. However, a well-designed, carefully trained model contains knowledge learned from its task, which may be more concise than the training data. Furthermore, when we have no access to the training data, the trained model is the last remaining source of information. This study introduces a framework that reuses existing models trained on other tasks to help improve the model for the current task, especially when limited data is available for the current task. The framework incorporates high-level domain knowledge to combine existing models while treating them as black boxes, so that it remains applicable to arbitrarily complex models. Experiments applying the framework to practical problems demonstrate that reusing existing models can improve performance on the current task.

Keywords: machine learning, model reuse, domain knowledge, environment change, learnware

Xi-Zhu WU was born in 1991. He received his BSc degree in Computer Science from the Kuangyaming Honor School of Nanjing University in 2014, was admitted to the MSc program at Nanjing University without entrance examination in 2014, and transferred to the Ph.D. program in 2016. His research interests include machine learning and data mining.

Zhi-Hua ZHOU was born in 1973. He received his Ph.D. degree in Computer Science from Nanjing University, China, in 2000, and is currently a professor at Nanjing University. His research interests include artificial intelligence, machine learning, and data mining.
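One simple way to combine frozen black-box models with only a small current-task sample is to fit non-negative mixture weights over their predictions. The sketch below is a minimal instantiation of that idea under my own assumptions (a regression setting, models exposing only a `predict` method); it deliberately omits the paper's use of domain knowledge to select and adapt models.

```python
import numpy as np
from scipy.optimize import nnls

def fit_reuse_weights(models, X_few, y_few):
    """Fit non-negative combination weights over frozen black-box models
    using only the small labeled sample from the current task."""
    # Each model is queried purely through its predict interface (black box).
    P = np.column_stack([m.predict(X_few) for m in models])
    w, _ = nnls(P, y_few)            # least squares subject to w >= 0
    return w / max(w.sum(), 1e-12)   # normalize to a convex combination

def reuse_predict(models, w, X):
    """Prediction of the reused ensemble on new current-task inputs."""
    P = np.column_stack([m.predict(X) for m in models])
    return P @ w
```

Because the combiner touches only predictions, the same scheme works no matter how complex each reused model is internally, which is the appeal of the black-box treatment.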
Background: Postoperative cognitive dysfunction (POCD) is the progressive deterioration of cognitive function after surgery. The mechanism underlying the development of POCD is unclear, although previous studies have suggested that neuroinflammation is a major contributor. The purpose of this study was to observe the effects of preoperative pain on inflammatory factors and neuronal apoptosis in the hippocampus. Methods: Cognitive function was evaluated by the Morris water maze (MWM), and the expression levels of the pro-inflammatory cytokines IL-6, IL-1β, and TNF-α were measured on the 1st, 3rd, and 7th days after surgery. The levels of ACh, cAMP, PKA, and GABAA in the hippocampus were measured at the same time points. Results: Rats that experienced preoperative pain exhibited impaired learning and memory after surgery (P < 0.001). Moreover, rats in the preoperative pain + surgery group exhibited increased neuronal apoptosis compared with rats in the surgery group. On the 1st, 3rd, and 7th days after surgery, the expression of IL-1β, IL-6, and TNF-α in the pain + surgery group was increased compared with that in the surgery group (P < 0.001). Furthermore, the expression of key proteins, including ACh, cAMP, PKA, and GABAA, was decreased in the pain + surgery group compared with the surgery group. Conclusions: Preoperative pain may be a key risk factor for the development of POCD, acting by inhibiting the cholinergic anti-inflammatory pathway (the ACh-cAMP-PKA signalling pathway) and decreasing the expression of GABAA in the CNS.