Owing to the success of deep learning across many domains, artificial neural networks are currently among the most widely used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that, contrary to general practice, artificial neural networks should not have fully-connected layers either. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which, during learning, evolves an initial sparse topology (an Erdős–Rényi random graph) between two consecutive layers of neurons into a scale-free topology. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, reducing the number of parameters quadratically, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
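A minimal sketch of the prune-and-regrow rewiring loop described in this abstract, assuming a magnitude-based pruning rule; the function names, layer sizes, and hyperparameters (`epsilon`, `zeta`) are illustrative stand-ins, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_mask(n_in, n_out, epsilon=10):
    """Sparse binary mask whose density scales as epsilon*(n_in+n_out)/(n_in*n_out)."""
    density = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    return (rng.random((n_in, n_out)) < density).astype(float)

def evolve(weights, mask, zeta=0.3):
    """One rewiring step: prune the zeta fraction of smallest-magnitude
    active connections, then regrow the same number at random positions."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    if k == 0:
        return weights, mask
    # Prune: smallest |w| among currently active connections.
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:k]]
    mask.flat[pruned] = 0
    weights.flat[pruned] = 0.0
    # Regrow: k new connections at random inactive positions.
    inactive = np.flatnonzero(mask == 0)
    new = rng.choice(inactive, size=k, replace=False)
    mask.flat[new] = 1
    weights.flat[new] = rng.normal(0, 0.01, size=k)
    return weights, mask

mask = erdos_renyi_mask(784, 300)
weights = rng.normal(0, 0.1, (784, 300)) * mask
n_before = int(mask.sum())
weights, mask = evolve(weights, mask)   # in training, interleaved with SGD epochs
n_after = int(mask.sum())
```

Note that the number of connections is conserved across a rewiring step, which is what keeps the parameter count quadratically below that of a fully-connected layer throughout training.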
Unprecedentedly high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart-grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
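As an illustration of the multiple-actions extension mentioned above, the sketch below assumes the joint action space is factorised into independent per-device action heads, with epsilon-greedy selection applied to each device; the device count, power levels, and function names are hypothetical, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

N_DEVICES, N_LEVELS = 3, 4  # e.g. 3 flexible appliances, 4 power levels each

def select_actions(q_values, epsilon=0.1):
    """Epsilon-greedy selection per device.

    q_values has shape (N_DEVICES, N_LEVELS): one row of Q-estimates
    per device, so every device picks its own action each time step.
    """
    greedy = q_values.argmax(axis=1)                 # best level per device
    explore = rng.random(N_DEVICES) < epsilon        # which devices explore
    random_a = rng.integers(0, N_LEVELS, N_DEVICES)  # random fallback levels
    return np.where(explore, random_a, greedy)

# With epsilon=0 the selection is purely greedy on the Q-estimates.
q = rng.normal(size=(N_DEVICES, N_LEVELS))
actions = select_actions(q, epsilon=0.0)
```

Factorising the action space this way keeps the output head linear in the number of devices, instead of exponential as it would be for a single head over all joint actions.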
The edge computing paradigm has attracted considerable interest in the last few years as a valid alternative to standard Cloud-based approaches, reducing interaction latency and the huge volume of data flowing from IoT devices toward the Internet. In the near future, Edge-based approaches will be essential to support time-dependent applications in the Industry 4.0 context; thus, this paper proposes BodyEdge, a novel architecture well suited for human-centric applications in the context of the emerging healthcare industry. It consists of a tiny mobile client module and a high-performance Edge gateway supporting multi-radio and multi-technology communication to collect and locally process data coming from different scenarios; moreover, it also exploits the facilities made available by both private and public Cloud platforms to guarantee high flexibility, robustness, and an adaptive service level. The advantages of the designed software platform have been evaluated in terms of reduced transmitted data and processing time through a real implementation on different hardware platforms. The conducted study also highlighted the network conditions (data load and processing delay) in which BodyEdge is a valid and inexpensive solution for healthcare application scenarios.
Underwater images play a key role in ocean exploration, but often suffer from severe quality degradation due to light absorption and scattering in the water medium. Although major breakthroughs have been made recently in the general area of image enhancement and restoration, the applicability of new methods to improving the quality of underwater images has not been specifically assessed. In this paper, we review the image enhancement and restoration methods that tackle typical underwater image impairments, including some extreme degradations and distortions. Firstly, we introduce the key causes of quality reduction in underwater images, in terms of the underwater image formation model (IFM). Then, we review underwater restoration methods, considering both IFM-free and IFM-based approaches. Next, we present an experiment-based comparative evaluation of state-of-the-art IFM-free and IFM-based methods, considering also the prior-based parameter estimation algorithms of the IFM-based methods, using both subjective and objective analysis (the code used is freely available at https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration). Starting from this study, we pinpoint the key shortcomings of existing methods and draw recommendations for future research in this area. Our review of underwater image enhancement and restoration provides researchers with the necessary background to appreciate challenges and opportunities in this important field. INDEX TERMS Underwater image formation model, single underwater image enhancement, single underwater image restoration, background light estimation, transmission map estimation.
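For reference, the simplified underwater image formation model commonly used by IFM-based methods of this kind can be written as follows; the notation is one common convention, not necessarily that of every reviewed paper:

```latex
\[
  I_c(x) = J_c(x)\, t_c(x) + B_c \bigl(1 - t_c(x)\bigr),
  \qquad c \in \{R, G, B\},
\]
\[
  t_c(x) = e^{-\beta_c\, d(x)},
\]
```

where \(I_c\) is the observed intensity, \(J_c\) the scene radiance to be restored, \(B_c\) the background (veiling) light, \(t_c\) the transmission map, \(\beta_c\) the wavelength-dependent attenuation coefficient, and \(d(x)\) the scene depth. The background light and transmission map estimation problems named in the index terms correspond to estimating \(B_c\) and \(t_c\) from a single image.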
Measuring and predicting the user's Quality of Experience (QoE) of a multimedia stream is the first step towards improving and optimizing the provision of mobile streaming services. This enables us to better understand how Quality of Service (QoS) parameters affect service quality as it is actually perceived by the end user. In recent years this goal has been pursued by means of subjective tests and through the analysis of user feedback. Existing statistical techniques have led to poor accuracy (on the order of 70%) and an inability to evolve prediction models with the system's dynamics. In this paper, we propose a novel approach for building accurate and adaptive QoE prediction models using Machine Learning classification algorithms trained on subjective test data. These models can be used for real-time prediction of QoE and can be efficiently integrated into online learning systems that adapt the models to changes in the environment. Achieving accuracy above 90%, the classification algorithms become an indispensable component of a mobile multimedia QoE management system.
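A toy sketch of the core idea of mapping QoS features to a subjective QoE class with a Machine Learning classifier; the features, labels, and the k-nearest-neighbour choice here are illustrative stand-ins for the subjective-test data and the classification algorithms evaluated in the paper:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict a QoE class (e.g. a MOS score 1-5) for each query point
    by majority vote among the k nearest training samples (Euclidean)."""
    preds = []
    for x in X_query:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[counts.argmax()])
    return np.array(preds)

# Toy QoS features: [throughput_mbps, packet_loss_pct, jitter_ms]
X = np.array([[5.0, 0.1, 5.0],
              [4.5, 0.2, 8.0],
              [0.5, 3.0, 40.0],
              [0.8, 2.5, 35.0]])
y = np.array([5, 5, 1, 1])  # MOS labels from subjective tests (invented here)

# A well-provisioned stream should be classified near the top of the scale.
pred = knn_predict(X, y, np.array([[4.8, 0.15, 6.0]]), k=3)
```

Because such a classifier is retrained (or incrementally updated) as new labelled feedback arrives, the prediction model can track the system's dynamics, which fixed statistical fits cannot.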
High-density communications in wireless sensor networks (WSNs) demand new approaches to meet stringent energy and spectrum requirements. We turn to reinforcement learning, a prominent method in artificial intelligence, to design an energy-preserving MAC protocol with the aim of extending the network lifetime. Our QL-MAC protocol is derived from Q-learning and iteratively tweaks the MAC parameters through a trial-and-error process to converge to a low-energy state. This has the dual benefit of 1) solving the minimization problem without the need to predetermine the system model and 2) providing a protocol that self-adapts to topological and other external changes. QL-MAC self-adjusts the WSN node duty cycle, reducing energy consumption without detrimental effects on the other network parameters. This is achieved by adjusting the radio's sleeping and active periods based on traffic predictions and the transmission state of neighboring nodes. Our findings are corroborated by an extensive set of experiments carried out on off-the-shelf devices, alongside large-scale simulations. INDEX TERMS Wireless sensor network, artificial intelligence, reinforcement learning, energy-efficient network, medium access control.
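A tabular toy version of the Q-learning idea behind a duty-cycle adjustment of this kind, assuming a discretised traffic-level state and a sleep/active action; the state space, reward shape, and constants are invented for illustration and are not the protocol's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# States: predicted neighbourhood traffic (0=low, 1=medium, 2=high).
# Actions: 0 = radio sleeps this slot, 1 = radio stays active.
N_STATES, N_ACTIONS = 3, 2
ALPHA, GAMMA = 0.5, 0.9
Q = np.zeros((N_STATES, N_ACTIONS))

def reward(state, action):
    """Toy reward (assumption): sleeping saves energy, but is penalised
    when neighbour traffic is high and frames would be missed."""
    if action == 0:                       # sleep
        return 1.0 if state == 0 else -float(state)
    return 0.5                            # active: moderate energy cost

def step(state, epsilon=0.2):
    """One trial-and-error iteration with epsilon-greedy exploration."""
    a = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(Q[state].argmax())
    r = reward(state, a)
    next_state = rng.integers(N_STATES)   # traffic evolves (toy model)
    # Standard Q-learning update.
    Q[state, a] += ALPHA * (r + GAMMA * Q[next_state].max() - Q[state, a])
    return next_state

s = 0
for _ in range(2000):
    s = step(s)
```

After training, the learned policy sleeps under low traffic and stays active under high traffic, i.e. the duty cycle adapts to traffic without any predetermined system model, which is the model-free property the abstract highlights.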