With the breakthroughs in deep learning, recent years have witnessed a boom in artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontier to the network edge so as to fully unleash the potential of the edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence, is beginning to receive a tremendous amount of interest. However, research on edge intelligence is still in its infancy, and a dedicated venue for exchanging the recent advances of edge intelligence is highly desired by both the computer systems and artificial intelligence communities. To this end, we conduct a comprehensive survey of the recent research efforts on edge intelligence. Specifically, we first review the background and motivation for artificial intelligence running at the network edge. We then provide an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model training and inference at the network edge. Finally, we discuss future research opportunities on edge intelligence. We believe that this survey will elicit escalating attention, stimulate fruitful discussions, and inspire further research ideas on edge intelligence.
Understanding the diversity of cell types in the brain has been an enduring challenge and requires detailed characterization of individual neurons in multiple dimensions. To profile morpho-electric properties of mammalian neurons systematically, we established a single-cell characterization pipeline using standardized patch clamp recordings in brain slices and biocytin-based neuronal reconstructions. We built a publicly accessible online database, the Allen Cell Types Database, to display these data sets. Intrinsic physiological and morphological properties were measured from over 1,800 neurons from the adult laboratory mouse visual cortex. Quantitative features were used to classify neurons into distinct types using unsupervised methods. We establish a taxonomy of morphologically- and electrophysiologically-defined cell types for this region of cortex, with 17 e-types and 35 m-types, as well as an initial correspondence with previously defined transcriptomic cell types using the same transgenic mouse lines.
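To make the "classification from quantitative features using unsupervised methods" step concrete, the sketch below clusters a hypothetical per-cell feature table with standard tooling. The file name, feature choices, and cluster count are placeholders, not the Allen Institute pipeline's actual implementation.

```python
# Illustrative sketch only: the actual pipeline uses its own feature extraction
# and clustering choices; file name, features and cluster count are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Hypothetical per-cell feature table: rows = cells, columns = quantitative
# electrophysiological features (e.g. resting potential, input resistance,
# rheobase, spike width), exported as CSV with a header row.
features = np.loadtxt("cell_features.csv", delimiter=",", skiprows=1)

# Standardize so each feature contributes comparably to the distance metric.
z = StandardScaler().fit_transform(features)

# Unsupervised hierarchical clustering into a chosen number of putative types.
labels = AgglomerativeClustering(n_clusters=17, linkage="ward").fit_predict(z)

for k in range(labels.max() + 1):
    print(f"cluster {k}: {np.sum(labels == k)} cells")
```

Ward-linkage hierarchical clustering is just one reasonable choice here; the key point is that the cell types emerge from the standardized feature vectors without supervision.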
Dendritic and axonal morphology reflects the input and output of neurons and is a defining feature of neuronal types [1,2], yet our knowledge of its diversity remains limited. Here, to systematically examine complete single-neuron morphologies on a brain-wide scale, we established a pipeline encompassing sparse labelling, whole-brain imaging, reconstruction, registration and analysis. We fully reconstructed 1,741 neurons from cortex, claustrum, thalamus, striatum and other brain regions in mice. We identified 11 major projection neuron types with distinct morphological features and corresponding transcriptomic identities. Extensive projectional diversity was found within each of these major types, on the basis of which some types were clustered into more refined subtypes. This diversity follows a set of generalizable principles that govern long-range axonal projections at different levels, including molecular correspondence, divergent or convergent projection, axon termination pattern, regional specificity, topography, and individual cell variability. Although clear concordance with transcriptomic profiles is evident at the level of major projection type, fine-grained morphological diversity often does not readily correlate with transcriptomic subtypes derived from unsupervised clustering, highlighting the need for single-cell cross-modality studies. Overall, our study demonstrates the crucial need for quantitative description of complete single-cell anatomy in cell-type classification, as single-cell morphological diversity reveals a plethora of ways in which different cell types and their individual members may contribute to the configuration and function of their respective circuits.
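As an illustration of how long-range projection patterns can be quantified per neuron, the following sketch builds a neuron-by-region projection-strength matrix from hypothetical per-region axon lengths. It assumes arbors are already registered to a reference atlas and is not the paper's actual analysis code; neuron IDs, region names and lengths are made up.

```python
# Minimal sketch, assuming registered arbors: "axon_length_by_region" maps each
# neuron ID to {region_name: axon length in um}. Values below are placeholders.
import numpy as np

axon_length_by_region = {
    "neuron_001": {"VISp": 3200.0, "VISl": 850.0, "SCs": 120.0},
    "neuron_002": {"CP": 5400.0, "GPe": 300.0},
}

regions = sorted({r for d in axon_length_by_region.values() for r in d})
neurons = sorted(axon_length_by_region)

# Neuron x region projection-strength matrix, row-normalized so each row
# describes how a neuron distributes its axon across target regions.
P = np.zeros((len(neurons), len(regions)))
for i, n in enumerate(neurons):
    for j, r in enumerate(regions):
        P[i, j] = axon_length_by_region[n].get(r, 0.0)
P = P / P.sum(axis=1, keepdims=True)

# A simple divergence measure: regions receiving >5% of a neuron's axon.
divergence = (P > 0.05).sum(axis=1)
for n, d in zip(neurons, divergence):
    print(n, "projects substantially to", d, "region(s)")
```

Matrices of this kind are what subsequently feed clustering into projection types and subtypes, and simple summaries such as the divergence count capture the divergent-versus-convergent distinction described above.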
Open-Source 3D Visualization-Assisted Analysis (Vaa3D) is a software platform for the visualization and analysis of large-scale multidimensional images. In this protocol we describe how to use several popular features of Vaa3D, including (i) multidimensional image visualization, (ii) 3D image object generation and quantitative measurement, (iii) 3D image comparison, fusion and management, (iv) visualization of heterogeneous images and respective surface objects and (v) extension of Vaa3D functions using its plug-in interface. We also briefly demonstrate how to integrate these functions for complicated applications of microscopic image visualization and quantitative analysis using three exemplar pipelines, including an automated pipeline for image filtering, segmentation and surface generation; an automated pipeline for 3D image stitching; and an automated pipeline for neuron morphology reconstruction, quantification and comparison. Once a user is familiar with Vaa3D, visualization usually runs in real time and analysis takes less than a few minutes for a simple data set.
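To give a flavor of the morphology quantification-and-comparison step in such a pipeline, here is a standalone sketch (not Vaa3D's own plugin code) that reads standard SWC reconstructions and compares two neurons by simple morphometric features; the file names are hypothetical.

```python
# Standalone illustration: reads SWC files (columns: id, type, x, y, z, radius,
# parent) and computes a few basic morphometric features per reconstruction.
import numpy as np

def load_swc(path):
    rows = []
    with open(path) as f:
        for line in f:
            if line.strip() and not line.startswith("#"):
                rows.append([float(v) for v in line.split()])
    return np.array(rows)

def morphometry(swc):
    ids = swc[:, 0].astype(int)
    parents = swc[:, 6].astype(int)
    xyz = {int(i): swc[k, 2:5] for k, i in enumerate(ids)}
    # Total cable length: sum of parent-child segment lengths.
    length = sum(np.linalg.norm(xyz[i] - xyz[p])
                 for i, p in zip(ids, parents) if p != -1)
    # Bifurcations: nodes referenced as a parent by more than one child.
    child_counts = {}
    for p in parents:
        if p != -1:
            child_counts[p] = child_counts.get(p, 0) + 1
    bifurcations = sum(1 for c in child_counts.values() if c > 1)
    return {"total_length_um": length,
            "n_bifurcations": bifurcations,
            "n_nodes": len(ids)}

# Hypothetical file names; any pair of SWC reconstructions would do.
for f in ("neuron_a.swc", "neuron_b.swc"):
    print(f, morphometry(load_swc(f)))
```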
As a key technology enabling Artificial Intelligence (AI) applications in the 5G era, Deep Neural Networks (DNNs) have quickly attracted widespread attention. However, it is challenging to run computation-intensive DNN-based tasks on mobile devices due to their limited computation resources. Worse still, traditional cloud-assisted DNN inference is heavily hindered by significant wide-area network latency, leading to poor real-time performance and low quality of user experience. To address these challenges, in this paper we propose Edgent, a framework that leverages edge computing for collaborative DNN inference through device-edge synergy. Edgent exploits two design knobs: (1) DNN partitioning, which adaptively partitions computation between device and edge in order to coordinate the powerful cloud resource and the proximal edge resource for real-time DNN inference; and (2) DNN right-sizing, which further reduces computing latency by exiting inference early at an appropriate intermediate DNN layer. In addition, considering potential network fluctuation in real-world deployment, Edgent is designed to specialize for both static and dynamic network environments. Specifically, in a static environment where the bandwidth changes slowly, Edgent derives the best configurations with the assistance of regression-based prediction models, while in a dynamic environment where the bandwidth varies dramatically, Edgent generates the best execution plan through an online change point detection algorithm that maps the current bandwidth state to the optimal configuration. We implement an Edgent prototype based on a Raspberry Pi and a desktop PC, and extensive experimental evaluations demonstrate Edgent's effectiveness in enabling on-demand low-latency edge intelligence.
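The partition-point idea can be sketched in a few lines: given per-layer device/edge execution times and intermediate output sizes, pick the cut that minimizes estimated end-to-end latency under the current bandwidth. The numbers below are made up for illustration; the real Edgent derives such profiles from regression-based latency models and additionally selects an early-exit point, which is omitted here for brevity.

```python
# Minimal sketch of partition-point selection with hypothetical per-layer profiles.
# device_ms[i] / edge_ms[i]: execution time of layer i on the device / edge server.
# out_kbit[i]: size of layer i's output, uploaded if we cut after layer i
# (out_kbit[0] is the raw input, i.e. pure edge-side execution).
device_ms = [0.0, 60.0, 120.0, 180.0, 40.0, 20.0]
edge_ms   = [0.0,  3.0,   6.0,   9.0,  2.0,  1.0]
out_kbit  = [4800.0, 2400.0, 1200.0, 640.0, 160.0, 32.0]

def best_partition(bandwidth_kbps):
    """Return (cut_layer, latency_ms): layers 1..cut run on device, the rest on edge."""
    best = None
    for cut in range(len(device_ms)):
        upload_ms = out_kbit[cut] / bandwidth_kbps * 1000.0
        latency = sum(device_ms[:cut + 1]) + upload_ms + sum(edge_ms[cut + 1:])
        if best is None or latency < best[1]:
            best = (cut, latency)
    return best

for bw in (200.0, 2000.0, 20000.0):  # kbit/s, purely illustrative
    cut, lat = best_partition(bw)
    print(f"bandwidth {bw:>8.0f} kbit/s -> cut after layer {cut}, est. {lat:.1f} ms")
```

At low bandwidth the upload term dominates and everything stays on the device; as bandwidth grows, the optimal cut moves toward earlier layers and more work is offloaded to the edge.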
Federated Learning (FL) has been proposed as an appealing approach to handle the data privacy issue of mobile devices, compared to conventional machine learning at the remote cloud that requires uploading raw user data. By leveraging edge servers as intermediaries to perform partial model aggregation in proximity and relieve core-network transmission overhead, it offers great potential for low-latency and energy-efficient FL. Hence we introduce a novel Hierarchical Federated Edge Learning (HFEL) framework in which model aggregation is partially migrated from the cloud to edge servers. We further formulate a joint computation and communication resource allocation and edge association problem for device users under the HFEL framework to achieve global cost minimization. To solve the problem, we propose an efficient resource scheduling algorithm for the HFEL framework. The problem can be decomposed into two subproblems: resource allocation given a scheduled set of devices for each edge server, and edge association of device users across all the edge servers. Given the optimal policy of the convex resource allocation subproblem for a set of devices under a single edge server, an efficient edge association strategy can be achieved through an iterative global cost reduction adjustment process, which is shown to converge to a stable system point. Extensive performance evaluations demonstrate that our HFEL framework outperforms the benchmark schemes in global cost saving and achieves better training performance than conventional federated learning.
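A toy sketch of the iterative edge-association idea follows; it is not the paper's exact algorithm. Each device repeatedly switches to the edge server that lowers its own cost, where the cost is a hypothetical base cost inflated by server load, and the sweeps stop once no device wants to move (a stable point).

```python
# Toy best-response sketch of edge association; costs and penalty are made up.
import random

N_DEVICES, N_EDGES = 12, 3
random.seed(0)
# Hypothetical per-(device, server) base cost standing in for the combined
# computation + communication cost of one local training round.
base_cost = [[random.uniform(1.0, 5.0) for _ in range(N_EDGES)]
             for _ in range(N_DEVICES)]

def device_cost(d, e, assignment):
    # Load on server e if device d were associated with it (including d itself).
    load = sum(1 for i, a in enumerate(assignment) if a == e and i != d) + 1
    return base_cost[d][e] * (1.0 + 0.2 * load)   # made-up congestion penalty

assignment = [random.randrange(N_EDGES) for _ in range(N_DEVICES)]

for _ in range(100):                 # best-response sweeps; stop at a stable point
    changed = False
    for d in range(N_DEVICES):
        best_e = min(range(N_EDGES), key=lambda e: device_cost(d, e, assignment))
        if best_e != assignment[d]:
            assignment[d], changed = best_e, True
    if not changed:
        break

total = sum(device_cost(d, assignment[d], assignment) for d in range(N_DEVICES))
print("association:", assignment, f"total cost: {total:.2f}")
```

In the actual HFEL framework the per-device cost comes from the optimal solution of the convex resource allocation subproblem on each edge server rather than a fixed table, but the alternating "allocate, then re-associate" structure is the same.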