edge devices. The new breed of intelligent devices and high-stakes applications (drones, augmented/virtual reality, autonomous systems, and so on) calls for a paradigm shift toward distributed, low-latency, and reliable ML at the wireless network edge (referred to as edge ML). In edge ML, training data are unevenly distributed over a large number of edge nodes, each of which has access to only a tiny fraction of the data. Moreover, training and inference are carried out collectively over wireless links, where edge devices communicate and exchange their learned models (not their private data). This first-of-its-kind article explores the key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, and theoretical and technical enablers stemming from a wide range of mathematical disciplines. Finally, several case studies pertaining to various high-stakes applications are presented to demonstrate the effectiveness of edge ML in unlocking the full potential of 5G and beyond.
This letter proposes a blockchained federated learning (BlockFL) architecture in which local learning model updates are exchanged and verified via blockchain. This enables on-device machine learning without any centralized training data or coordination, by utilizing blockchain's consensus mechanism. Moreover, we analyze an end-to-end latency model of BlockFL and characterize the optimal block generation rate by jointly considering communication, computation, and consensus delays.
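As a rough illustration of the tradeoff behind the optimal block generation rate, the toy model below balances block-waiting delay against forking overhead. All delay terms and constants are illustrative assumptions, not the letter's actual derivation:

```python
# Hedged sketch: a toy per-round latency model for a BlockFL-style system.
# comm/comp are fixed communication and computation delays; the consensus
# delay is split into waiting for a block (~ 1/rate) and forking overhead
# that grows with the block generation rate. Constants are assumptions.

def e2e_latency(block_rate, comm=0.1, comp=0.2, fork_coef=0.05):
    """End-to-end latency per round as a function of block generation rate."""
    waiting = 1.0 / block_rate          # expected wait until the next block
    forking = fork_coef * block_rate    # more frequent blocks -> more forks
    return comm + comp + waiting + forking

# Numerically locate the latency-minimizing block generation rate on a grid.
rates = [0.1 * k for k in range(1, 200)]
best_rate = min(rates, key=e2e_latency)
```

Under these toy constants the minimizer sits where waiting and forking delays balance, i.e., near sqrt(1/fork_coef); the point of the sketch is only that too low a rate wastes time waiting for blocks while too high a rate wastes rounds on forks.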
Ultra-reliable low latency communication (URLLC) is an important new feature brought by 5G, with the potential to support a vast set of applications that rely on mission-critical links. In this article, we first discuss the principles for supporting URLLC from the perspective of the traditional assumptions and models applied in communication/information theory. We then discuss how these principles are applied in various elements of the system design, such as the use of various diversity sources and the design of packets and access protocols. The key messages are that there is a need to optimize the transmission of signaling information, as well as a need for lean use of the various sources of diversity.
The forthcoming 5G cellular network is expected to overlay millimeter-wave (mmW) transmissions onto the incumbent microwave (µW) architecture, so mmW and µW resource management should be harmonized. This paper aims at maximizing the overall downlink (DL) rate subject to a minimum uplink (UL) rate constraint, and concludes that, under time-division duplex (TDD) mmW operation, mmW tends to focus on DL transmissions while µW is prioritized for complementing the UL. Such UL dedication of µW results from the limited use of mmW UL bandwidth, due to excessive power consumption and/or a high peak-to-average power ratio (PAPR) at mobile users. To further relieve this UL bottleneck, we propose mmW UL decoupling, which allows each legacy µW base station (BS) to receive mmW signals. Its impact on mm-µW resource management is captured in a tractable way by virtue of a novel closed-form mm-µW spectral efficiency (SE) derivation. In an ultra-dense cellular network (UDN), our derivation verifies that mmW (or µW) SE is a logarithmic function of the BS-to-user density ratio. This strikingly simple yet practically valid analysis is enabled by exploiting stochastic geometry in conjunction with real three-dimensional (3D) building blockage statistics from Seoul, Korea.
In addition to imaging the lymphatics and detecting various types of lymphatic leakage, lymphangiography is a therapeutic option for patients with chylothorax, chylous ascites, and lymphatic fistula. Percutaneous thoracic duct embolization, comprising transabdominal catheterization of the cisterna chyli or thoracic duct and subsequent embolization of the thoracic duct, is an alternative to surgical ligation of the thoracic duct. In this pictorial review, we present the detailed technique, clinical applications, and complications of lymphangiography and thoracic duct embolization.
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond. By imbuing intelligence into the network edge, edge nodes can proactively carry out decision-making, and thereby react to local environmental changes and disturbances while experiencing zero communication latency. To achieve this goal, it is essential to cater for high ML inference accuracy at scale under time-varying channel and network dynamics, by continuously exchanging fresh data and ML model updates in a distributed way. Taming this new kind of data traffic boils down to improving the communication efficiency of distributed learning by optimizing communication payload types, transmission techniques, and scheduling, as well as ML architectures, algorithms, and data processing methods. To this end, this article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
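A concrete instance of exchanging ML model updates rather than raw data is federated averaging, a common baseline for communication-efficient distributed learning. The sketch below uses plain-Python parameter vectors and is illustrative, not an algorithm from the article:

```python
# Minimal sketch of one model-update exchange round: each device uploads
# its local parameter vector, and the server aggregates them into a
# global model by (weighted) averaging.

def federated_average(local_models, weights=None):
    """Aggregate per-device parameter vectors into one global model."""
    n = len(local_models)
    weights = weights or [1.0 / n] * n          # default: equal weights
    dim = len(local_models[0])
    return [sum(w * model[i] for w, model in zip(weights, local_models))
            for i in range(dim)]

# Two devices, each holding a 2-parameter local model.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]])  # -> [2.0, 3.0]
```

Only the parameter vectors cross the wireless link, which is the communication-payload choice the article contrasts with alternatives such as exchanging model outputs (distillation) or compressed updates.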
SIGNIFICANCE AND MOTIVATION
The pursuit of extremely stringent latency and reliability guarantees is essential in the fifth generation (5G) communication system and beyond [1], [2]. In a wirelessly automated factory, the remote control of assembly robots should provision the same level of target latency and reliability offered by existing wired factory systems. To this end, for instance, control packets should be delivered within 1 ms with 99.99999% reliability [3]-[5]. Things become even more challenging in the emerging mission-critical applications beyond 5G. A prime example is the forthcoming non-terrestrial networks consisting of a massive constellation of low earth orbit (LEO) satellites [6]-[11]. Given such
This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
Index Terms: Distributed machine learning, on-device learning, federated learning, federated distillation, uplink-downlink asymmetry.
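The uplink privacy mechanism can be sketched as a linear mixup of two local samples, so that no raw sample leaves the device. This is a minimal one-dimensional sketch under assumed vector inputs; Mix2FLD additionally applies an inverse mixing across different devices at the server, which is omitted here:

```python
import random

def mixup(x1, x2, lam=None):
    """Linearly mix two feature vectors: lam * x1 + (1 - lam) * x2.

    With a random mixing ratio lam in (0, 1), the uploaded sample reveals
    neither raw sample on its own.
    """
    lam = random.uniform(0.0, 1.0) if lam is None else lam
    return [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]

# A device uploads the mixed sample instead of either raw sample.
uploaded = mixup([1.0, 1.0], [3.0, 3.0], lam=0.5)  # -> [2.0, 2.0]
```

Because the mixing is linear and the ratios are known to the server, mixed samples from different devices can later be recombined (inversely mixed) without ever exposing an individual device's raw data.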