The Internet of Audio Things (IoAuT) is an emerging research field positioned at the intersection of the Internet of Things, sound and music computing, artificial intelligence, and human-computer interaction. The IoAuT refers to networks of computing devices embedded in physical objects (Audio Things) dedicated to the production, reception, analysis, and understanding of audio in distributed environments. Audio Things, such as nodes of wireless acoustic sensor networks, are connected by an infrastructure that enables multidirectional communication, both locally and remotely. In this paper, we first review the state of the art of this field, then we present a vision for the IoAuT and its motivations. In the proposed vision, the IoAuT enables the connection of digital and physical domains by means of appropriate information and communication technologies, fostering novel applications and services based on auditory information. The ecosystems associated with the IoAuT include interoperable devices and services that connect humans and machines to support human-human and human-machine interactions. We discuss the challenges and implications of this field, which lead to future research directions on the topics of privacy, security, design of Audio Things, and methods for the analysis and representation of audio-related information.
This paper addresses the problem of distributed training of a machine learning model over the nodes of a wireless communication network. Existing distributed training methods are not explicitly designed for these networks, which usually have physical limitations on bandwidth, delay, or computation, thus hindering or even blocking the training task. To address this problem, we consider a general class of algorithms where the training is performed by iterative distributed computations across the nodes. We assume that the nodes have some background traffic and communicate using the slotted-ALOHA protocol. We propose an iteration-termination criterion to investigate the trade-off between achievable training performance and the overall cost of running the algorithms. We show that, given a total running budget, the training performance becomes worse as either the background communication traffic or the dimension of the training problem increases. We conclude that a co-design of distributed optimization algorithms and communication protocols is essential for the success of machine learning over wireless networks and edge computing.
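The trade-off the abstract describes — fewer useful training iterations as background traffic grows, for a fixed slot budget — can be illustrated with a back-of-the-envelope sketch. All parameters below (transmit probability, node counts, budget) are hypothetical, and the single-channel slotted-ALOHA success model is a textbook simplification, not the paper's actual analysis:

```python
def slot_success_prob(p, n_nodes, q_bg, n_bg):
    # Slotted ALOHA: a tagged training node succeeds in a slot if it
    # transmits, no other training node transmits, and no background
    # node transmits in the same slot.
    return p * (1 - p) ** (n_nodes - 1) * (1 - q_bg) ** n_bg

def iterations_within_budget(budget_slots, p, n_nodes, q_bg, n_bg):
    # One training iteration needs one successful uplink per node;
    # the expected number of slots per success is 1 / success probability.
    exp_slots_per_iter = n_nodes / slot_success_prob(p, n_nodes, q_bg, n_bg)
    return int(budget_slots // exp_slots_per_iter)

quiet = iterations_within_budget(10_000, p=0.2, n_nodes=5, q_bg=0.0, n_bg=10)
busy = iterations_within_budget(10_000, p=0.2, n_nodes=5, q_bg=0.1, n_bg=10)
assert quiet > busy  # heavier background traffic -> fewer iterations per budget
```

With these toy numbers, raising each background node's transmit probability from 0 to 0.1 cuts the number of completed iterations by roughly a factor of three, which mirrors the qualitative conclusion of the abstract.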
Millimeter-wave (mmWave) networks rely on directional transmissions, in both the control plane and the data plane, to overcome severe path loss. Nevertheless, the use of narrow beams complicates the initial cell-search procedure, where we lack sufficient information for beamforming. In this paper, we investigate the feasibility of random beamforming for cell search. We develop a stochastic geometry framework to analyze the performance in terms of failure probability and expected latency of cell search. We also compare our results with the naive but widely used exhaustive search scheme. Numerical results show that, for a given discovery failure probability, random beamforming can substantially reduce latency compared with exhaustive search, especially in dense networks. Our work demonstrates that developing complex cell-discovery algorithms may be unnecessary in dense mmWave networks and thus sheds new light on mmWave system design.
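The intuition behind the abstract's density argument can be sketched with a simple geometric-latency model. This is a hypothetical toy, not the paper's stochastic geometry framework: we assume each of the k nearby base stations picks one of n codebook beams uniformly at random per slot, and discovery succeeds as soon as any random beam covers the user.

```python
def random_bf_expected_latency(n_beams, k_bs):
    # Per slot, each of k_bs base stations picks one of n_beams directions
    # uniformly at random; the user is discovered if at least one beam hits.
    p_hit = 1 - (1 - 1 / n_beams) ** k_bs
    return 1 / p_hit  # mean of a geometric distribution, in slots

def exhaustive_latency(n_beams):
    # Exhaustive search sweeps the whole codebook once: n_beams slots
    # in the worst case.
    return n_beams

sparse = random_bf_expected_latency(n_beams=64, k_bs=2)
dense = random_bf_expected_latency(n_beams=64, k_bs=20)
assert dense < sparse < exhaustive_latency(64)
```

As the base-station density k grows, the per-slot hit probability approaches one and the expected discovery latency of random beamforming collapses toward a single slot, whereas exhaustive search still pays for the full codebook sweep — the same qualitative effect the abstract reports for dense networks.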
This paper investigates efficient distributed training of a Federated Learning (FL) model over a network of wireless devices. The communication iterations of the distributed training algorithm may be substantially degraded or even blocked by the effects of the devices' background traffic, packet losses, congestion, or latency. We abstract the communication-computation impacts as an 'iteration cost' and propose a cost-aware causal FL algorithm (FedCau) to tackle this problem. We propose an iteration-termination method that trades off the training performance and networking costs. We apply our approach when clients use the slotted-ALOHA, carrier-sense multiple access with collision avoidance (CSMA/CA), and orthogonal frequency-division multiple access (OFDMA) protocols. We show that, given a total cost budget, the training performance degrades as either the background communication traffic or the dimension of the training problem increases. Our results demonstrate the importance of proactively designing optimal cost-efficient stopping criteria to avoid paying unnecessary communication-computation costs for only a marginal FL training improvement. We validate our method by training and testing FL over the MNIST dataset. Finally, we apply our approach to existing communication-efficient FL methods from the literature, achieving further efficiency. We conclude that cost-efficient stopping criteria are essential for the success of practical FL over wireless networks.
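The flavor of a cost-aware stopping rule can be shown with a toy stand-in: stop the first time the marginal loss reduction no longer pays for one more iteration's communication cost. This is a hypothetical simplification for illustration only, not FedCau itself; the geometric loss curve and the threshold rule are assumptions.

```python
def cost_aware_stop(losses, cost_per_iter, weight):
    # Stop before iteration t the first time the marginal loss reduction
    # losses[t-1] - losses[t] drops below the weighted per-iteration cost.
    for t in range(1, len(losses)):
        if losses[t - 1] - losses[t] < weight * cost_per_iter:
            return t - 1
    return len(losses) - 1

# Geometrically decaying loss as a stand-in for an FL training curve.
losses = [0.8 ** t for t in range(50)]
stop_cheap = cost_aware_stop(losses, cost_per_iter=1.0, weight=0.001)
stop_costly = cost_aware_stop(losses, cost_per_iter=1.0, weight=0.05)
assert stop_costly < stop_cheap  # pricier communication -> stop earlier
```

Raising the effective per-iteration cost shifts the stopping point much earlier, capturing the abstract's point that late iterations buy only marginal training improvement at full communication price.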
We summarize our recent findings (Authors, 2017), where we proposed a framework for learning a Kolmogorov model for a collection of binary random variables. More specifically, we derive conditions that causally link outcomes of specific random variables, and extract valuable relations from the data. We also propose an algorithm for computing the model and show its first-order optimality, despite the combinatorial nature of the learning problem. We apply the proposed algorithm to recommendation systems, although it is applicable to other scenarios. We believe that this work is a significant step toward interpretable machine learning.
Inter-operator spectrum sharing in millimeter-wave bands has the potential to substantially increase spectrum utilization and provide larger bandwidth to individual user equipment, at the expense of increased inter-operator interference. Unfortunately, traditional model-based spectrum sharing schemes make idealistic assumptions about inter-operator coordination mechanisms in terms of latency and protocol overhead, while being sensitive to missing channel state information. In this paper, we propose hybrid model-based and data-driven multi-operator spectrum sharing mechanisms, which incorporate model-based beamforming and user association complemented by data-driven model refinements. Our solution has the same computational complexity as a model-based approach but has the major advantage of substantially less signaling overhead. We discuss how limited channel state information and quantized codebook-based beamforming affect the learning and the spectrum sharing performance. We show that the proposed hybrid sharing scheme significantly improves spectrum utilization under realistic assumptions on inter-operator coordination and channel state information acquisition.
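The hybrid "model-based plus data-driven refinement" idea can be sketched in miniature: start from a physics-based prediction and learn only the residual that the model misses. Everything here is hypothetical (the inverse-square path-loss model, the constant bias, the mean-residual fit) and is meant only to convey the structure, not the paper's actual mechanism.

```python
def model_based_interference(distance_m):
    # Simplified inverse-square path-loss model (hypothetical).
    return 1.0 / distance_m ** 2

TRUE_BIAS = 0.05  # unmodeled effect (e.g. blockage) the data should reveal

# "Measurements" at a few distances; noise-free to keep the sketch deterministic.
distances = [10.0, 25.0, 50.0, 100.0]
measured = [model_based_interference(d) + TRUE_BIAS for d in distances]

# Data-driven refinement: fit the mean residual and add it to the model.
residual = sum(m - model_based_interference(d)
               for m, d in zip(measured, distances)) / len(distances)

def hybrid_interference(distance_m):
    return model_based_interference(distance_m) + residual

assert abs(hybrid_interference(40.0) - (1.0 / 40.0 ** 2 + TRUE_BIAS)) < 1e-12
```

The appeal of the hybrid structure is that the learned component stays small and cheap — a correction term rather than a full end-to-end model — which is consistent with the abstract's claim of model-based computational complexity with reduced signaling overhead.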