Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by resource allocation in wireless networks that encourages a more fair (i.e., lower-variance) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
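For concreteness, the q-FFL objective can be written as follows (a sketch of the formulation as stated in the q-FFL paper, with m devices, local losses F_k, nonnegative weights p_k summing to one, and a tunable fairness parameter q ≥ 0; the notation is assumed here, not quoted from this abstract):

```latex
\min_{w}\; f_q(w) \;=\; \sum_{k=1}^{m} \frac{p_k}{q+1}\, F_k(w)^{q+1}
```

Setting q = 0 recovers the usual weighted-average federated objective, while larger q re-weights the optimization toward the devices with the highest loss, which is what pushes the accuracy distribution toward lower variance.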
Abstract: In diffusion-based molecular communications, messages can be conveyed via variations in the concentration of molecules in the medium. In this paper, we analyze the achievable capacity of information transmission from one node to another over a diffusion channel. We observe that, because of molecular diffusion in the medium, the channel possesses memory. We then model the memory of the channel by a two-step Markov chain and obtain the equations describing the capacity of the diffusion channel. By performing a numerical analysis, we obtain the maximum achievable rate for different levels of transmitter power, i.e., the molecule production rate.
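The paper's numerical analysis is not reproduced here, but the following minimal Python sketch shows the standard Blahut–Arimoto iteration that such a capacity computation typically builds on. The 2x2 transition matrix is a made-up stand-in (a memoryless snapshot of one channel state), not the paper's Markov model:

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """Capacity (bits per use) of a DMC with transition matrix P[x, y] = p(y|x)."""
    r = np.full(P.shape[0], 1.0 / P.shape[0])     # input distribution, start uniform
    for _ in range(max_iter):
        joint = r[:, None] * P                    # p(x, y)
        post = joint / joint.sum(axis=0, keepdims=True)   # posterior p(x | y)
        with np.errstate(divide="ignore"):
            log_post = np.where(P > 0, np.log(post), 0.0)
        # update: r(x) proportional to exp( sum_y p(y|x) log p(x|y) )
        r_new = np.exp((P * log_post).sum(axis=1))
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    # mutual information I(X; Y) at the fixed point equals the capacity
    joint = r[:, None] * P
    p_y = joint.sum(axis=0)
    mask = joint > 0
    info = (joint[mask] * np.log2(joint[mask] / (r[:, None] * p_y)[mask])).sum()
    return info, r

# Hypothetical two-input concentration channel; a higher molecule production
# rate would make the rows less noisy and raise the achievable rate.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
C, r_opt = blahut_arimoto(P)
print(f"capacity ≈ {C:.4f} bits/use, optimal input distribution ≈ {r_opt}")
```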
Abstract: In this paper, we investigate the redundancy of universal coding schemes on smooth parametric sources in the finite-length regime. We derive an upper bound on the probability of the event that a sequence of length n, chosen using Jeffreys' prior from the family of parametric sources with d unknown parameters, is compressed with a redundancy smaller than (1 − ε)(d/2) log n for any ε > 0. Our results also confirm that for large enough n and d, the average minimax redundancy provides a good estimate for the redundancy of most sources. Our result may be used to evaluate the performance of universal source coding schemes on finite-length sequences. Additionally, we precisely characterize the minimax redundancy for two-stage codes. We demonstrate that the two-stage assumption incurs a negligible redundancy, especially when the number of source parameters is large. Finally, we show that the redundancy is significant in the compression of small sequences.
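As background for the scale of these quantities (a standard result, quoted here as context rather than from the paper itself): Clarke and Barron's asymptotic expansion of the average minimax redundancy for a smooth d-parameter family with Fisher information I(θ) is

```latex
\bar{R}_n \;=\; \frac{d}{2}\log\frac{n}{2\pi e} \;+\; \log \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta \;+\; o(1)
```

so (d/2) log n is the leading-order benchmark against which the (1 − ε)(d/2) log n per-sequence bound above is measured.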
Many applications require data processing to be performed on individual pieces of data of finite size, e.g., files in cloud storage units and packets in data networks. However, traditional universal compression solutions do not perform well on finite-length sequences. Recently, we proposed a framework called memory-assisted universal compression that holds significant promise for reducing the amount of redundant data in finite-length sequences. The proposed compression scheme is based on the observation that it is possible to learn source statistics (by memorizing previous sequences from the source) at some intermediate entities and then leverage the memorized context to reduce the redundancy of the universal compression of finite-length sequences. We first present the fundamental gain of the proposed memory-assisted universal source coding over conventional universal compression (without memorization) for a single parametric source. Then, we extend and investigate the benefits of memory-assisted universal source coding when the data sequences are generated by a compound source, i.e., a mixture of parametric sources. We further develop a clustering technique within the memory-assisted compression framework to better utilize the memory by classifying the observed data sequences from a mixture of parametric sources. Finally, we demonstrate through computer simulations that the proposed joint memorization and clustering technique can achieve up to 6-fold improvement over the traditional universal compression technique when a mixture of non-binary Markov sources is considered.
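Below is an illustrative sketch of the memorization-plus-clustering idea under toy assumptions of my own (binary order-1 Markov sources, KT-smoothed ideal codelengths in place of an actual arithmetic coder, and a pick-the-best-model rule standing in for the paper's clustering technique):

```python
import math, random
from collections import defaultdict

ALPHABET = "ab"

def sample(p_stay, n, rng):
    """Binary Markov source: repeat the current symbol with probability p_stay."""
    s = [rng.choice(ALPHABET)]
    for _ in range(n - 1):
        s.append(s[-1] if rng.random() < p_stay else ("a" if s[-1] == "b" else "b"))
    return "".join(s)

def counts_of(seqs):
    """Memorize order-1 transition counts from previously observed sequences."""
    c = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for x, y in zip(s, s[1:]):
            c[x][y] += 1
    return c

def codelength(seq, c):
    """Ideal codelength in bits under a KT-smoothed order-1 model from memory c."""
    bits = 0.0
    for x, y in zip(seq, seq[1:]):
        row = c[x]
        tot = sum(row.values())
        bits -= math.log2((row[y] + 0.5) / (tot + 0.5 * len(ALPHABET)))
    return bits

rng = random.Random(0)
# Memorize long sequences from two different parametric sources ("clusters").
memory = {"sticky": counts_of(sample(0.9, 5000, rng) for _ in range(5)),
          "jumpy":  counts_of(sample(0.1, 5000, rng) for _ in range(5))}
test = sample(0.9, 200, rng)                      # short sequence, sticky source
no_mem = codelength(test, defaultdict(lambda: defaultdict(int)))
best = min(memory, key=lambda k: codelength(test, memory[k]))   # clustering step
print(f"no memory: {no_mem:.0f} bits; with cluster '{best}': "
      f"{codelength(test, memory[best]):.0f} bits")
```

The short test sequence is too small to learn its own statistics, which is exactly the finite-length regime where memorized context pays off; routing it to the wrong cluster's memory would instead inflate the codelength, which is why the classification step matters.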
In September 2017, the McAfee Labs quarterly report [2] estimated that brute-force attacks represent 20% of total network attacks, making them the most prevalent type of attack, ex aequo with browser-based vulnerabilities. These attacks sometimes have catastrophic consequences, and understanding their fundamental limits may play an important role in the risk assessment of password-secured systems and in the design of better security protocols. While some solutions exist to prevent online brute-force attacks that arise from a single IP address, attacks performed by botnets are more challenging. In this paper, we analyze these distributed attacks using a simplified model. Our aim is to understand the impact of distribution and asynchronization on the overall computational effort necessary to breach a system. Our result is based on Guesswork, a measure of the number of queries (guesses) an adversary requires before finding a correct sequence, such as a password, in an optimal attack. Guesswork is a direct surrogate for the time and computational effort of guessing a sequence from a set of sequences with associated likelihoods. We model the lack of synchronization by a worst-case optimization in which the queries made by multiple adversarial agents are received in the worst possible order for the adversary, resulting in a min-max formulation. We show that, even without synchronization, and for sequences of growing length, the asymptotically optimal performance is achievable using randomized guesses drawn from an appropriate distribution. Randomization is therefore key for distributed asynchronous attacks: asynchronous guessers can asymptotically perform brute-force attacks as efficiently as synchronized guessers.

I. INTRODUCTION

From online banking [3] and bitcoin wallets [4], to secure shell (SSH), file transfer protocol (FTP), and telnet servers [5], as well as governmental institutions [6], brute-force attacks have proven to be one of the major threats to network security. Despite the computational burden on the attacker, brute-force attacks are prevalent. This can be explained from multiple points of view. First, passwords are often weaker than they ought to be, meaning that attackers can hope to find the correct password well before they have queried a significant portion of the possible password strings. Next, attacks through huge networks of compromised computers (botnets) are now more common, giving the attacker access to significant computational resources. More critically, these botnets help disguise the attack by distributing it. Indeed, a main defense against online brute-force attacks is to set up a system that detects and blocks excessive queries from any one user, as determined by IP address. As such, an attacker using only a single IP address would be limited to a fixed number of guesses. In recent years, however, this defense has been circumvented by massive botnets, each bot querying potential passwords. In this situation, it is hard to detect legitimate...
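To make the ordered-versus-randomized contrast concrete, here is a toy numerical sketch under simplifying assumptions of my own (an i.i.d. letter model for passwords and a memoryless randomized guesser with tilt q ∝ √p); it illustrates expected Guesswork only, not the paper's min-max analysis of adversarially ordered queries:

```python
import numpy as np

# Toy i.i.d. letter model for passwords (assumed setup for illustration;
# the paper treats general sequence sources).
letter_p = np.array([0.5, 0.3, 0.2])
length = 6
p = letter_p.copy()
for _ in range(length - 1):
    p = np.outer(p, letter_p).ravel()   # product distribution over all strings

# 1) Synchronized optimal attack: query strings in decreasing probability.
order = np.sort(p)[::-1]
expected_ordered = np.sum(order * np.arange(1, order.size + 1))

# 2) Memoryless randomized attack: each asynchronous agent draws i.i.d.
#    guesses from q ∝ sqrt(p); hitting x is geometric with mean 1/q(x).
q = np.sqrt(p); q /= q.sum()
expected_random = np.sum(p / q)         # = (sum of sqrt(p))^2 by Cauchy-Schwarz

print(f"ordered guessing : E[G] ≈ {expected_ordered:,.1f} queries")
print(f"randomized q∝√p  : E[G] ≈ {expected_random:,.1f} queries")
```

Both expectations grow with the same exponent in the password length, namely the Rényi entropy of order 1/2 of the letter distribution, which is the sense in which unsynchronized randomized guessers asymptotically match a synchronized brute-force attack.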