I. ABSTRACT

We are at the cusp of a technological revolution driven mainly by advances in hardware technology, network architectural support, and the ability to process big data. The hardware industry, driven by Moore's law, continues to deliver steadily increasing computing capability at diminishing cost. Building on these hardware advances, platforms for distributed storage and processing of big data, such as Apache Hadoop, make it possible to scalably and reliably process massive amounts of data on a cluster of commodity servers. In parallel, a revolution is under way in the networking world: the emergence of the software-defined networking (SDN) architecture promises to rectify the ossified architecture of the Internet, allowing network managers to program the network flexibly. In this paper, we argue that SDN and big data, two technological trends that promise to revolutionize all aspects of modern life, can leverage the freedom afforded by each other to jointly increase their value proposition. In particular, SDN can utilize large amounts of operational data to optimize network behavior, while big data platforms can benefit from the flexible architectural support provided by SDN. We provide a brief, self-contained exposition of opportunities for SDN and big data to synergize and jointly optimize, point out open research issues, and identify directions for future work.

12th International Conference on Frontiers of Information Technology, 978-1-4799-7505-1/14
Federated learning (FL) is an efficient learning framework that supports distributed machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions. Recent advances in FL use learning based on a predefined architecture for all clients. However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be optimal for every client in FL. Motivated by this challenge, in this work we introduce SPIDER, an algorithmic framework that aims to Search PersonalIzed neural architectures for feDERated learning. SPIDER is designed around two unique features: (1) it alternately optimizes one architecture-homogeneous global model (Supernet) in a generic FL manner and one architecture-heterogeneous local model that is connected to the global model by weight-sharing-based regularization; and (2) it obtains the architecture-heterogeneous local model through a novel neural architecture search (NAS) method that progressively selects an optimal subnet, using operation-level perturbation of the accuracy value as the selection criterion. Experimental results demonstrate that SPIDER outperforms other state-of-the-art personalization methods, and the searched personalized architectures are more inference-efficient.
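The operation-level perturbation criterion mentioned above can be illustrated with a minimal sketch: for each candidate operation on a supernet edge, measure how much validation accuracy drops when that operation is removed, and keep the operation whose removal hurts accuracy the most. All names here (`select_operation`, `evaluate`, the toy evaluator) are hypothetical illustrations, not SPIDER's actual API or training loop.

```python
# Hedged sketch of perturbation-based operation selection (illustrative only).
# Assumption: `evaluate` returns validation accuracy for a given set of
# active operations on one supernet edge; this stands in for a real
# forward pass over a client's validation data.

def select_operation(candidate_ops, evaluate):
    """Pick the operation whose removal causes the largest accuracy drop.

    candidate_ops : list of operation names on one edge
    evaluate      : callable(active_ops) -> validation accuracy
    """
    base_acc = evaluate(candidate_ops)
    drops = {}
    for op in candidate_ops:
        # Perturb the edge by removing one operation at a time.
        perturbed = [o for o in candidate_ops if o != op]
        drops[op] = base_acc - evaluate(perturbed)
    # The most important operation is the one with the largest drop.
    return max(drops, key=drops.get)

# Toy usage with a fake evaluator in which "conv3x3" contributes most.
contribution = {"conv3x3": 0.30, "skip": 0.05, "maxpool": 0.10}

def fake_evaluate(active_ops):
    return 0.50 + sum(contribution[o] for o in active_ops)

best = select_operation(["conv3x3", "skip", "maxpool"], fake_evaluate)
print(best)  # conv3x3
```

In a progressive search, this selection would be repeated edge by edge, finalizing one operation at a time while the remaining supernet weights continue to be trained.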