Mobile Edge Computing Caching System (MECCS) realizes low-latency, high-bandwidth content access and enables seamless 4K Ultra High Definition (UHD) video streaming by caching content in advance at the edge servers of a cellular network. The objective of MECCS is to maximize the cache hit ratio by caching highly popular video content while using the storage capacity of edge servers efficiently. Most existing caching schemes estimate the popularity of each content item from its request history, either offline or online, reflecting the characteristics of Video-on-Demand (VoD) content, whose popularity varies over the long term. Live streaming, however, exhibits Short-term Time-Varying (STV) characteristics, so estimating popularity from content request history does not guarantee acceptable cache hit performance for live streaming. In this paper, we propose a request model that estimates the popularity distribution under STV characteristics. We also propose an STV request model-based chunk caching scheme that caches highly popular content and improves the cache hit ratio across multiple live channels while efficiently using the storage capacity of collaborative edge servers. Experimental results show that the proposed scheme outperforms existing schemes in terms of cache hit ratio and backhaul traffic.
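As a minimal sketch of the short-term popularity estimation the abstract describes, the snippet below counts chunk requests over a small sliding window so that stale requests age out quickly; the class name and window size are hypothetical illustrations, not the paper's actual scheme.

```python
from collections import Counter, deque

class SlidingWindowPopularity:
    """Estimate chunk popularity over the last `window` requests.

    A sliding window is one simple way to track short-term
    time-varying (STV) demand: only recent requests contribute,
    so popularity shifts in live channels are reflected quickly.
    """

    def __init__(self, window=1000):
        self.window = window
        self.requests = deque()   # request history, oldest first
        self.counts = Counter()   # per-chunk counts inside the window

    def record(self, chunk_id):
        """Record one request and evict the oldest if over capacity."""
        self.requests.append(chunk_id)
        self.counts[chunk_id] += 1
        if len(self.requests) > self.window:
            old = self.requests.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def top_k(self, k):
        """Return the k currently most popular chunk ids."""
        return [chunk for chunk, _ in self.counts.most_common(k)]
```

An edge server could cache the chunks returned by `top_k` up to its storage capacity; collaborative servers would additionally need to partition that set among themselves.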
The scale of model parameters and datasets is growing rapidly as high accuracy is pursued in many areas. Training a large-scale deep neural network (DNN) model requires an enormous amount of computation and memory; therefore, parallelization techniques for training large-scale DNN models have attracted attention. A number of approaches have been proposed to parallelize large-scale DNN models, but these schemes lack scalability because of their long communication time and limited worker memory, and they often sacrifice accuracy to reduce communication time. In this work, we propose an efficient parallelism strategy named group hybrid parallelism (GHP) to minimize training time without any accuracy loss. Two key ideas inspire our approach. First, grouping workers and training them by group reduces unnecessary communication overhead among workers, saving a large amount of network resources when training large-scale networks. Second, mixing data and model parallelism can reduce communication time and mitigate the worker memory issue; data and model parallelism are complementary, so combining them shortens training time. We analyze the training time of data and model parallelism, and based on this training time model, we derive heuristics that determine the parallelization strategy minimizing training time. We evaluate group hybrid parallelism against existing parallelism schemes, and our experimental results show that group hybrid parallelism outperforms them.
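A toy cost model can illustrate the data-versus-model parallelism trade-off sketched above. The formulas below (a ring all-reduce term for data parallelism, a partition-boundary activation term for model parallelism) are illustrative assumptions, not the GHP training-time model from the paper.

```python
def data_parallel_time(comp_time, param_bytes, bandwidth, workers):
    # Computation divides across workers; gradients are all-reduced
    # each step (ring all-reduce moves ~2*(N-1)/N of the parameter volume).
    comm = 2 * (workers - 1) / workers * param_bytes / bandwidth
    return comp_time / workers + comm

def model_parallel_time(comp_time, activation_bytes, bandwidth, workers):
    # Layers are partitioned; each of the (N-1) boundaries exchanges
    # activations, but sequential layer dependencies keep per-sample
    # compute serialized in this naive model.
    comm = (workers - 1) * activation_bytes / bandwidth
    return comp_time + comm

def choose_parallelism(comp_time, param_bytes, activation_bytes,
                       bandwidth, workers):
    """Pick whichever scheme the toy model predicts to be faster."""
    dp = data_parallel_time(comp_time, param_bytes, bandwidth, workers)
    mp = model_parallel_time(comp_time, activation_bytes, bandwidth, workers)
    return "data" if dp <= mp else "model"
```

Under this model, parameter-heavy networks push the decision toward model parallelism (gradient synchronization dominates), while activation-heavy networks favor data parallelism; a hybrid scheme like GHP would apply such a comparison per group rather than globally.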
In the sciences, workflow management systems (WMS) coordinate collaborative tasks among researchers from many organizations and compose high-performance computing systems from globally distributed computing resources. In addition, with the maturity of cloud computing technology, many studies have tried to enhance economic feasibility and fault tolerance. While executing a workflow application, the workflow scheduler within a WMS must recognize the dynamic status of resources and assign an appropriate resource to each task. Through a negotiation procedure, users can request lower processing cost or shorter completion time. However, satisfying these multiple objectives at the same time is hard to achieve, so existing workflow scheduling schemes seek near-optimal solutions with heuristic approaches. In this paper, we propose a heuristic workflow scheduling scheme that combines Petri-net workflow modeling, resource-type mapping according to workload ratio, and policy-based task division to guarantee the deadline constraint with minimum budget consumption.
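The deadline-constrained, budget-minimizing scheduling goal above can be sketched with a greedy heuristic for the simplest case of a linear task chain; the function, its inputs, and the feasibility check are hypothetical illustrations, not the paper's Petri-net-based algorithm.

```python
def schedule_chain(tasks, resource_types, deadline):
    """Greedily map each task in a linear chain to the cheapest
    resource type that still lets the remaining tasks (run on the
    fastest type, best case) finish within the deadline.

    tasks          -- list of workload sizes
    resource_types -- list of (speed, cost_per_time_unit) pairs
    Returns (plan, elapsed_time, total_cost).
    """
    fastest = max(speed for speed, _ in resource_types)
    plan, elapsed, total_cost = [], 0.0, 0.0
    for i, work in enumerate(tasks):
        best_case_rest = sum(tasks[i + 1:]) / fastest
        # Feasible choices: finishing this task plus a best-case
        # remainder must not exceed the deadline.
        candidates = [
            (rate * work / speed, speed, rate)
            for speed, rate in resource_types
            if elapsed + work / speed + best_case_rest <= deadline
        ]
        if not candidates:
            raise ValueError("deadline infeasible for this chain")
        cost, speed, rate = min(candidates)  # cheapest feasible type
        plan.append((speed, rate))
        elapsed += work / speed
        total_cost += cost
    return plan, elapsed, total_cost
```

A real WMS scheduler must additionally handle branching and joining in the workflow graph (which the Petri-net model captures) and re-plan as resource status changes; this sketch only shows the cost-versus-deadline trade-off for one path.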