Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to support emerging use cases such as virtual reality and in-network image processing is to introduce virtualized network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation of the core network. While placement of vNFs has been studied before, it has so far mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), without taking into consideration network conditions such as end-to-end latency, constantly changing network dynamics, and user mobility patterns. In this paper, we first formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. Furthermore, we present a way to dynamically reschedule the optimal placement of vNFs based on temporal network-wide latency fluctuations, using optimal stopping theory. We evaluate our dynamic scheduler over a simulated nationwide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared to other schedulers (e.g., periodic and always-on scheduling of a new placement), and offers Quality of Service guarantees by not exceeding a maximum number of latency violations that can be tolerated by certain applications.
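To illustrate the kind of stopping rule this abstract alludes to, the sketch below triggers a new vNF placement once a tolerated budget of latency (SLA) violations is exhausted. This is a minimal sketch under assumed names and thresholds (`should_reschedule`, `sla_ms`, `max_violations` are all hypothetical), not the paper's actual model.

```python
import random

def should_reschedule(latency_samples, sla_ms, max_violations):
    """Illustrative stopping rule: trigger a new vNF placement as soon as
    the number of observed SLA (latency) violations reaches the budget
    that the application can tolerate. Returns the stopping time step,
    or None if no rescheduling is needed over this horizon."""
    violations = 0
    for t, latency in enumerate(latency_samples):
        if latency > sla_ms:
            violations += 1
        if violations >= max_violations:
            return t  # stop here: compute and apply a new placement
    return None

# Hypothetical network-wide latency trace (milliseconds).
random.seed(7)
trace = [random.gauss(20, 5) for _ in range(100)]
decision = should_reschedule(trace, sla_ms=30, max_violations=3)
```

A real optimal-stopping formulation would weigh the cost of migrating vNFs against the expected cost of future violations; the hard violation budget above is only the simplest instance of a sequential decision rule.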
Location-based mobile services have been in use, and studied, for a long time. With the proliferation of wireless networking technologies, users are increasingly interested in advanced services that render the surrounding environment (i.e., the building) highly intelligent and that significantly facilitate their activities. In this paper our focus is on indoor navigation, one of the most important location-based services. Existing approaches to indoor navigation are driven by geometric information and neglect important aspects such as the semantics of space, user capabilities, and context. The derived applications are not intelligent enough to catalytically contribute to the pervasive computing vision. In this paper, a novel navigation mechanism is introduced. This navigation scheme is enriched with user profiles and the adoption of an ontological framework. These enhancements introduce a series of technical challenges that are extensively discussed throughout the paper.
Abstract. In this article, we report on software architectures for context awareness in mobile computing environments and sensor-centric systems, and discuss context modeling issues. Defining an architecture to support context-aware applications for mobile devices implies a scalable description of how contextual information is represented and of the abstraction models capable of handling such information. Using sensors to retrieve contextual information (e.g., user location) leads to a sensor network scheme that provides services to the application level. Context-aware systems are characterised by operations for capturing, collating, storing, and disseminating contextual information at the lowest level, and by aggregating it into increasingly more abstract models. In this article, we introduce context-aware systems in mobile computing environments, review the basic mechanisms underlying the operation of such systems, and discuss notable work and important architectures in the area.
Abstract. We study a novel solution to executing aggregation (and specifically COUNT) queries over large-scale data. The proposed solution is generally applicable, in the sense that it can be deployed in environments in which data owners may or may not restrict access to their data, allowing only 'aggregation operators' to be executed over them. To achieve this, it is based on predictive analytics, driven by queries and their results. We propose a machine learning (ML) framework for the task (which can be adapted for different aggregates as well). We focus on the widely used set-cardinality (i.e., COUNT) aggregation operator, as it is a fundamental operator both for internal data system optimisations and for aggregation-query analytics. We contribute a novel, query-driven ML model whose goals are to: (i) learn the query space (access patterns); (ii) associate (complex) aggregation queries with the cardinality of their results; (iii) define query similarity and use it to predict the cardinality of the answer set of an ad-hoc incoming query. Our ML model incorporates incremental learning algorithms to ensure high prediction accuracy even when both the querying patterns and the underlying data change. The significance of our contribution lies in that it (i) is the only query-driven solution applicable over general environments that include restricted-access data, (ii) offers incremental learning adjusted for arriving ad-hoc queries, which is well suited for big data analytics, and (iii) offers performance (in terms of prediction accuracy, prediction time, and memory requirements) superior to data-centric approaches. We provide a comprehensive performance evaluation of our model, evaluating its sensitivity and comparative advantages versus acclaimed data-centric methods (self-tuning histograms, sampling, and multidimensional histograms).
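The core idea of predicting a COUNT from similar past queries can be sketched very simply: represent each query as a vector, remember (query, cardinality) pairs, and predict an incoming query's cardinality from its nearest past queries. The class and its weighting scheme below are hypothetical illustrations, not the paper's actual model (which uses incremental learning rather than a plain history store).

```python
import math

class QueryDrivenCountEstimator:
    """Illustrative query-driven COUNT estimator: stores past
    (query-vector, cardinality) pairs and predicts an incoming query's
    cardinality as an inverse-distance weighted average over its k
    nearest past queries."""

    def __init__(self, k=3):
        self.k = k
        self.history = []  # list of (query_vector, observed_count)

    def observe(self, qvec, count):
        self.history.append((qvec, count))

    def predict(self, qvec):
        # k nearest past queries by Euclidean distance in query space
        nearest = sorted(
            (math.dist(qvec, v), c) for v, c in self.history
        )[: self.k]
        weights = [1.0 / (d + 1e-9) for d, _ in nearest]
        return sum(w * c for w, (_, c) in zip(weights, nearest)) / sum(weights)

# Hypothetical range queries encoded as (centre, radius) vectors.
est = QueryDrivenCountEstimator(k=2)
est.observe((0.2, 0.1), 120)
est.observe((0.8, 0.1), 40)
est.observe((0.5, 0.3), 75)
pred = est.predict((0.21, 0.1))  # close to the first query, so near 120
```

Note that the estimator never touches the underlying data, which is what makes the query-driven approach usable over restricted-access datasets.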
Mobile Edge Computing (MEC) has emerged as a new computing paradigm to improve the QoS of users' applications. A key challenge in MEC is computation (task/data) offloading, whose goal is to enhance mobile devices' capabilities so that they can meet the requirements of new applications. Computation offloading raises the questions of where and when to offload data in order to perform computing (analytics) tasks. In this paper, we tackle this problem by adopting the principles of Optimal Stopping Theory, contributing two time-optimized sequential decision-making models. A performance evaluation using real-world data sets compares our models with baseline deterministic and stochastic models. The results show that our approach optimizes the offloading decision in both single-user and competitive multi-user scenarios.
Index Terms: Mobile edge computing, task offloading, optimal stopping theory, sequential decision making.
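A standard way to obtain a time-optimized sequential offloading rule is backward induction over a finite horizon: at each step, offload now if the observed delay is below a reservation level, otherwise pay a waiting cost and continue. The sketch below computes those levels under the strong simplifying assumption of i.i.d. Uniform(0,1) delays; the distribution, costs, and function name are illustrative assumptions, not the paper's models.

```python
def reservation_levels(horizon, wait_cost):
    """Backward induction for a finite-horizon optimal stopping problem:
    offloading at step t costs the observed delay X_t ~ Uniform(0,1),
    waiting costs `wait_cost` per step, and offloading is forced at the
    deadline. Offload at step t iff the observed delay is below v[t]."""
    v = [0.0] * horizon
    v[-1] = 0.5  # forced offload at the deadline: E[X] for Uniform(0,1)
    for t in range(horizon - 2, -1, -1):
        nxt = v[t + 1]
        # v[t] = wait_cost + E[min(X, v[t+1])]; for X ~ U(0,1) and
        # nxt <= 1, E[min(X, nxt)] = nxt - nxt^2 / 2.
        v[t] = wait_cost + (nxt - nxt * nxt / 2.0)
    return v

levels = reservation_levels(horizon=5, wait_cost=0.05)
# Levels increase toward the deadline: the rule is strict early on
# (worth waiting for a low-delay opportunity) and lenient near the end.
```

The same backward-induction template carries over to non-uniform delay distributions by replacing the closed-form expectation with an empirical one.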
We introduce an edge-centric parametric predictive analytics methodology that supports real-time regression model caching and selective forwarding at the network edge. Communication overhead is significantly reduced because only a model's parameters and sufficient statistics are disseminated instead of raw data, while high analytics quality is maintained. Moreover, sophisticated model selection algorithms are introduced to combine diverse local models for predictive modeling without transferring and processing raw data at edge gateways. We provide mathematical modeling and a performance and comparative assessment over real data, showing the benefits of the methodology in edge computing environments.
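The communication saving behind parameter forwarding is easy to see with a toy example: an edge node fits a local regression and ships only its parameters and sample count to the gateway, rather than the raw stream. This is a minimal sketch of the general idea; the function name and the choice of ordinary least squares are assumptions, not the paper's specific models or statistics.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, run locally on an edge node.
    Returns (slope, intercept, sample_count) -- the only values that need
    to be forwarded to the gateway instead of the raw readings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx, n

# Hypothetical local sensor stream: 50 (x, y) readings...
xs = list(range(50))
ys = [2.0 * x + 1.0 for x in xs]
slope, intercept, n = fit_linear(xs, ys)
# ...summarised as three numbers for the gateway instead of 100 raw values.
```

In the methodology's terms, the gateway can then combine such parameter tuples from many nodes (e.g., by sample-count weighting) without ever receiving the underlying data.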