New technologies and the ubiquitous use of smartphones have opened the possibilities for more convenient, affordable, fast, and safe options in urban transportation. This has led to the emergence of mobility-on-demand (MoD) systems, such as Uber and Lyft, which aim to provide fast and reliable mobility catered to individual needs. At the same time, automated vehicle (AV) technology has advanced at an impressive pace. Corporations such as Google and Tesla (1) have been in a race to develop a fully automated vehicle. The combination of these two promising technologies, known as automated mobility-on-demand (AMoD), has recently attracted interest among both researchers and industry (for example, Uber (2) has started testing AV programs in several states in the US). The term AMoD (3) designates a service similar to MoD or taxi, with the difference that vehicle operations are driverless. AMoD combines the benefits of MoD and AVs in several aspects. First, operational cost is drastically reduced, given the complete removal of driver labor costs and the superior energy efficiency of AVs. Furthermore, negative externalities, such as emissions, travel time uncertainty, and accidents, may also be reduced, as already observed for MoD (4) and for AVs (5). The latter study also observes that AMoD will increase road network utilization, making it possible to transport more passengers with less congestion compared with privately owned cars. Fagnant and Kockelman (6) found that AV benefits would amount to between $2,690 and $3,900 annually per vehicle, incorporating decreases in insurance, parking costs, and traffic congestion. It is clear that AMoD is a disruptive technology that will deeply impact the transportation system. Most of the literature (7-11) has focused on the efficiency of AMoD and AVs, in terms of road movement and fleet management. However, only a handful of recent studies have shown the importance …
Although an important goal of caching is traffic reduction, a perhaps even more important aspect follows from this achievement: the reduction of Internet Service Provider (ISP) operational costs that comes as a consequence of the reduced load on transit and provider links. Surprisingly, to date this crucial aspect has not been properly taken into account in cache design. In this paper, we show that the classic caching efficiency indicator, the hit ratio, conflicts with cost. We therefore propose a mechanism whose goal is the reduction of cost; in particular, we design a Cost-Aware (CoA) cache decision policy that, leveraging price heterogeneity among external links, preferentially stores the objects that the ISP has to retrieve through the most expensive links. We provide a model of our mechanism, based on Che's approximation, and, by means of a thorough simulation campaign, we contrast it with traditional cost-blind schemes, showing that CoA yields a significant cost saving that is consistent over a wide range of scenarios. We show that CoA is easy to implement and robust, making the proposal of practical relevance.
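The idea of a cost-aware admission policy can be sketched in a few lines: on a miss, an object fetched over an expensive link is admitted to the cache with higher probability than one fetched over a cheap link. The class name, the linear admission rule, and the LRU eviction are illustrative assumptions, not the paper's exact CoA specification.

```python
import random


class CostAwareCache:
    """Minimal sketch of a cost-aware (CoA) admission policy.

    On a miss, an object retrieved over a link priced `price` is admitted
    with probability price / max_price, so objects behind expensive links
    are cached more often. Eviction is plain LRU. All names and the exact
    admission rule are assumptions for illustration.
    """

    def __init__(self, capacity, max_price):
        self.capacity = capacity
        self.max_price = max_price
        self.store = {}  # object id -> last-access timestamp
        self.clock = 0

    def request(self, obj, price):
        self.clock += 1
        if obj in self.store:              # hit: refresh recency
            self.store[obj] = self.clock
            return True
        # miss: cost-aware probabilistic admission
        if random.random() < price / self.max_price:
            if len(self.store) >= self.capacity:
                lru = min(self.store, key=self.store.get)
                del self.store[lru]        # evict least recently used
            self.store[obj] = self.clock
        return False
```

A cost-blind policy would admit every missed object with the same probability; here the price ratio skews the cache content toward the objects whose re-retrieval the ISP pays most for.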
Edge Computing (EC) consists of deploying computational resources, e.g., memory and CPUs, at the edge of the network, e.g., at base stations and access points, and running there part of the computation currently executed in the Cloud. This approach promises to reduce latency and inter-domain traffic and to enhance user experience. Since resources at the edge are scarce, resource allocation is crucial for EC. While most studies assume users interact directly with the edge by submitting a sequence of tasks, we instead consider that users will interact with different Service Providers (SPs), as they currently do on the Web. We therefore consider the case of a Network Operator (NO) that owns the resources at the edge and must decide how much resource to allocate to the different tenants (SPs). We propose MORA, a polynomial-time strategy that allows the NO to maximize its utility, which can be inter-domain traffic savings, improved user QoE, or other metrics of interest. The core of MORA is that (i) it exploits service elasticity, i.e., the fact that services can adapt to the resources allocated by the NO and rely on a remote Cloud for any excess computation, (ii) it is suitable for micro-service architectures, which decompose a single service into a set of components that MORA places in the different computational nodes of the edge, and (iii) it copes with multi-dimensional resources, e.g., memory and CPUs. After analyzing the properties of the algorithm, we show numerically that it performs close to the optimum. To guarantee reproducibility, the numerical evaluation is performed on publicly available traces from Google and Alibaba clusters as well as in synthetic scenarios, and our code is open source.
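Elasticity-aware allocation of the kind described above can be sketched as a greedy marginal-utility rule: hand out resource units one at a time to the tenant whose next unit yields the largest utility gain. The function names and the single-dimension simplification are assumptions for illustration; MORA itself also handles micro-service placement and multiple resource dimensions.

```python
import heapq


def greedy_allocate(capacity, marginal_utility, n_sps):
    """Greedy elastic allocation sketch in the spirit of MORA.

    marginal_utility(sp, k) returns the NO's utility gain from giving
    service provider `sp` its k-th resource unit (assumed diminishing
    in k: services are elastic and push excess computation to the
    Cloud). Units go one at a time to the SP with the largest gain.
    """
    alloc = [0] * n_sps
    # Max-heap via negated gains: (−gain of next unit, sp index).
    heap = [(-marginal_utility(sp, 1), sp) for sp in range(n_sps)]
    heapq.heapify(heap)
    for _ in range(capacity):
        neg_gain, sp = heapq.heappop(heap)
        if -neg_gain <= 0:
            break  # no SP benefits from more edge resources
        alloc[sp] += 1
        heapq.heappush(heap, (-marginal_utility(sp, alloc[sp] + 1), sp))
    return alloc
```

With diminishing marginal utilities this greedy rule is the classic optimal strategy for separable concave maximization under a single capacity, which is consistent with the abstract's claim of near-optimal polynomial-time behavior.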
Caching is frequently used by Internet Service Providers as a viable technique to reduce the latency perceived by end users while jointly offloading network traffic. While the cache hit ratio is generally considered in the literature as the dominant performance metric for such systems, in this paper we argue that a critical piece has so far been neglected. Adopting a radically different perspective, we explicitly account for the cost of content retrieval, i.e., the cost associated with the external bandwidth an ISP needs to retrieve the contents requested by its customers. Interestingly, we discover that classical cache provisioning techniques that maximize cache efficiency (i.e., the hit ratio) lead to suboptimal solutions with higher overall cost. To expose this mismatch, we propose two optimization models that either minimize the overall cost or maximize the hit ratio, jointly providing cache sizing, object placement, and path selection. We formulate a polynomial-time greedy algorithm to solve the two problems and analytically prove its optimality. We provide numerical results and show that significant cost savings are attainable via a cost-aware design.
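The mismatch between hit-ratio maximization and cost minimization can be seen with a toy placement rule: rank objects by popularity (maximizing hits) versus by popularity times retrieval cost (minimizing external cost), and observe that the two fill the cache differently. This is a simplified, unit-size-object illustration of the paper's point, not its actual greedy algorithm, which also performs cache sizing and path selection.

```python
def place_objects(objects, cache_size, key):
    """Fill the cache with the objects ranking highest under `key`.

    objects: list of (name, popularity, cost_per_retrieval) tuples.
    key=popularity           -> maximize hit ratio.
    key=popularity * cost    -> minimize external retrieval cost.
    """
    ranked = sorted(objects, key=key, reverse=True)
    return {name for name, _, _ in ranked[:cache_size]}


# Two objects fit in the cache; 'b' is less popular but far costlier
# to fetch over its external link.
objs = [("a", 100, 1.0), ("b", 40, 5.0), ("c", 60, 1.0)]
hit_optimal = place_objects(objs, 2, key=lambda o: o[1])          # by popularity
cost_optimal = place_objects(objs, 2, key=lambda o: o[1] * o[2])  # by saved cost
```

Here the hit-optimal cache keeps {a, c} while the cost-optimal one keeps {a, b}: caching `b` forgoes some hits but avoids the expensive link entirely, which is exactly the suboptimality of hit-ratio-driven provisioning the abstract describes.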
Mobility-on-demand (MoD) systems have recently emerged as a promising paradigm for sustainable personal urban mobility in cities. In the context of multi-agent simulation technology, the state of the art lacks a platform that captures the dynamics between decentralized driver decision-making and centralized coordinated decision-making. This work aims to fill this gap by introducing a comprehensive framework that models various facets of MoD, namely heterogeneous MoD driver decision-making and coordinated fleet management, within SimMobility, an agent- and activity-based demand model integrated with a dynamic multi-modal network assignment model. To facilitate such a study, we propose an event-based modeling framework. Behavioral models were estimated to characterize the decision-making of drivers using a GPS dataset from a major MoD fleet operator in Singapore. The proposed framework was designed to accommodate the behaviors of multiple on-demand services, such as traditional MoD, Lyft-like services, and automated MoD (AMoD) services, which interact with traffic simulators and a multi-modal transportation network. We demonstrate the benefits of the proposed framework through a large-scale case study in Singapore comparing fully decentralized traditional MoD with future AMoD services in a realistic simulation setting. We found that AMoD results in a more efficient service even with increased demand. Parking strategies and fleet sizes will also have an effect on user satisfaction and network performance.
The paper presents the system optimization (SO) framework of Tripod, an integrated bi-level transportation management system aimed at maximizing energy savings of the multi-modal transportation system. From the user's perspective, Tripod is a smartphone app accessed before performing trips. The app proposes a series of alternatives, each consisting of a combination of departure time, mode, and route. Each alternative is rewarded with an amount of tokens which the user can later redeem for goods or services. The role of SO is to compute the optimized set of tokens associated with the available alternatives so as to minimize system-wide energy consumption under a limited token budget. To do so, the alternatives that guarantee the largest energy reduction must be rewarded with more tokens. SO is multi-modal, in that it considers private cars, public transit, walking, carpooling, and so forth. Moreover, it is dynamic, predictive, and personalized: the same alternative is rewarded differently depending on the current and predicted future condition of the network and on the individual profile. The paper presents a method to solve this complex optimization problem and describes the system architecture, the multi-modal simulation-based optimization model, and the heuristic method for the online computation of the optimized token allocation. Finally, it showcases the framework with simulation results.
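The budget-constrained rewarding step described above can be sketched with a greedy ratio rule: spend tokens first on the alternatives with the best energy saving per token. This greedy heuristic and all names below are assumptions for illustration; Tripod's actual SO is a dynamic, personalized, simulation-based optimization.

```python
def allocate_tokens(alternatives, budget):
    """Greedy sketch of token allocation under a budget.

    alternatives: list of (name, tokens_needed, expected_energy_saving)
    tuples. Alternatives are rewarded in decreasing order of energy
    saving per token until the token budget is exhausted.
    """
    ranked = sorted(alternatives, key=lambda a: a[2] / a[1], reverse=True)
    plan, remaining = {}, budget
    for name, tokens, saving in ranked:
        if tokens <= remaining:   # reward only if the budget allows it
            plan[name] = tokens
            remaining -= tokens
    return plan


# Hypothetical alternatives: (name, tokens needed, expected energy saving).
alts = [("shift_departure", 5, 50), ("take_transit", 10, 60), ("carpool", 8, 40)]
plan = allocate_tokens(alts, budget=15)
```

With a budget of 15 tokens, the two alternatives with the highest saving-per-token ratios are rewarded and the third is skipped, mirroring the abstract's requirement that the largest energy reductions receive the most tokens.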