Autonomous cars controlled by artificial intelligence are increasingly being integrated into the transport portfolios of cities, with strong repercussions for the design and sustainability of the built environment. This paper sheds light on the urban transition to autonomous transport in a threefold manner. First, we advance a theoretical framework to understand the diffusion of autonomous cars in cities, on the basis of three interconnected factors: social attitudes, technological innovation and urban politics. Second, we draw upon an in-depth survey conducted in Dublin (1,233 respondents) to provide empirical evidence of (a) the public interest in autonomous cars and the intention to use them once available, (b) the fears and concerns that individuals have regarding autonomous vehicles and (c) how people intend to employ this new form of transport. Third, we use the empirical evidence generated via the survey as a stepping stone to discuss possible urban futures, focusing on the changes in urban design and sustainability that the transition to autonomous transport is likely to trigger. Interpreting the data through the lens of smart and neoliberal urbanism, we picture a complex urban geography characterized by shared and private autonomous vehicles, human drivers and artificial intelligences overlapping and competing for urban space.
Applications such as generator scheduling, household smart-device scheduling, transmission-line overload management and microgrid islanding autonomy all play key roles in the smart grid ecosystem. Management of these applications could benefit from short-term load prediction, which has been achieved successfully on large-scale systems such as national grids. At much smaller scales, however, such as the load of a single transformer, prediction is considerably more difficult. This paper examines several approaches to day-ahead and week-ahead prediction of the electrical load of a community of houses supplied by a common residential transformer, in particular: artificial neural networks; fuzzy logic; auto-regression; auto-regressive moving average; auto-regressive integrated moving average; and wavelet neural networks. In our evaluation, the methods use pre-recorded electrical load data augmented with weather information. The data come from a smart-meter trial that took place in Ireland during 2009-2010, which registered individual household consumption for 17 months. Two scenarios are investigated, one with 90 houses and another with 230 houses. Results for the two scenarios are compared and the performance of the evaluated prediction methods is discussed.
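As a minimal, hypothetical illustration of one family of methods the abstract names, the sketch below fits a plain auto-regressive model by least squares to a synthetic half-hourly load series and rolls it forward to produce a day-ahead (48-slot) forecast. The data, lag order and accuracy threshold are invented for the example; the paper's evaluation uses the recorded smart-meter data with weather information.

```python
import numpy as np

def fit_ar(y, p):
    """Fit AR(p) coefficients by ordinary least squares on lagged values."""
    X = np.array([y[i - p:i] for i in range(p, len(y))])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_ar(y, coef, steps):
    """Roll the fitted model forward, feeding predictions back in as lags."""
    hist = list(y[-len(coef):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist))
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(48 * 28)  # four weeks of half-hourly readings (synthetic)
load = 50 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 2, t.size)

train, actual = load[:-48], load[-48:]   # hold out the final day
coef = fit_ar(train, p=48)               # one full day of lags
pred = forecast_ar(train, coef, steps=48)
mape = np.mean(np.abs((actual - pred) / actual)) * 100
print(f"day-ahead MAPE: {mape:.1f}%")
```

On this toy seasonal series the lag-48 structure carries most of the signal, which is why a day of lags suffices; real residential load at transformer scale is far noisier, which is the difficulty the paper addresses.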
Shared mobility-on-demand systems can improve the efficiency of urban mobility through reduced vehicle ownership and parking demand. However, some issues in their implementation remain open, most notably rebalancing non-occupied vehicles to meet geographically uneven demand, as is the case, for example, during rush hour. This is somewhat alleviated by the prospect of autonomous mobility-on-demand systems, in which autonomous vehicles can relocate themselves; however, the relocation strategies proposed so far are centralized and assume that all vehicles belong to the same fleet. Furthermore, ride-sharing is not considered, even though it also affects rebalancing, as already-occupied vehicles can potentially serve new requests simultaneously. In this paper we propose a reinforcement learning-based decentralized approach to vehicle relocation and ride-request assignment in shared mobility-on-demand systems. Each vehicle autonomously learns its behaviour, which includes both rebalancing and selecting which requests to serve, based on its current local demand and observed historical demand. We evaluate the approach using data on taxi use in New York City, first serving a single request per vehicle at a time, and then introducing ride-sharing to evaluate its impact on the learnt rebalancing and assignment behaviour.
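To make the decentralized idea concrete, here is a deliberately tiny, hypothetical sketch of a single vehicle learning where to rebalance with tabular Q-learning. The line of zones, the demand profile and the pickup reward are all invented; the paper's state, action and reward design, and its ride-sharing extension, are much richer.

```python
import random
from collections import defaultdict

ZONES = 5
ACTIONS = [-1, 0, 1]                     # move left, stay, move right
DEMAND = [0.1, 0.2, 0.9, 0.2, 0.1]      # zone 2 is the rush-hour hotspot

def train(episodes=3000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)               # (zone, action) -> value
    for _ in range(episodes):
        zone = rng.randrange(ZONES)
        for _ in range(10):              # short episode
            if rng.random() < eps:       # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(zone, x)])
            nxt = min(max(zone + a, 0), ZONES - 1)
            # Reward: did the vehicle pick up a request in the new zone?
            r = 1.0 if rng.random() < DEMAND[nxt] else 0.0
            best = max(Q[(nxt, x)] for x in ACTIONS)
            Q[(zone, a)] += alpha * (r + gamma * best - Q[(zone, a)])
            zone = nxt
    return Q

Q = train()
policy = {z: max(ACTIONS, key=lambda a: Q[(z, a)]) for z in range(ZONES)}
print(policy)
```

Each vehicle would run such a learner independently, using only locally observable demand, which is what makes the approach decentralized rather than fleet-wide.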
Mobility-on-demand systems consisting of shared autonomous vehicles (SAVs) are expected to improve the efficiency of urban mobility through reduced vehicle ownership and parking demand. However, several issues in their implementation remain open, such as unifying vehicle and ride-sharing assignment with the rebalancing of non-occupied vehicles. Furthermore, proposed SAV systems are typically evaluated in isolation from other traffic: no congestion is taken into account when assigning requests or calculating routes. To address this gap, we present a Shared Autonomous Mobility-on-Demand system (SAMoD), a reinforcement learning-based approach to vehicle relocation and ride-sharing request assignment. Each vehicle learns its pickup and rebalancing behaviour based on current local demand and observed historical demand. We evaluate SAMoD on the Manhattan road network using NYC taxi data in the SUMO microsimulator. We investigate SAMoD's performance in the presence of congestion generated by private vehicles, as well as the impact of different proportions of SAMoD vehicles on overall traffic network performance.
Multi-agent reinforcement learning (MARL) is a widely researched technique for decentralised control in complex large-scale autonomous systems. Such systems often operate in environments that are continuously evolving and where agents' actions are non-deterministic, so-called inherently non-stationary environments. When agents acting on such an environment obtain inconsistent results, learning and adapting is challenging. In this article, we propose P-MARL, an approach that integrates prediction and pattern-change detection abilities into MARL and thus minimises the effect of non-stationarity in the environment. The environment is modelled as a time series, with future estimates provided using prediction techniques. Learning is based on the predicted environment behaviour, with agents employing this knowledge to improve their performance in real time. We illustrate P-MARL's performance in a real-world smart grid scenario, where the environment is heavily influenced by non-stationary power demand patterns from residential consumers. We evaluate P-MARL in three different situations, where agents' action decisions are independent, simultaneous, and sequential. Results show that all methods outperform traditional MARL, with sequential P-MARL achieving the best results.
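The core idea, acting against a forecast of the environment rather than waiting for it to unfold, can be sketched in a few lines. The snippet below is a hypothetical toy: a synthetic half-hourly demand series, a seasonal-naive forecast, and a single agent that greedily picks the predicted low-demand slots to schedule a load (the paper uses full MARL with prediction and change detection, not this greedy rule).

```python
import numpy as np

rng = np.random.default_rng(1)
slots = 48  # half-hour slots in one day

# One week of synthetic half-hourly demand: daily cycle plus noise.
history = np.array([
    50 + 20 * np.sin(2 * np.pi * np.arange(slots) / slots)
    + rng.normal(0, 2, slots)
    for _ in range(7)
])

# Seasonal-naive forecast: average the same slot across past days.
predicted = history.mean(axis=0)

# Agent needs 4 slots of charging; schedule them where demand is
# predicted to be lowest, i.e. learn/act on the forecast environment.
charge_slots = np.argsort(predicted)[:4]
print(sorted(charge_slots.tolist()))
```

In the paper's smart grid scenario many such agents act at once, which is exactly where naive greedy scheduling breaks down (everyone targets the same valley) and coordinated MARL on the predicted environment becomes necessary.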
Large-scale agent-based systems are required to self-optimize towards multiple, potentially conflicting, policies of varying spatial and temporal scope. As a result, not all agents may be implementing all policies at all times, resulting in agent heterogeneity. As agents share their operating environment, significant dependencies can arise between agents and therefore between policy implementations. To address self-optimization in the presence of agent heterogeneity, policy dependency and the lack of global knowledge that is inherent in large-scale decentralized environments, we propose Distributed W-Learning (DWL). DWL is a reinforcement learning (RL)-based algorithm for collaborative agent-based self-optimization towards multiple policies, which relies only on local interactions and learning. We have evaluated the DWL algorithm in a simulation of a self-organizing urban traffic control (UTC) system and show that using DWL can improve the performance of multiple policies deployed simultaneously, even over corresponding single-policy deployments. For example, in UTC, optimizing simultaneously for cars and public transport vehicles reduces the waiting times of cars to 78% of their waiting times in the best-performing single-policy deployment that optimizes for cars only, while also outperforming the widely deployed round-robin and saturation-balancing traffic controllers that we used as baselines.
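The W-learning arbitration at the heart of this approach can be illustrated with a deliberately small, hypothetical example: two policies share one traffic-light agent, each nominates its preferred action, and the losing policy's W-value learns how much reward it forgoes, so that over time the policy with more at stake wins the arbitration. The Q-tables and simplified W-update below are invented; DWL additionally weighs neighbouring agents' policies via cooperation coefficients, which is omitted here.

```python
ACTIONS = ["extend_green", "switch"]

# Hand-crafted Q-values for one state: policy 0 optimises for cars,
# policy 1 for public transport (buses).
Q = [
    {"extend_green": 5.0, "switch": 4.5},  # cars: mild preference
    {"extend_green": 1.0, "switch": 8.0},  # buses: strong preference
]
W = [0.0, 0.0]  # each policy's learned "importance" of getting its way
alpha = 0.3

for _ in range(50):
    # Each policy nominates its greedy action; highest W wins.
    nominees = [max(ACTIONS, key=q.get) for q in Q]
    winner = max(range(2), key=lambda i: W[i])
    chosen = nominees[winner]
    for i in range(2):
        if i != winner:
            # Losing policy's W tracks the payoff it forgoes.
            loss = Q[i][nominees[i]] - Q[i][chosen]
            W[i] += alpha * (loss - W[i])

print(winner, chosen, W)
```

Here the bus policy, which would lose far more reward by deferring, ends up with the higher W-value and wins the arbitration, mirroring how DWL lets a strongly affected policy override a mildly affected one.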
Large-scale, multi-agent systems are too complex for optimal control strategies to be known at design time, so good strategies must be learned at runtime. Learning in such systems, particularly those with multiple objectives, takes a considerable amount of time because of the size of the environment and the dependencies between goals. Transfer Learning (TL) has been shown to reduce learning time in single-agent, single-objective applications. It is the process of sharing knowledge between two learning tasks, called the source and the target; the source task must be completed before the target task begins. This work proposes extending TL to multi-agent, multi-objective applications. To achieve this, an online version of TL called Parallel Transfer Learning (PTL) is presented, and the issues involved in extending this algorithm to a multi-objective form are discussed. The effectiveness of this approach is evaluated in a smart grid scenario, where using PTL significantly accelerates learning: PTL achieves performance comparable to the baseline in one third of the time.
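The distinguishing step of PTL, sharing knowledge while both tasks are still learning rather than after the source has finished, can be sketched as a merge of Q-tables between peers. The `(state, action) -> (Q-value, visit count)` tables and the simple visit-count confidence rule below are hypothetical illustrations; the paper's selection and mapping of transferred knowledge, especially in the multi-objective case, is more involved.

```python
def ptl_merge(local_q, received_q):
    """Accept a peer's Q-value when the peer has visited that
    state-action pair more often (higher confidence) than we have."""
    for sa, (q, visits) in received_q.items():
        _, local_visits = local_q.get(sa, (0.0, 0))
        if visits > local_visits:
            local_q[sa] = (q, visits)
    return local_q

# Agent B has barely explored ("s1", "left"); agent A has.
agent_b = {("s0", "right"): (2.0, 40), ("s1", "left"): (0.1, 2)}
agent_a = {("s1", "left"): (4.5, 35), ("s2", "right"): (1.0, 10)}

ptl_merge(agent_b, agent_a)
print(agent_b[("s1", "left")])   # A's better-visited estimate adopted
print(agent_b[("s0", "right")])  # B's own well-visited estimate kept
```

Because merges like this run periodically during learning, neither agent has to wait for the other to converge, which is what makes the transfer "parallel" rather than the usual sequential source-then-target TL.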