Getting people cycling is an increasingly common objective in transport planning institutions worldwide. A growing evidence base indicates that high quality infrastructure can boost local cycling rates. Yet for infrastructure and other cycling measures to be effective, it is important to intervene in the right places, such as along 'desire lines' of high latent demand. This creates the need for tools and methods to help answer the question 'where to build?'. Following a brief review of the policy and research context related to this question, this paper describes the design, features and potential applications of such a tool. The Propensity to Cycle Tool (PCT) is an online, interactive planning support system that was initially developed to explore and map cycling potential across England (see www.pct.bike). Based on origin-destination data, it models cycling levels at area, desire line, route and route network levels, for current levels of cycling, and for scenario-based 'cycling futures.' Four scenarios are presented, including 'Go Dutch' and 'Ebikes,' which explore what would happen if English people had the same propensity to cycle as Dutch people and the potential impact of electric cycles on cycling uptake. The cost effectiveness of investment depends not only on the number of additional trips cycled, but on wider impacts such as health and carbon benefits. The PCT reports these at area, desire line, and route level for each scenario. The PCT is open source, facilitating the creation of scenarios and deployment in new contexts. We conclude that the PCT illustrates the potential of online tools to inform transport decisions and raises the wider issue of how models should be used in transport planning.
Background
Planners and politicians in many countries seek to increase the proportion of trips made by cycling. However, this is often challenging. In England, a national target to double cycling by 2025 is likely to be missed: between 2001 and 2011 the proportion of commutes made by cycling barely grew. One important contributory factor is continued low investment in cycling infrastructure, by comparison to European leaders.
Methods
This paper examines barriers to cycling investment, arguing that these barriers must be better understood in order to explain failures to increase cycling levels. It is based on qualitative data from an online survey of over 400 stakeholders, alongside seven in-depth interviews.
Results
Many respondents reported that change continues to be blocked by chronic barriers including a lack of funding and leadership. Participants provided insights into how challenges develop over the life of a scheme. In authorities where little consideration is given to cycling provision, media and public opposition were not reported as a major issue. However, where planning and implementation have begun, this can change quickly, although examples were given of schemes that successfully proceeded despite such opposition. The research points to a growing gap between authorities that have overcome key challenges and those that have not.
Iterative proportional fitting (IPF) is a widely used method for spatial microsimulation. The technique results in non-integer weights for individual rows of data. This is problematic for certain applications and has led many researchers to favour combinatorial optimisation approaches such as simulated annealing. An alternative to this is 'integerisation' of IPF weights: the translation of the continuous weight variable into a discrete number of unique or 'cloned' individuals. We describe four existing methods of integerisation and present a new one. Our method, 'truncate, replicate, sample' (TRS), recognises that IPF weights consist of both 'replication weights' and 'conventional weights', the effects of which need to be separated. The procedure consists of three steps: 1) separation of replication and conventional weights by truncation; 2) replication of individuals with positive integer weights; and 3) probabilistic sampling. The results, which are reproducible using supplementary code and data published alongside this paper, show that TRS is fast, and more accurate than alternative approaches to integerisation.
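The three steps of TRS described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' published supplementary code; the function name and NumPy-based approach are assumptions made for the example.

```python
import numpy as np

def trs_integerise(weights, seed=0):
    """Truncate, replicate, sample (TRS) integerisation of IPF weights.

    Returns an array of row indices, with each individual appearing an
    integer number of times, so that the total matches the (rounded)
    sum of the input weights.
    """
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)

    # 1) Truncate: split each weight into its integer ('replication')
    #    part and its fractional ('conventional') part.
    int_part = np.floor(weights).astype(int)
    frac_part = weights - int_part

    # 2) Replicate: clone each individual by its integer weight.
    replicated = np.repeat(np.arange(len(weights)), int_part)

    # 3) Sample: fill the remaining places probabilistically, with
    #    probability proportional to the fractional weights.
    n_extra = int(round(weights.sum())) - int_part.sum()
    if n_extra > 0:
        extra = rng.choice(len(weights), size=n_extra, replace=False,
                           p=frac_part / frac_part.sum())
        replicated = np.concatenate([replicated, extra])
    return replicated
```

For example, weights of `[0.3, 1.2, 2.5]` (sum 4.0) deterministically yield one copy of individual 1 and two of individual 2 from the replication step, with the fourth place filled by sampling in proportion to the fractional parts 0.3, 0.2 and 0.5.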
Summary
osmdata imports OpenStreetMap (OSM) data into R as either Simple Features or R Spatial objects, respectively able to be processed with the R packages sf and sp. OSM data are extracted from the Overpass API and processed with very fast C++ routines for return to R. The package enables simple Overpass queries to be constructed without the user necessarily understanding the syntax of the Overpass query language, while retaining the ability to handle arbitrarily complex queries. Functions are also provided to enable recursive searching between different kinds of OSM data (for example, to find all lines which intersect a given point). The package is faster than current alternatives for importing OSM data into R and is the only one compatible with sf.
There has been much excitement among quantitative geographers about newly available data sets, characterized by high volume, velocity, and variety. This phenomenon is often labeled as "Big Data" and has contributed to methodological and empirical advances, particularly in the areas of visualization and analysis of social networks. However, a fourth 'v', veracity (or lack thereof), has been conspicuously lacking from the literature. This article sets out to test the potential for verifying large data sets. It does this by cross-comparing three unrelated estimates of retail flows, that is, human movements from home locations to shopping centers, derived from the following geo-coded sources: (1) a major mobile telephone service provider; (2) a commercial consumer survey; and (3) geotagged Twitter messages. Three spatial interaction models also provided estimates of flow: constrained and unconstrained versions of the "gravity model" and the recently developed "radiation model." We found positive relationships between all data-based and theoretical sources of estimated retail flows. Based on the analysis, the mobile telephone data fitted the modeled flows and consumer survey data closely, while flows obtained directly from the Twitter data diverged from other sources. The research highlights the importance of verification in flow data derived from new sources and demonstrates methods for achieving this.