This article presents an overview and initial results of a geoweb analysis designed to provide the foundation for a continued discussion of the potential impacts of 'big data' on the practice of critical human geography. While Haklay's (2012) observation that social media content is generated by a small number of 'outliers' is correct, we explore alternative methods and conceptual frameworks that might allow one to overcome the limitations of previous analyses of user-generated geographic information. Though more illustrative than explanatory, the results of our analysis suggest a cautious approach toward the use of the geoweb and big data that is as mindful of their shortcomings as of their potential. More specifically, we propose five extensions to the typical practice of mapping georeferenced data that we call going 'beyond the geotag': (1) going beyond social media that is explicitly geographic; (2) going beyond spatialities of the 'here and now'; (3) going beyond the proximate; (4) going beyond the human to data produced by bots and automated systems; and (5) going beyond the geoweb itself, by leveraging these sources against ancillary data, such as news reports and census data. We see these extensions of existing methodologies as providing the potential for overcoming existing limitations on the analysis of the geoweb. The principal case study focuses on the widely reported riots following the University of Kentucky men's basketball team's victory in the 2012 NCAA championship and their manifestation within the geoweb. Drawing upon a database of archived Twitter activity, including all geotagged tweets since December 2011, we analyze the geography of tweets that used a specific hashtag (#LexingtonPoliceScanner) in order to demonstrate the potential application of our methodological and conceptual program.
By tracking the social, spatial, and temporal diffusion of this hashtag, we show how large databases of such spatially referenced internet content can be used in a more systematic way for critical social and spatial analysis.
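The kind of social, spatial, and temporal diffusion tracking the abstract describes can be sketched as follows. This is a minimal toy illustration, not the authors' pipeline: the city-center coordinates, the 50 km local/distant cutoff, and the three sample tweets are all assumptions introduced for the example; only the hashtag and the Lexington setting come from the abstract.

```python
import math
from collections import Counter
from datetime import datetime

LEXINGTON = (38.0406, -84.5037)  # approximate city center (assumed for the sketch)

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Toy tweets: (timestamp, lat, lon); a real study would draw on the archive.
tweets = [
    (datetime(2012, 4, 2, 23, 5), 38.03, -84.50),   # near campus
    (datetime(2012, 4, 2, 23, 40), 38.20, -84.87),  # nearby town
    (datetime(2012, 4, 3, 1, 15), 39.10, -84.51),   # roughly Cincinnati
]

# Diffusion summary: tweets per hour, bucketed as local (<50 km) or distant.
diffusion = Counter(
    (ts.strftime("%Y-%m-%d %H:00"),
     "local" if haversine_km((lat, lon), LEXINGTON) < 50 else "distant")
    for ts, lat, lon in tweets
)
for bucket, n in sorted(diffusion.items()):
    print(bucket, n)
```

Widening the distance bins and time buckets over a full archive would yield the diffusion curves the abstract alludes to.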
It is sometimes claimed that the degree of polycentricity of an urban region influences that region's competitiveness. However, because of widespread use and policy relevance, the underlying concept of polycentricity has become a 'stretched concept' in urban studies. As a result, academic debate on the topic leads to situations reminiscent of the Tower of Babel. This meta-study of the scientific literature in urban studies traces the conceptual stretching of polycentricity using scientometric methods and content analysis. All published studies that either apply the concept directly or cite a work that does were collected from the Scopus bibliographic database. This resulted in a citation network with over 9,000 works and more than 20,000 citations between them. Network analysis and clustering algorithms were used to identify the most influential papers in different citation clusters within the network. Subsequently, we employed content analysis to systematically assess the mechanisms associated with the formation of polycentric urban systems in each of these papers. Based on this meta-analysis, we argue that the common categorization of polycentricity research into intra-urban, inter-urban and interregional polycentricity is somewhat misleading. More apt categorizations for understanding the origins of polycentricity's conceptual ambiguity relate to the different methodological traditions and geographical contexts in which the research is conducted. Nonetheless, we observe a firm relation across clusters between assessments of polycentricity and different kinds of agglomeration economies. We conclude by proposing a re-conceptualisation of polycentricity based on explicitly acknowledging the variable spatial impact of these different kinds of agglomeration economies.
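The abstract's workflow (build a citation network, partition it into clusters, then find the most influential paper in each cluster) can be sketched roughly as below. The toy edge list and the choice of greedy modularity communities plus PageRank are assumptions for illustration; the paper does not specify which clustering or centrality algorithms it used.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy citation network: edge (a, b) means work a cites work b.
# A real pipeline would load ~9,000 works from a Scopus export instead.
edges = [
    ("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"),
    ("E", "F"), ("G", "F"), ("E", "G"), ("H", "F"),
]
G = nx.DiGraph(edges)

# Detect citation clusters (community detection on the undirected view).
clusters = greedy_modularity_communities(G.to_undirected())

# Rank works by PageRank and report the most influential work per cluster.
scores = nx.pagerank(G)
for i, cluster in enumerate(clusters):
    top = max(cluster, key=scores.get)
    print(f"cluster {i}: top work = {top}")
```

In this toy network the heavily cited works C and F surface as the top of their respective clusters, mirroring how the study singles out influential papers for content analysis.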
A troubling new political economy of geographical intelligence has emerged in the United States over the last two decades. The contours of this new political economy are difficult to identify due to official policies keeping much relevant information secret. The US intelligence community increasingly relies on private corporations, working as contractors, to undertake intelligence work, including geographical intelligence (formally known as GEOINT). In this paper we first describe the geographical intelligence "contracting nexus" consisting of tens of thousands of companies (including those in the GIS and mapping sector), universities, and non-profits receiving Department of Defense and intelligence agency funding. Second, we discuss the "knowledge nexus" to conceptualize the way geographical knowledge figures in current US intelligence efforts, themselves part of the US's war on terror and counterinsurgency (COIN). To analyze the contracting nexus we compiled and examined extensive data on military and intelligence contracts, especially those awarded for satellite data by the country's premier geographical intelligence agency, the National Geospatial-Intelligence Agency (NGA). To analyze the knowledge nexus we examined recent changes in the types of geographical knowledge enrolled in and produced by the US intelligence community. We note a shift from an emphasis on areal and cultural expertise to a focus on calculative, predictive spatial analysis in geographical intelligence. Due to a lack of public oversight and accountability, the new political economy of geographical intelligence is not easy to research, yet there are reasons to be troubled by it and the violent surveillant state it supports.
While exciting, big data (particularly geotagged social media data) has proven difficult for many urbanists and social science researchers to use. As a partial solution, we propose a strategy that enables the rapid extraction of only relevant data from large sets of geosocial data. Contrary to many big data approaches, in which analysis is done on the entire dataset, much productive social science work can use smaller datasets (around the same size as census or survey data) within standard methodological frameworks. The approach we outline in this paper, including the example of a fully operating system, offers a solution for urban researchers interested in these types of data but reluctant to build data science skills themselves.
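The core idea of the abstract, streaming a large geosocial archive and keeping only a small, analysis-ready subset, can be sketched as follows. The CSV schema, the bounding box values, and the function name are assumptions introduced for the example; the hashtag is borrowed from the Lexington case study described earlier in this collection.

```python
import csv
import io

# Assumed schema: each row has latitude, longitude, and text fields.
# Bounding box roughly covering Lexington, KY (illustrative values).
BBOX = (37.9, -84.6, 38.1, -84.4)  # (min_lat, min_lon, max_lat, max_lon)
KEYWORD = "#lexingtonpolicescanner"

def extract_relevant(rows):
    """Stream rows, keeping only those inside the bbox that mention the keyword."""
    min_lat, min_lon, max_lat, max_lon = BBOX
    for row in rows:
        lat, lon = float(row["lat"]), float(row["lon"])
        if (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
                and KEYWORD in row["text"].lower()):
            yield row

# Toy input standing in for a multi-gigabyte geosocial archive.
raw = io.StringIO(
    "lat,lon,text\n"
    "38.05,-84.5,Crowds downtown #LexingtonPoliceScanner\n"
    "40.71,-74.0,Unrelated tweet from New York\n"
)
subset = list(extract_relevant(csv.DictReader(raw)))
print(len(subset))
```

Because the filter is a generator over a stream, it never loads the full archive into memory; the resulting subset is survey-sized and fits standard methodological frameworks.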
How to draw neighborhood boundaries, or spatial regions in general, has been a long‐standing focus in Geography. This article examines this question from a methodological perspective, often referred to as regionalization, with an empirical study of neighborhoods in New York City. I argue that methodological advances, combined with the affordances of big data, enable a different, more nuanced approach to regionalization than has been possible in the past. Conventional data sets often dictate constraints in terms of data availability and spatio‐temporal granularity. However, big data is now available at much finer spatio‐temporal scales and covers a wider array of aspects of social life. The emergence of these data sets supports the notion that neighborhoods can be fuzzy and highly dependent on spatio‐temporal scales and socio‐economic variables. As such, these new data sets can help to bring quantitative analysis in line with social theory that has long emphasized the heterogeneous nature of neighborhoods. This article uses a data set of geotagged tweets to demonstrate how different “sets” of neighborhoods may exist at different spatio‐temporal scales and for different algorithms. Such varying neighborhood boundaries are not a technical problem in need of a solution but rather a reflection of the complexity of the underlying urban fabric.
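The abstract's claim that different spatio-temporal scales and algorithms yield different "sets" of neighborhoods can be illustrated with a deliberately simple regionalization: grid binning of geotagged points at two cell sizes. The points and cell sizes are toy assumptions; grid binning merely stands in for the clustering algorithms the article actually applies to tweets.

```python
from collections import defaultdict

# Toy geotagged points (lat, lon), standing in for a corpus of tweets.
points = [
    (40.733, -73.992), (40.734, -73.993), (40.741, -73.984),
    (40.805, -73.962), (40.806, -73.963),
]

def grid_regions(pts, cell_size):
    """Assign points to square grid cells; each non-empty cell is a 'region'."""
    regions = defaultdict(list)
    for lat, lon in pts:
        key = (int(lat // cell_size), int(lon // cell_size))
        regions[key].append((lat, lon))
    return regions

# The same points yield different neighborhood sets at different scales.
coarse = grid_regions(points, 0.1)   # fewer, larger regions
fine = grid_regions(points, 0.01)    # more, smaller regions
print(len(coarse), len(fine))
```

That the two runs disagree on how many neighborhoods exist is, in the article's terms, not a defect but a reflection of scale-dependent urban structure.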