Two theoretical approaches have recently emerged to characterize new digital objects of study in the media landscape: infrastructure studies and platform studies. Despite their separate origins and different features, we demonstrate in this article how the cross-articulation of these two perspectives improves our understanding of current digital media. We use case studies of the Open Web, Facebook, and Google to demonstrate that infrastructure studies provides a valuable approach to the evolution of shared, widely accessible systems and services of the type often provided or regulated by governments in the public interest. Platform studies, in turn, captures how communication and expression are both enabled and constrained by new digital systems and new media. In these environments, platform-based services acquire characteristics of infrastructure, while both new and existing infrastructures are built or reorganized on the logic of platforms. We conclude by underlining the potential of this combined framework for future case studies.
When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata (usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets) serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.
The history of climate modeling begins with conceptual models, followed in the 19th century by mathematical models of energy balance and radiative transfer, as well as simple analog models. Since the 1950s, the principal tools of climate science have been computer simulation models of the global general circulation. From the 1990s to the present, a trend toward increasingly comprehensive coupled models of the entire climate system has dominated the field. Climate model evaluation and intercomparison is changing modeling into a more standardized, modular process, presenting the potential for unifying research and operational aspects of climate science.
The University of Chicago Press and The History of Science Society are collaborating with JSTOR to digitize, preserve, and extend access to Osiris.
ABSTRACT: This chapter explores the history of a global governance institution, the World Meteorological Organization (WMO), from its nineteenth-century origins through the beginnings of a planetary meteorological observing network, the WMO's World Weather Watch (WWW), in the 1960s. This history illustrates a profoundly important transition from voluntarist internationalism, based on shared interests, to quasi-obligatory globalism, based on a more permanent shared infrastructure. The WMO and the WWW thus represent infrastructural globalism, by which "the world" as a whole is produced and maintained (as both object of knowledge and unified arena of human action) through global infrastructures.