In the past fifteen years, a great deal has been learned about the particular challenges of distant collaboration. Overall, we have learned that even when advanced technologies are available, distance still matters (Olson and Olson 2000). In addition, a recent seminal study of sixty-two projects sponsored by the National Science Foundation (NSF) showed that the major indicator of lower success was the number of institutions involved (Cummings and Kiesler 2005; chapter 5, this volume). The greater the number of institutions involved, the less well coordinated a project was and the fewer the positive outcomes. There are a number of reasons for these challenges. For one, distance threatens context and common ground (Cramton 2001). Second, trust is more difficult to establish and maintain when the collaborators are separated from each other (Shrum, Chompalov, and Genuth 2001; Kramer and Tyler 1995). Third, poorly designed incentive systems can inhibit collaborations and prevent the adoption of new collaboration technology (Orlikowski 1992; Grudin 1988). Finally, organizational structures and governance systems, along with the nature of the work, can either contribute to or inhibit collaboration (Larson et al. 2002; Mazur and Boyko 1981; Hesse et al. 1993; Sonnenwald 2007).

This chapter describes our attempt to synthesize these findings and enumerate those factors that we (and others) believe are important in determining the success of remote collaboration in science. In working toward a theory of remote scientific collaboration (TORSC), we have drawn from data collected as part of the Science of Collaboratories (SOC) project, studies in the sociology of science, and investigations of distance collaboration in general.

The Developing Theory

Success

We begin by discussing what we might mean by success in remote collaboration, since in the literature it can vary from revolutionary new thinking in the science to simply having some new software used.
Different sets of factors may lead to different kinds of success. These outputs include effects on the science itself, science careers, learning and science education, funding and public perception, and inspiration to develop new collaboratories and new collaborative tools. The details are listed in short form in table 4.1.

Effects on the Science Itself

Early goals for collaboratories included increasing productivity and the number of participants, and democratizing science through improved access to elite researchers (Finholt and Olson 1997; Hesse et al. 1993; Walsh and Bayma 1996). Similar assumptions were made with regard to interdisciplinary research (Steele and Stier 2000). These goals have to date not been tested. Today, scholars, policymakers, and scientists no longer take these assumptions for granted. Increasingly, they recognize that defining and evaluating the success of distributed and large-scale scientific collaborations is a complex task. Traditional measures of success in science are geared toward the individual and include metrics such as productivity...
This article analyzes the experiences of ecologists who used data they did not collect themselves. Specifically, the author examines the processes by which ecologists understand and assess the quality of the data they reuse, and investigates the role that standard methods of data collection play in these processes. Standardization is one means by which scientific knowledge is transported from local to public spheres. While standards can be helpful, the results show that knowledge of the local context is critical to ecologists' reuse of data. Yet, this information is often left behind as data move from the private to the public world. The knowledge that ecologists acquire through fieldwork enables them to recover the local details that are so critical to their comprehension of data collected by others. Social processes also play a role in ecologists' efforts to judge the quality of data they reuse.
There is almost universal agreement that scientific data should be shared for use beyond the purposes for which they were initially collected. Access to data enables system-level science, extends the instruments and products of research to new communities, and advances solutions to complex human problems. While demands for data are not new, the vision of open access to data is increasingly ambitious. The aim is to make data accessible and usable to anyone, anytime, anywhere, and for any purpose. Until recently, scholarly investigations related to data sharing and reuse were sparse. They have become more common as technology and instrumentation have advanced, policies that mandate sharing have been implemented, and research has become more interdisciplinary. Each of these factors has contributed to what is commonly referred to as the "data deluge." Most discussions about increases in the scale of sharing and reuse have focused on growing amounts of data. Other issues related to open access to data also concern scale but have not been as widely discussed: broader participation in data sharing and reuse, increases in the number and types of intermediaries, and more digital data products. The purpose of this paper is to develop a research agenda for scientific data sharing and reuse that considers these three areas.
Promoting affiliation between scientists is relatively easy, but creating larger organizational structures is much more difficult, due to traditions of scientific independence, difficulties of sharing implicit knowledge, and formal organizational barriers. The Science of Collaboratories (SOC) project conducted a broad five-year review to take stock of the diverse ecosystem of projects that fit our definition of a collaboratory and to distill lessons learned in the process. This article describes one of the main products of that review, a seven-category taxonomy of collaboratory types. The types are: Distributed Research Centers, Shared Instruments, Community Data Systems, Open Community Contribution Systems, Virtual Communities of Practice, Virtual Learning Communities, and Community Infrastructure Projects. Each of the types is defined and illustrated with one example, and key technical and organizational issues are identified.