In this paper I attempt to answer the question: What is interdisciplinary communication? I attempt to answer this question, rather than what some might consider the ontologically prior question, 'What is interdisciplinarity (ID)?', for two reasons: (1) there is no generally agreed-upon definition of ID; and (2) one's views regarding interdisciplinary communication have a normative relationship with one's other views of ID, including one's views of its very essence. I support these claims with reference to the growing literature on ID, which has a marked tendency to favor the idea that interdisciplinary communication entails some kind of 'integration'. The literature on ID does not yet include very many philosophers, but we have something valuable to offer in addressing the question of interdisciplinary communication. Playing somewhat fast and loose with traditional categories of the subdisciplines of philosophy, I group some philosophers (mostly from the philosophy of science, social-political philosophy, and moral theory) and some non-philosophers together to provide three different, but related, answers to the question of interdisciplinary communication. The groups are as follows: (1) Habermas-Klein, (2) Kuhn-MacIntyre, and (3) Bataille-Lyotard. These groups can also be thought of in terms of the types of answers they give to the question of interdisciplinary communication, especially in terms of the following key words (where the numbers correspond to the groups in the previous sentence): (1) consensus, (2) incommensurability, and (3) invention.
The increasing pursuit of replicable research, and of actual replication of research, is a political project that articulates a very specific technology of accountability for science. This project was initiated in response to concerns about the openness and trustworthiness of science. Though replication is applicable and valuable in many fields, we argue here that its value cannot be extended everywhere, since both the epistemic content of fields and their accountability infrastructures differ. Furthermore, we argue that there are limits to replicability in all fields; in some fields, including parts of the humanities, these limits severely undermine the ability of replication to account for the value of research.
Currently, established research evaluation focuses on scientific impact, that is, the impact of research on science itself. We discuss extending research evaluation to cover productive interactions and the impact of research on practice and society. The results are based on interviews with scientists from (organic) agriculture and a review of the literature on broader/social/societal impact assessment and on the evaluation of interdisciplinary and transdisciplinary research. There is broad agreement about which activities and impacts of research are relevant for such an evaluation. However, the extension of research evaluation is hampered by a lack of easily usable data. To reduce the effort involved in data collection, the usability of existing documentation procedures (e.g., proposals and reports for research funding) needs to be increased. We propose a structured database for the evaluation of scientists, projects, programmes and institutions, one that will require little additional effort beyond existing reporting requirements.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.