We study how scholarly collaboration varies across disciplines in the sciences, social sciences, arts, and humanities, and the effects of author collaboration on the impact and quality of co-authored papers. Impact is measured by the citations papers collect, while quality is determined by the judgements expressed by peer reviewers. To this end, we take advantage of the dataset provided by the first-ever national research assessment exercise of Italian universities, which involved 20 disciplinary areas, 102 research structures, 18,500 research products, and 6,661 peer reviewers. Collaboration intensity varies markedly across disciplines: it is inescapable in most sciences and negligible in most humanities. We measured a generally positive association between the cardinality of the author set of a paper and the citation impact and peer-assessed quality of the contribution. The correlation is stronger when the affiliations of the authors are heterogeneous. There exist, however, notable and interesting counter-examples.
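The reported association between the size of a paper's author set and its citation impact can be probed with a rank correlation. A minimal sketch in Python using Spearman's rank correlation; the author counts and citation counts below are purely illustrative, not data from the assessment exercise:

```python
def ranks(values):
    """Rank positions (1 = smallest); tied values get the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented sample: authors per paper and citations per paper.
authors = [1, 2, 2, 3, 4, 5, 6]
citations = [2, 5, 4, 9, 8, 15, 20]
print(spearman(authors, citations))  # strongly positive on this toy data
```

A coefficient near +1 indicates that papers with more authors tend to collect more citations; the study's actual analysis is, of course, richer than this sketch.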
Given the current availability of different bibliometric indicators and of multiple production and citation data sources, two questions immediately arise: do the indicators' scores differ when computed on different data sources? More importantly, do the indicator-based rankings change significantly when computed on different data sources? We provide a case study of computer science scholars and journals evaluated on the Web of Science and Google Scholar databases. The study concludes that Google Scholar computes significantly higher indicator scores than Web of Science. Nevertheless, citation-based rankings of both scholars and journals do not change significantly across the two data sources, while rankings based on the h index show a moderate degree of variation.
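Both steps of such a comparison are straightforward to compute from raw data. A minimal sketch: the h index from a list of per-paper citation counts, and Kendall's rank correlation to quantify how much two rankings agree. The citation counts and scholar rankings below are invented placeholders for what the two data sources might report:

```python
def h_index(citations):
    """Largest h such that the scholar has h papers with >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same items;
    each argument maps item -> rank position (1 = best)."""
    items = list(rank_a)
    concordant = discordant = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            x, y = items[i], items[j]
            s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    pairs = len(items) * (len(items) - 1) / 2
    return (concordant - discordant) / pairs

# Invented per-paper citation counts for one scholar on two sources.
wos = [12, 9, 7, 5, 3, 1]      # Web of Science (hypothetical)
gs = [25, 18, 11, 9, 6, 4, 2]  # Google Scholar (hypothetical)
print(h_index(wos), h_index(gs))  # prints 4 5: the richer source scores higher

# Invented scholar rankings produced on the two sources.
rank_wos = {"S1": 1, "S2": 2, "S3": 3, "S4": 4}
rank_gs = {"S1": 1, "S2": 3, "S3": 2, "S4": 4}
print(kendall_tau(rank_wos, rank_gs))  # 2/3: one swapped pair lowers agreement
```

A tau near 1 means the two sources produce essentially the same ranking even when their raw scores differ, which is the pattern the study reports for citation-based rankings.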
Co-authorship in publications within a discipline uncovers interesting properties of the analyzed field. We represent collaboration in academic papers of computer science as networks of different granularity, namely affiliation and collaboration networks. We also build the sub-networks that emerge from conference co-authorship or journal co-authorship alone. We use the toolbox of network science to draw a picture of computer science collaboration covering all papers published in the field since 1936. Furthermore, we observe how collaboration in computer science has evolved over time since 1960. We investigate bibliometric properties such as the size of the discipline, the productivity of scholars, and the collaboration level in papers, as well as global network properties such as reachability and average separation distance among scholars, the distribution of the number of scholar collaborators, network resilience and dependence on star collaborators, network clustering, and network assortativity by number of collaborators.
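Several of the network properties listed above fall out directly of paper author lists. A minimal sketch in plain Python, on an invented five-paper dataset (the real study spans the whole discipline): build the collaboration network, then compute the separation distance between two scholars and the local clustering of a scholar's collaborators.

```python
from collections import defaultdict, deque

# Toy dataset: each entry is the author set of one paper (illustrative only).
papers = [
    {"A", "B"}, {"A", "C"}, {"B", "C"}, {"C", "D"}, {"D", "E"},
]

# Collaboration network: scholars are nodes; an edge links two scholars
# who co-authored at least one paper.
graph = defaultdict(set)
for authors in papers:
    for a in authors:
        for b in authors:
            if a != b:
                graph[a].add(b)

def separation(graph, src, dst):
    """Shortest co-authorship distance between two scholars (BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable: the scholars sit in different components

def clustering(graph, node):
    """Fraction of a scholar's collaborator pairs who also collaborated."""
    nbrs = list(graph[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2 * links / (k * (k - 1))

print(separation(graph, "A", "E"))  # prints 3: the path A-C-D-E
print(clustering(graph, "C"))       # 1/3: of C's pairs {A,B},{A,D},{B,D}, only A-B collaborated
```

Averaging `separation` over all node pairs and `clustering` over all nodes gives the average separation distance and the network clustering coefficient studied in the paper; degree (the size of `graph[x]`) gives the number of collaborators per scholar.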
A bibliometric view of the publishing frequency and impact of conference proceedings compared to archival journal publication. The role of conference publications in computer science is controversial. Conferences have the undeniable advantages of providing fast and regular publication of papers and of bringing researchers together, offering the opportunity to present and discuss work with peers. These features are particularly important because computer science is a relatively young and fast-evolving discipline. The fundamental role of conferences in computer science is strongly emphasized in the best-practices memo for evaluating computer scientists and engineers for promotion and tenure published in 1999 by the U.S. Computing Research Association (CRA) and, more recently, in a study by Informatics Europe, whose preliminary results are summarized in Choppy et al. Recently, Communications published a series of thought-provoking Viewpoint columns and letters that swim against the tide. These contributions highlight many flaws of the conference system, in particular when compared to archival journals, and also suggest a game-based solution to scale the academic publication process to the size of the Internet. Some of the mentioned flaws are: short time for referees to review papers, a limited number of pages for publication, limited time for authors to polish a paper after receiving reviewer comments, and overload of the best researchers as reviewers on conference program committees. The result is a deadline-driven publication system, in which "we submit a paper when we reach an appropriate conference deadline instead of when the research has been properly fleshed out," one that "encourages and rewards production of publishing quarks---units of intellectual endeavor that can be generated, summarized, and reviewed in a calendar year" (interestingly, the author of the latter claim is CRA Board Chair Dan Reed).
Furthermore, the current conference system "leads to an emphasis on safe papers (incremental and technical) versus those that explore new models and research directions outside the established core areas of the conferences." "And arguably it is the more innovative papers that suffer, because they are time consuming to read and understand, so they are the most likely to be either completely misunderstood or underappreciated by an increasingly error-prone process." Are we driving on the wrong side of the publication road? The question was raised by Moshe Vardi in a May 2009 Communications editor's letter. This article offers an alternative view on this hot issue: the bibliometric perspective. Bibliometrics has become a standard tool of science policy and research management over the last few decades. In particular, academic institutions increasingly rely on bibliometric analysis for decisions regarding hiring, promotion, tenure, and funding of scholars. I investigate the frequency and impact of conference publications in computer science and compare them with journal articles. I stratify the set of computer science publications by author, topic, and nation; in particular, I analyze the publications of the most prolific, most popular, and most prestigious scholars in computer science.