ResearchGate is a social network site for academics to create their own profiles, list their publications, and interact with each other. Like Academia.edu, it provides a new way for scholars to disseminate their work and hence potentially changes the dynamics of informal scholarly communication. This article assesses whether ResearchGate usage and publication data broadly reflect existing academic hierarchies and whether individual countries are set to benefit or lose out from the site. The results show that rankings based on ResearchGate statistics correlate moderately well with other rankings of academic institutions, suggesting that ResearchGate use broadly reflects the traditional distribution of academic capital. Moreover, while Brazil, India, and some other countries seem to be disproportionately taking advantage of ResearchGate, academics in China, South Korea, and Russia may be missing opportunities to use ResearchGate to maximize the academic impact of their publications.
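The ranking comparison described above is, at its core, a rank correlation between a ResearchGate-derived institutional ranking and a traditional one. A minimal sketch of how such a comparison might be computed, using Spearman's rho implemented in plain Python (the institutional scores below are invented for illustration, not the study's data; ties are not handled):

```python
from statistics import mean

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Assumes strict rankings (no tied values)."""
    def ranks(values):
        # Rank 1 = largest value.
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical institutional scores: ResearchGate-derived vs. a traditional ranking.
rg_scores = [120, 95, 300, 40, 180]
traditional = [90, 70, 250, 30, 100]
rho = spearman_rho(rg_scores, traditional)
```

A rho near 1 would indicate that the two rankings order institutions similarly; "moderate" correlations fall well short of that while remaining clearly positive.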
In this paper we introduce a new data gathering method, "Web/URL citation", and use it together with Google Scholar as a basis to compare traditional and Web-based citation patterns across multiple disciplines. For this, we built a sample of 1,650 articles from 108 Open Access (OA) journals published in 2001 in four science and four social science disciplines. We recorded the number of citations to the sample articles using several methods based upon the ISI Web of Science, Google Scholar, and the Google search engine (Web/URL citations). For each discipline, we found significant correlations between ISI citations and both Google Scholar and Google Web/URL citations, with similar results when using total or average citations, and when comparing within and across (most) journals. We also investigated disciplinary differences. Google Scholar citations were more numerous than ISI citations in our four social science disciplines as well as in computer science, suggesting that Google Scholar is a more comprehensive tool for citation tracking in the social sciences and perhaps also in fast-moving fields where conference papers are highly valued and published online. The results for Web/URL citations suggested that counting a maximum of one hit per site produces a better measure for assessing the impact of OA journals or articles, because replicated web citations are very common within individual sites. The results can be considered additional evidence that there is some commonality between traditional and Web-extracted citations.
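The "one hit per site" rule described above amounts to deduplicating URL citations by hostname before counting, so that many copies of the same citing page within one site are not counted repeatedly. A hypothetical sketch (the URLs are invented; the original study worked from Google search results):

```python
from urllib.parse import urlparse

def citations_one_per_site(urls):
    """Count at most one web citation per site (hostname), so that
    replicated citing pages within a single site are counted once."""
    return len({urlparse(u).netloc.lower() for u in urls})

hits = [
    "http://example.edu/paper/cite1.html",
    "http://example.edu/mirror/cite1.html",   # same site: counted once
    "http://other-univ.ac.uk/refs.html",
]
count = citations_one_per_site(hits)  # 2 distinct citing sites
```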
Academic social network sites Academia.edu and ResearchGate, and reference sharing sites Mendeley, Bibsonomy, Zotero, and CiteULike, give scholars the ability to publicize their research outputs and connect with each other. With millions of users, these are a significant addition to the scholarly communication and academic information-seeking eco-structure. There is thus a need to understand the role that they play and the changes, if any, that they can make to the dynamics of academic careers. This article investigates attributes of philosophy scholars on Academia.edu, introducing a median-based, time-normalizing method to adjust for time delays in joining the site. In comparison to students, faculty tend to attract more profile views, but female philosophers did not attract more profile views than did males, suggesting that academic capital drives philosophers' use of the site more than does friendship and networking. Secondary analyses of law, history, and computer science confirmed the faculty advantage (in terms of higher profile views) except for females in law and females in computer science. There was also a female advantage for both faculty and students in law and computer science, as well as for history students. Hence, Academia.edu overall seems to reflect a hybrid of scholarly norms (the faculty advantage) and a female advantage that is suggestive of general social networking norms. Finally, traditional bibliometric measures did not correlate with any Academia.edu metrics for philosophers, perhaps because more senior academics use the site less extensively or because of the range of informal scholarly activities that cannot be measured by bibliometric methods.
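One plausible reading of a median-based, time-normalizing adjustment (the exact procedure is defined in the article; this sketch, with invented join years and view counts, is only one way such an adjustment could work): divide each user's profile views by the median views of the cohort that joined in the same period, so that late joiners are not penalized for having had less time to accumulate views.

```python
from collections import defaultdict
from statistics import median

def time_normalized_views(profiles):
    """profiles: list of (join_year, views) pairs. Returns each user's
    views divided by the median views of their join-year cohort --
    a sketch of one possible median-based time normalization."""
    cohorts = defaultdict(list)
    for year, views in profiles:
        cohorts[year].append(views)
    medians = {year: median(v) for year, v in cohorts.items()}
    return [views / medians[year] for year, views in profiles]

# Invented data: (join_year, profile_views).
data = [(2010, 400), (2010, 100), (2012, 40), (2012, 10)]
norm = time_normalized_views(data)
```

Under this normalization, a 2012 joiner with 40 views and a 2010 joiner with 400 views get the same score (1.6), because each sits in the same position relative to their cohort's median.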
Citation indicators are increasingly used in some subject areas to support peer review in the evaluation of researchers and departments. Nevertheless, traditional journal-based citation indexes may be inadequate for the citation impact assessment of book-based disciplines. This article examines whether online citations from Google Books and Google Scholar can provide alternative sources of citation evidence. To investigate this, we compared the citation counts to 1,000 books submitted to the 2008 U.K. Research Assessment Exercise (RAE) from Google Books and Google Scholar with Scopus citations across seven book-based disciplines (archaeology; law; politics and international studies; philosophy; sociology; history; and communication, cultural, and media studies). Google Books and Google Scholar citations to books were 1.4 and 3.2 times more common than were Scopus citations, and their medians were more than twice and three times as high as were Scopus median citations, respectively. This large number of citations is evidence that in book-oriented disciplines in the social sciences, arts, and humanities, online book citations may be sufficiently numerous to support peer review for research evaluation, at least in the United Kingdom.

Received March 28, 2011; revised June 16, 2011; accepted June 17, 2011. © 2011 ASIS&T. Published online 26 August 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/asi.21608

Introduction

Books and monographs are primary research outputs in the arts and humanities and in many social sciences (Glänzel & Schoepflin, 1999; Hicks, 2004; Huang & Chang, 2008; Nederhof, 2006), but it is difficult for subject experts to evaluate the quality of books on a large scale because books tend to be much longer than are journal articles. In the context of U.K.
research evaluation, for example, Taylor and Walker (2009) argued that "Given the time constraints facing panel members, it is obvious that not all publications could be considered in detail, and certainly not by more than one panel member in the majority of cases" (p. 3). To support this, there were more than 14,000 monographs overall in the 2008 U.K. Research Assessment Exercise (RAE), 14 per reviewer, but in book-oriented disciplines there were up to 100 books per reviewer (e.g., 1,665 monographs for history's 17 reviewers). While it could be argued that the selective reading of any individual text (which Taylor & Walker implied must occur) may be adequate for its overall quality assessment, this practice seems likely to increase the chance of errors and reviewer susceptibility to extraneous factors such as institutional reputation. Citation analysis has also been widely used for research evaluation, but it has its own problems, errors, and biases (for an in-depth review, see MacRoberts & MacRoberts, 1996). For instance, influential research can be uncited, and even types of influential research can remain uncited within a particular field (MacRoberts & MacRoberts, 2010). Consequently, it seems to be widely accepted in the field th...
The evaluation of research outputs in the form of journal articles is important to help monitor performance and to allocate funds. Elsevier's Scopus and Clarivate's Web of Science (WoS) are the two main sources for identifying outputs. For non-English-speaking countries, it is especially important that most of the scientific activity evaluated is represented in the bibliometric database used. All documents published in Scopus and WoS during 2018 (6,094,079 documents) were therefore analysed and compared for their languages and research areas. The most comprehensive source for each language and research area was identified, and some coverage problems were found.
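Identifying the most comprehensive source per language or research area reduces to comparing document counts between the two databases. A toy sketch with invented counts (the real analysis covered 6,094,079 documents from 2018; these figures are purely illustrative):

```python
def most_comprehensive(counts):
    """counts: {language: {"Scopus": n, "WoS": n}}.
    Returns, for each language, the database with the larger
    document count, i.e. the more comprehensive source."""
    return {lang: max(dbs, key=dbs.get) for lang, dbs in counts.items()}

# Invented illustrative counts, not the study's figures.
counts = {
    "English": {"Scopus": 2_500_000, "WoS": 2_100_000},
    "Spanish": {"Scopus": 60_000, "WoS": 35_000},
    "Russian": {"Scopus": 40_000, "WoS": 55_000},
}
best = most_comprehensive(counts)
```

The same comparison applied per research area would reveal coverage gaps: a language or field whose counts are low in both databases is poorly represented in either.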
Although Mendeley bookmarking counts appear to correlate moderately with conventional citation metrics, it is not known whether academic publications bookmarked in Mendeley are actually read. Without this information, it is not possible to give a confident interpretation of altmetrics derived from Mendeley. In response, a survey of 860 Mendeley users shows that it is reasonable to use Mendeley bookmarking counts as an indication of readership because most (55%) users with a Mendeley library had read or intended to read at least half of their bookmarked publications. This was true across all broad areas of scholarship except for the arts and humanities (42%). About 85% of the respondents also declared that they bookmarked articles in Mendeley in order to cite them in their publications, but some also bookmarked articles for use in professional (50%), teaching (25%), and educational (13%) activities. Of course, it is likely that most readers do not record articles in Mendeley, and so these data do not represent all readers. In conclusion, Mendeley bookmark counts seem to be indicators of readership leading to a combination of scholarly impact and wider professional impact.
Academics can now use the web and social websites to disseminate scholarly information in a variety of different ways. Although some scholars have taken advantage of these new online opportunities, it is not clear how widespread their uptake is or how much impact they can have. This study assesses the extent to which successful scientists have social web presences, focusing on one influential group: highly cited researchers working at European institutions. It also assesses the impact of these presences. We manually and systematically identified whether the European highly cited researchers had profiles in Google Scholar, Microsoft Academic Search, Mendeley, Academia and LinkedIn, or any content in SlideShare. We then used URL mentions and altmetric indicators to assess the impact of the web presences found. Although most of the scientists had an institutional website of some kind, few had created a profile in any social website investigated, and LinkedIn, the only non-academic site in the list, was the most popular. Scientists having one kind of social web profile were more likely to have another in many cases, especially in the life sciences and engineering. In most cases it was possible to estimate the relative impact of the profiles using a readily available statistic, and there were disciplinary differences in the impact of the different kinds of profiles. Most social web profiles had some evidence of uptake, if not impact; nevertheless, the value of the indicators used is unclear.