Review, promotion, and tenure (RPT) processes significantly affect how faculty direct their career and scholarly progression. Although RPT practices vary between and within institutions, and affect disciplines, ranks, institution types, genders, and ethnicities in different ways, some consistent themes emerge when investigating what faculty would like to change about RPT. For instance, over the last few decades, RPT processes have generally increased the value placed on research at the expense of teaching and service, which often results in an incongruity between how faculty actually spend their time and what is considered in their evaluation. Another issue relates to publication practices: most agree that RPT requirements should encourage high-quality, peer-reviewed works, but in practice the value of publications is often assessed using shortcuts, such as the prestige of the publication venue, rather than the quality and rigor of peer review of each individual item. Open access and online publishing have made these issues even murkier, owing to misconceptions about peer review practices and concerns about predatory online publishers, which leaves traditional publishing formats the most desired despite their restricted circulation. Efforts to replace journal-level measures such as the impact factor with more precise article-level metrics (e.g., citation counts and altmetrics) have also been slow to integrate with the RPT process. Questions remain as to whether, or how, RPT practices should be changed to better reflect faculty work patterns and reduce the pressure to publish only in the most prestigious traditional formats. To determine the most useful way to change RPT, we need to further assess the needs and perceptions of faculty and administrators, and gain a better understanding of the influence of written RPT guidelines and policies in an often vague process that is meant to allow flexibility in assessing individuals.
Much of the work done by faculty at both public and private universities has significant public dimensions: it is often paid for by public funds; it is often aimed at serving the public good; and it is often subject to public evaluation. To understand how the public dimensions of faculty work are valued, we analyzed review, promotion, and tenure documents from a representative sample of 129 universities in the US and Canada. Terms and concepts related to public and community are mentioned in a large portion of the documents, but mostly in ways that relate to service, an undervalued aspect of academic careers. Moreover, the documents make significant mention of traditional research outputs and citation-based metrics; however, such outputs and metrics reward faculty work targeted at academics and often disregard its public dimensions. Institutions that seek to embody their public mission could therefore work toward changing how faculty work is assessed and incentivized.
The Journal Impact Factor (JIF) was originally designed to help libraries decide which journals to index and purchase for their collections. Over the past few decades, however, it has become a widely relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and cite reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used in review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities in the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master's, or comprehensive (M-type), institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type), institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to refer to the JIF indirectly. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric's use in at least one of their RPT documents, while 13% expressed caution about its use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.
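The abstract above describes flagging RPT documents that explicitly mention the JIF or closely related terms. The study's actual coding scheme is not given in the abstract, so the term list below is purely illustrative; a minimal sketch of such keyword flagging might look like:

```python
import re

# Hypothetical term list: the study's real set of "closely related terms"
# is not specified in the abstract, so these patterns are examples only.
JIF_PATTERN = re.compile(
    r"\b(journal impact factor|impact factor|JIF)\b",
    re.IGNORECASE,
)

def mentions_jif(document_text):
    """Return True if an RPT document explicitly mentions the JIF
    or one of the example related terms above."""
    return bool(JIF_PATTERN.search(document_text))
```

In practice, a detection pass like this would only surface candidate documents; as the study notes, indirect references require a qualitative reading to identify.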
Despite growing interest in Open Access (OA) to scholarly literature, there is an unmet need for large-scale, up-to-date, and reproducible studies assessing the prevalence and characteristics of OA. We address this need using oaDOI, an open online service that determines OA status for 67 million articles. We use three samples, each of 100,000 articles, to investigate OA in three populations: 1) all journal articles assigned a Crossref DOI, 2) recent journal articles indexed in Web of Science, and 3) articles viewed by users of Unpaywall, an open-source browser extension that lets users find OA articles using oaDOI. We estimate that at least 28% of the scholarly literature is OA (19M articles in total) and that this proportion is growing, driven particularly by growth in Gold and Hybrid OA. The most recent year analyzed (2015) also has the highest percentage of OA (45%). Because of this growth, and the fact that readers disproportionately access newer articles, we find that Unpaywall users encounter OA quite frequently: 47% of the articles they view are OA. Notably, the most common mechanism for OA is not Gold, Green, or Hybrid OA, but rather an under-discussed category we dub Bronze: articles made free to read on the publisher's website without an explicit open license. We also examine the citation impact of OA articles, corroborating the so-called open-access citation advantage: accounting for age and discipline, OA articles receive 18% more citations than average, an effect driven primarily by Green and Hybrid OA. We encourage further research using the free oaDOI service as a way to inform OA policy and practice.
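The category definitions in the abstract (Gold, Green, Hybrid, and the newly named Bronze) imply a simple decision procedure over an article's access attributes. The field names below are assumptions for illustration, not the actual oaDOI data model; a minimal sketch of that classification, under those assumptions, might be:

```python
def classify_oa(is_oa, journal_is_oa, host_type, oa_license):
    """Classify an article into the OA categories described in the study.

    Assumed inputs (hypothetical names, not the oaDOI schema):
      is_oa         -- a free-to-read copy exists somewhere
      journal_is_oa -- the journal itself is fully open access
      host_type     -- where the free copy lives: "publisher" or "repository"
      oa_license    -- open license string, or None if no explicit license
    """
    if not is_oa:
        return "Closed"
    if journal_is_oa:
        return "Gold"    # published in a fully OA journal
    if host_type == "repository":
        return "Green"   # free copy deposited in a repository
    if oa_license:
        return "Hybrid"  # open license in an otherwise toll-access journal
    return "Bronze"      # free on the publisher site, no explicit license
```

The key point the sketch captures is that Bronze is the residual case: publisher-hosted and free to read, but lacking the explicit open license that would make it Hybrid.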
In this article, we investigate the surge in use of COVID-19-related preprints by media outlets. Journalists are a main source of reliable public health information during crises and, until recently, have been reluctant to cover preprints because of the associated scientific uncertainty. Yet uploads of COVID-19 preprints and their uptake by online media have outstripped those of preprints on any other topic. Using an innovative approach combining altmetrics methods with content analysis, we identified a diversity of outlets covering COVID-19-related preprints during the early months of the pandemic, including specialist medical news outlets, traditional news media outlets, and aggregators. We found a ubiquity of hyperlinks as citations and a multiplicity of framing devices for highlighting the scientific uncertainty associated with COVID-19 preprints. These devices were rarely used consistently (e.g., mentioning that the study was a preprint, unreviewed, preliminary, and/or in need of verification). About half of the stories we analyzed contained framing devices emphasizing uncertainty. Outlets in our sample were much less likely to identify the research they mentioned as preprint research than to identify it simply as "research." This work has significant implications for public health communication within the changing media landscape. While current best practices in public health risk communication promote identifying and promoting trustworthy sources of information, the uptake of preprint research by online media presents new challenges. At the same time, it provides new opportunities for fostering greater awareness of the scientific uncertainty associated with health research findings.
Purpose: This study aims to contribute to the understanding of how the potential of altmetrics varies around the world by measuring the percentage of articles with non-zero metrics (coverage) for articles published in a developing region (Latin America). Design/methodology/approach: This study uses article metadata from a prominent Latin American journal portal, SciELO, and combines it with altmetrics data from Altmetric.com and with data collected by author-written scripts. The study is primarily descriptive, focusing on coverage levels disaggregated by year, country, subject area, and language. Findings: Coverage levels for most of the social media sources studied were zero or negligible. Only three metrics had coverage levels above 2%: Mendeley, Twitter, and Facebook. Of these, Twitter showed the most significant differences from previous studies. Mendeley coverage levels eventually reach those found by previous studies, but it takes up to two years longer for articles to be saved in the reference manager; for the most recent year, coverage was less than half of what was found in previous studies. Facebook coverage levels appear similar (around 3%) to those of previous studies. Research limitations/implications: The Altmetric.com data used for some of the analyses were collected over a six-month period. For other analyses, Altmetric.com data were only available for a single country (Brazil). Originality/value: The results of this study have implications for the altmetrics research community and for any stakeholders interested in using altmetrics for evaluation. They suggest the need for careful sample selection when making generalizable claims about altmetrics.
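The coverage measure used in the study above is defined as the percentage of articles with non-zero counts for a given metric. A minimal sketch of that calculation, assuming counts have already been retrieved per article:

```python
def coverage(event_counts):
    """Percentage of articles with a non-zero count for one metric
    (e.g., Mendeley readers or tweets), as defined in the study.

    event_counts -- one integer count per article in the sample.
    """
    if not event_counts:
        return 0.0
    covered = sum(1 for count in event_counts if count > 0)
    return 100 * covered / len(event_counts)
```

Disaggregating by year, country, subject area, or language, as the study does, amounts to applying this same calculation to each subsample.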
The growing presence of research shared on social media, coupled with the increase in freely available research, invites us to ask whether scientific articles shared on platforms like Twitter diffuse beyond the academic community. We explore a new method for answering this question by identifying 11 articles from two open access biology journals that were shared on Twitter at least 50 times and by analyzing the follower network of the users who tweeted each article. We find that the diffusion patterns of scientific articles can take very different forms, even when the number of times they are tweeted is similar. Our small case study suggests that most articles are shared within single connected communities with limited diffusion to the public. The proposed approach and indicators can help those interested in the public understanding of science, science communication, or research evaluation identify when research diffuses beyond insular communities.
Purpose: Social annotation (SA) is a genre of learning technology that enables the annotation of digital resources for information sharing, social interaction, and knowledge production. This case study examines the perceived value of SA in multiple undergraduate courses. Design/methodology/approach: Fifty-nine students in three upper-level undergraduate courses at a Canadian university participated in SA-enabled learning activities during the winter 2019 semester. A survey was administered to measure how SA contributed to students' perceptions of learning and sense of community. Findings: A majority of students reported that SA supported their learning despite differences in course subject, how SA was incorporated and encouraged, and how widely SA was used during course activities. While findings about the perceived value of SA as contributing to course community were mixed, students reported that peer annotations aided comprehension of course content, confirmation of ideas, and engagement with diverse perspectives. Research limitations/implications: Studies about the relationships among SA, learning, and student perception should continue to engage learners from multiple courses and multiple disciplines, with indicators of perception measured using reliable instrumentation. Practical implications: Researchers and faculty should carefully consider how the technical, instructional, and social aspects of SA may be used to enable course-specific, personal, and peer-supported learning. Originality/value: This study found greater variance in how undergraduate students perceived SA as contributing to course community. Most students also perceived their own and peer annotations as productively contributing to learning. This study offers a more complete view of the social factors that affect how SA is perceived by undergraduate students.