Data sharing by researchers is a centerpiece of Open Science principles and of scientific progress. For a sample of 6,019 researchers, we analyze the extent/frequency of their data sharing. Specifically, we examine its relationship with the following four variables: how much they value data citations, the extent to which their data-sharing activities are formally recognized, their perceptions of whether sufficient credit is awarded for data sharing, and the reported extent to which data citations motivate their data sharing. In addition, we analyze the extent to which researchers have reused openly accessible data, as well as how data sharing varies by professional age cohort and how it relates to the value they place on data citations. Furthermore, we consider most of the explanatory variables simultaneously by estimating a multiple linear regression that predicts the extent/frequency of data sharing. We use the dataset of the State of Open Data Survey 2019 by Springer Nature and Digital Science. The results allow us to conclude that a desire for recognition/credit is a major incentive for data sharing. Thus, the possibility of receiving data citations is highly valued when sharing data, especially among younger researchers, irrespective of the frequency with which it is practiced. Finally, data sharing was found to be more prevalent at late research career stages, despite this being when citations are less valued and have a lower motivational impact. This could be because later-career researchers may benefit less from keeping their data private.
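The multiple linear regression mentioned above could be sketched as follows. This is a minimal illustration only: the predictor names, the synthetic data, and the ordinary-least-squares formulation are assumptions for demonstration, not the survey's actual variables or results.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    # Build the normal-equation system
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical rows: [intercept, value_of_citations, formal_recognition, perceived_credit]
X = [[1, 4, 2, 3], [1, 5, 3, 4], [1, 2, 1, 2], [1, 3, 2, 2], [1, 5, 4, 5], [1, 1, 1, 1]]
# Synthetic responses generated from known coefficients so the sketch is checkable
y = [0.5 + 0.6 * a + 0.2 * b + 0.1 * c for _, a, b, c in X]
beta = ols(X, y)  # recovers [0.5, 0.6, 0.2, 0.1]
```

In practice a survey analysis of this kind would use a statistics package rather than hand-rolled elimination; the sketch only makes the "predict sharing frequency from several attitudinal variables at once" step concrete.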
Since Lawrence proposed the open access (OA) citation advantage in 2001, the potential benefit of OA for citation impact has been discussed in depth. The methodology used to test this postulate ranges from comparing the impact factors of OA journals versus traditional ones to comparing citations of OA versus non-OA articles published in the same non-OA journals. However, conclusions are not entirely consistent across fields, and two possible explanations have been suggested for those fields where a citation advantage has been observed for OA: the early view and the selection bias postulates. In this study, a longitudinal and multidisciplinary analysis of the gold OA citation advantage is developed. All research articles in all journals for all subject categories in the multidisciplinary database Web of Science are considered. A total of 1,137,634 articles published in 2009 (86,712 OA articles, 7.6%, and 1,050,922 non-OA articles, 92.4%) are analysed. The citation window considered goes from 2009 to 2014, and data are aggregated for the 249 disciplines (subject categories). At the journal level, we also study the evolution of journal impact factors for OA and non-OA journals in those disciplines whose OA prevalence is highest (top 36 subject categories). As the main conclusion, there is no generalizable gold OA citation advantage, at either the article or the journal level.
The journal Impact Factor (IF) is not comparable among fields of science and social science because of systematic differences in publication and citation behaviour across disciplines. In this work, a decomposition of the field aggregate impact factor into five normally distributed variables is presented. Considering these factors, a Principal Component Analysis is employed to find the sources of the variance in the Journal Citation Reports (JCR) subject categories of science and social science. Although publication and citation behaviour differs largely across disciplines, the principal components explain more than 78% of the total variance, and the average number of references per paper is not the primary factor explaining the variance in impact factors across categories. The Categories Normalized Impact Factor (CNIF), based on the JCR subject category list, is proposed and compared with the IF. This normalization is achieved by considering all the indexing categories of each journal. An empirical application, with one hundred journals in two or more subject categories of economics and business, shows that the gap between rankings is reduced by around 32% in the journals analyzed. This gap is obtained as the maximum distance among the ranking percentiles from all the categories in which each journal is included.
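The ranking gap described above can be sketched as follows, reading it as the spread between a journal's ranking percentiles across the categories that index it. The percentile convention and the toy impact-factor lists are illustrative assumptions; the paper's exact computation may differ.

```python
def percentile_rank(value, values):
    """Fraction of journals in a category ranked at or below `value`."""
    return sum(v <= value for v in values) / len(values)

def ranking_gap(jif, category_ifs):
    """Maximum distance among a journal's ranking percentiles over
    all the categories in which it is included."""
    pcts = [percentile_rank(jif, ifs) for ifs in category_ifs]
    return max(pcts) - min(pcts)

# A hypothetical journal (IF = 1.5) indexed in both an economics-like and a
# business-like category with different impact-factor distributions
gap = ranking_gap(1.5, [[0.5, 1.0, 1.5, 2.0],       # percentile 0.75 here
                        [1.5, 3.0, 4.0, 5.0, 6.0]])  # percentile 0.20 here
```

The same journal thus sits near the top of one category and near the bottom of the other, which is exactly the cross-category gap a field normalization like the CNIF aims to shrink.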
The journal impact factor is not comparable among fields of science and social science because of systematic differences in publication and citation behaviour across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential in the journal topic, and we employ this citation potential in the normalization of the journal impact factor to make it comparable between scientific fields. An empirical application comparing several impact indicators with our topic normalized impact factor, in a set of 224 journals from four different fields, shows that our normalization, using the citation potential in the journal topic, reduces the between-group variance with respect to the within-group variance in a higher proportion than the rest of the indicators analysed. The effect of journal self-citations on the normalization process is also studied.
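The normalization idea above can be sketched as dividing a journal's impact factor by the citation potential of its topic, measured from the impact factors of its citing journals. Treating the citation potential as a citation-weighted mean, and the example numbers, are assumptions for illustration; the paper's precise aggregation may differ.

```python
def citation_potential(citing_ifs, citing_weights):
    """Weighted mean impact factor of the citing journals, weighting each
    citing journal by how many citations it sends to the target journal."""
    total = sum(citing_weights)
    return sum(f * w for f, w in zip(citing_ifs, citing_weights)) / total

def topic_normalized_if(jif, citing_ifs, citing_weights):
    """Divide the raw impact factor by the topic's citation potential."""
    return jif / citation_potential(citing_ifs, citing_weights)

# Two hypothetical journals with the same raw JIF of 2.0, but cited from
# topics with very different citation densities
math_j = topic_normalized_if(2.0, citing_ifs=[0.8, 1.2], citing_weights=[30, 10])
bio_j = topic_normalized_if(2.0, citing_ifs=[4.0, 6.0], citing_weights=[50, 50])
# math_j > bio_j: the same raw impact counts for more in a low-citation topic
```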
Journal metrics are employed for the assessment of scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIF) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one and two year old articles, while the 5-year journal impact factor (5-JIF) counts citations to one to five year old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behaviour across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no fixed impact maturity time that is optimal for all fields: in some, two years performs well, whereas in others three or more years are necessary. Therefore, a problem arises when comparing a journal from a field in which impact matures slowly with a journal from a field in which impact matures rapidly. In this work, we propose the 2-year maximum journal impact factor (2M-JIF), a new impact indicator that considers the 2-year rolling citation time window of maximum impact instead of the immediately preceding 2-year window. Finally, an empirical application comparing 2-JIF, 5-JIF, and 2M-JIF shows that the maximum rolling target window reduces the between-group variance with respect to the within-group variance in a random sample of about six hundred journals from eight different fields.
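The rolling-window idea behind the 2M-JIF can be sketched as follows: instead of fixing the citation target window at the two years immediately preceding the census year, slide a 2-year window back in time and keep the maximum 2-year impact. The window bounds, the maximum offset, and the toy counts are illustrative assumptions.

```python
def two_year_if(cites, items, year, offset):
    """2-year impact for the target window [year-offset-1, year-offset]:
    citations received in `year` to articles published in that window,
    divided by the citable items published in that window."""
    y1, y2 = year - offset - 1, year - offset
    return (cites[y1] + cites[y2]) / (items[y1] + items[y2])

def max_rolling_2jif(cites, items, year, max_offset=4):
    """2M-JIF sketch: the maximum 2-year impact over rolling target windows.
    offset=1 reproduces the classic 2-JIF window (year-1, year-2)."""
    return max(two_year_if(cites, items, year, k) for k in range(1, max_offset + 1))

# A hypothetical slow-maturing journal: citations in 2014 peak for older articles
items = {2009: 50, 2010: 50, 2011: 50, 2012: 50, 2013: 50}  # citable items per year
cites_2014 = {2009: 120, 2010: 130, 2011: 110, 2012: 80, 2013: 40}
classic = two_year_if(cites_2014, items, 2014, 1)  # window 2012-2013 -> 1.2
best = max_rolling_2jif(cites_2014, items, 2014)   # window 2009-2010 -> 2.5
```

For a fast-maturing journal the two values coincide, so the maximum rolling window only lifts journals whose impact peaks later, which is how it narrows the gap between slow- and fast-maturing fields.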