Introduction

The quantitative evaluation of researchers' activity is based on the principle that scientific productivity is related to the degree to which the results of investigations are published. Research performance is a multidimensional concept influenced by the ability of researchers to accomplish multiple tasks, including publishing, teaching, fund-raising, public relations, participation in meetings and conferences, and administrative duties (Nagpaul 1995). In this context, publishing capacity must be seen not only as a task for researchers but also as an indicator of research performance.

Evaluation of researcher productivity follows two main approaches: (i) peer review, in which researchers submit their products to panels of appointed experts who conduct the evaluation; and (ii) bibliometric, which entails the calculation of indexes based on the numbers of publications and citations. The latter indexes may also be used to inform peer-review evaluations. The literature debating the pros and cons of both approaches is vast. Abramo & D'Angelo (2011a) provide a recent overview contrasting the peer-review and bibliometric systems for national research assessments.

In Italy, research evaluation has been "strongly neglected" in the past (Abramo et al. 2011).
More recently, the National Agency for the Evaluation of Universities and Research Institutes (ANVUR, Agenzia Nazionale per la Valutazione del Sistema Universitario e della Ricerca) and the National University Council (CUN, Consiglio Universitario Nazionale) proposed bibliometric methods and a set of criteria for this purpose. This new evaluation system will serve as the basis for assessing the qualifications of candidates for new research positions in universities and other research institutions, and for the annual budget allocations from the Ministry of Education, Universities and Research (MIUR) to universities and research institutes. The CUN (2011) issued multiple recommendations specifically addressing the different thematic areas of the Italian research system. For the agriculture and veterinary area, one of the main proposed criteria was the number of scientific publications in journals included in the Thomson Reuters Web of Science (WOS) and/or in Elsevier SciVerse SCOPUS. The ANVUR (2011) and CUN (2011) proposed the adoption of several criteria based on the number of publications, the number of citations, and the h-index calculated from the SCOPUS or WOS databases. Both CUN (2011) and ANVUR (2011) focus more on the criteria to be used in the evaluations than on the databases, which they consider equivalent.
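To illustrate how the h-index mentioned above is defined (the largest number h such that an author has at least h publications each cited at least h times), the following minimal Python sketch computes it from a list of per-paper citation counts. The function name and the example counts are hypothetical and are not meant to reproduce how SCOPUS or WOS derive the metric from their own records.

def h_index(citations):
    # h-index: largest h such that at least h papers have >= h citations each
    ranked = sorted(citations, reverse=True)  # citation counts, highest first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: six papers cited 10, 8, 5, 4, 3 and 0 times yield an h-index of 4,
# because four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4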
Citation databases

Citation databases, also called bibliographic databases, combine information on bibliographic productivity and facilitate the identification of the authors of publications and of the sources of the citations they receive. A large number of thematic citation databases are available, but their coverage is limited to specific academic or scientific areas. Other databases are more general and have been built to cover overall academic productivity. Web of Science, SCOPUS and Google Scholar are the best-known databases, and all three are...