Resumen. The evaluation processes of academic and scientific journals have been transformed in recent years by the increase in both the number of journals and the volume of articles received. In addition, the use of the internet has driven a strong opening of science, with more and more open access journals. Editorial teams have had to adapt and settle on an evaluation model suited to this new environment. In this context, this article presents the particular case of the evaluation system of Encrucijadas. Revista Crítica de Ciencias Sociales. This journal proposes an open review model in which both authors and reviewers are known to each other throughout the review process. Using a semi-open precoded questionnaire, we analyse how both authors and reviewers perceived this evaluation system. The main conclusion is that both groups accept the model, although there are slight differences by age, sex and group.

Palabras clave: journals, editorial process, open peer review, double-blind review.

[en] Potentiality and feasibility of open peer review: the case of Encrucijadas. Revista Crítica de Ciencias Sociales

Abstract. Evaluation processes of scientific and academic journals have been transformed in recent years due to the increase in both the number of journals and the volume of papers received. Furthermore, the use of the internet has generated a strong openness in science, with more and more open access journals. Editorial teams have had to adapt to this new setting and decide on an evaluation model. In this context, this paper presents the case of the evaluation system of the Spanish journal Encrucijadas. Revista Crítica de Ciencias Sociales. This journal proposes an open peer review model in which authors and reviewers are known to each other throughout the evaluation process.
We use a precoded semi-open questionnaire to analyse how both authors and reviewers perceived this evaluation system. The main conclusion is that both groups accept the model, although there are slight differences by age, sex and group.

1 Sociologist, independent professional.