Similarity reports produced by plagiarism detectors should be approached with caution, as they may not be sufficient on their own to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize the evaluation of academic papers. In the spring semester of the 2011-2012 academic year, the papers of 161 freshmen at the English Language Teaching Department of Çanakkale Onsekiz Mart University, Turkey, were assessed using the rubric. The validity and reliability of the rubric were established. The results identified citation as a particularly problematic aspect and indicated that fairer assessment could be achieved by using the rubric alongside plagiarism detectors’ similarity results.
There is a general belief that software can easily do things that humans find difficult. Since finding the sources of plagiarism in a text is not an easy task, there is a widespread expectation that it must be simple for software to determine whether a text is plagiarized. Software cannot determine plagiarism, but it can serve as a support tool for identifying text similarity that may constitute plagiarism. But how well do the various systems work? This paper reports on a collaborative test of 15 web-based text-matching systems that can be used when plagiarism is suspected. The test was conducted by researchers from seven countries using test material in eight different languages, and it evaluated the effectiveness of the systems on single-source and multi-source documents. A usability examination was also performed. The sobering results show that although some systems can indeed help identify some plagiarized content, they clearly do not find all plagiarism and at times flag non-plagiarized material as problematic.
A clear understanding of terminology is crucial in any academic field. When it becomes clear that complex interdisciplinary concepts are interpreted differently depending on the academic field, geographical setting, or cultural values, it is time to take action. Given this, the Glossary for Academic Integrity, newly developed by the European Network for Academic Integrity project, served as the basis for compiling a comprehensive taxonomy of terms related to academic integrity. Following a rigorous coding exercise, the taxonomy was partitioned into three constituent components: Integrity, Misconduct, and Neutral terms. A review of relevant literature sources is included, and the strengths and weaknesses of existing taxonomies are discussed in relation to this new offering. During the creation of these artefacts the authors identified and resolved many differences between their individual interpretative understandings of concepts and terms and the viewpoints of others. It is anticipated that the freely available glossary and taxonomy will be explored and valued by researchers, teachers, students, and the general public alike.
This study examines the decision-making behaviors of raters with varying levels of experience while assessing EFL essays of distinct qualities. The data were collected from 28 raters with varying levels of rating experience who were working in the English language departments of different universities in Turkey. Using a 10-point analytic rubric, each rater voice-recorded their thoughts through think-aloud protocols (TAPs) while scoring 16 essays of distinct text qualities and provided brief score explanations. The TAP data were analyzed using a coding scheme adapted from Cumming, Kantor, and Powers (2002). The results revealed that text quality had a larger effect than rating experience on raters’ decision-making behaviors. In addition, raters prioritized aspects of style, grammar, and mechanics when rating low-quality essays, but emphasized rhetoric and their general impressions of the text for high-quality essays. Furthermore, low-experienced raters differed more in their behaviors while assessing scripts of distinct qualities than did the medium- and high-experienced groups. The findings suggest that raters’ scoring behaviors might evolve with practice, resulting in less variation in their decisions. As such, this research provides implications for developing strategy-based rater training programs, which might help increase consistency across raters of different experience levels.