No assessment is entirely free of bias. This paper presents findings on how the raters in the research group assess the extent to which they are influenced by various types of rater bias when grading their students’ written compositions. The sources of bias covered in the article include the teacher’s knowing the student writer and his or her proficiency in English, the difficulty of the writing task, distressing content likely to trigger the rater’s emotional reaction, the test taker’s views clashing with those of the rater, students’ progress, and the like. The data were gathered from the study participants via a questionnaire. In addition, the researcher’s interpretation of the respondents’ answers was verified through interviews. Although the two research methods and self-evaluation have their drawbacks, the results reveal interesting, relevant and important information on aspects which make written composition assessment less reliable and valid. The findings confirm the need to raise raters’ awareness of the causes of bias to which they are most susceptible, bringing them closer to effectively addressing the problem of assessment bias. The research, involving eleven lecturers teaching Language in Use at the Department of English and American Studies at the Faculty of Arts, University of Ljubljana, is part of a much larger project based on the author’s PhD thesis.
In this article, the author analyzes and evaluates two examples of analytic criteria for assessing expository/argumentative essays, namely the criteria of Holly L. Jacobs et al. (1981) and the scoring guide of Vicki Spandel et al. (1990). The author selected these criteria because they are still very popular among teachers. In weighing the strengths and weaknesses of the criteria, she focuses on the characteristics of the descriptors. Following the recommendations of the Common European Framework of Reference for Languages (CEFR, 2001: 205–207), she examines the extent to which they are definite, clear, brief, and independent. The analysis reveals how similar the two sets of criteria are and in what respects they differ, discusses their strengths and weaknesses, and leads to the conclusion that teachers should critically evaluate scoring guidelines, improve them where necessary, and adapt them to the educational context in which they work before putting them to use.
The introductory and concluding paragraphs deserve special attention, as they strongly influence the effect of the entire written composition. The task of the introductory paragraph is to announce the main idea of the composition, but also to arouse the reader’s interest and thus persuade them to read the whole text. In the conclusion, the writer reminds the reader once more of the main topic of the composition while providing closing thoughts that will stay with the reader even after they put the text down. The article discusses the difficulties that secondary-school and university students of English encounter when writing introductory and concluding paragraphs. These are divided into three groups: disruptions of coherence, inappropriate length, and inappropriate style. The article also presents types of exercises through which students can become aware of the described problems and gradually eliminate them.
Essay titles are important (de)motivating factors that have an immense influence on the quality of students’ writing. The article focuses on two questionnaires aimed at students of English and at lecturers teaching writing skills at the Department of English at the Faculty of Arts in Ljubljana. Both groups of respondents were asked to consider a list of essay titles taken from various authentic sources, deciding whether, to what extent, and under what circumstances they found them suitable. In addition, the respondents were asked to paraphrase each title in their own words to convey their interpretation and understanding of a particular title. The results and conclusions arrived at by means of the questionnaires are presented and compared to my prior expectations, stemming primarily from my teaching experience. The topic is also discussed in the light of what experts on essay writing say about essay titles.
Teachers’ marginal and end comments are an essential part of teaching and evaluating students’ written work. However, the method can backfire when teachers resort to insincere formulaic praise, fall into the trap of over-commenting, and lose sight of the actual author of the text, appropriating it in the process. The challenges of providing effective written feedback require an examination of students’ attitudes toward it, both to reassure teachers that they are doing better than they think they are and to make them aware that there is much room for improvement. For one thing, the remaining weaknesses can be addressed in systematic teacher training on written feedback, which has so far been lacking. Second, these same teachers should then teach their students how to interpret marginal and end comments and use them to revise their work. The article reviews the research to date in this area and presents a case study that sheds more light on the topic, making it clear that more systematic and holistic research and training are needed in this area.
Content and coherence are the categories most difficult to evaluate fairly when raters use analytic scoring scales. Readers inevitably interpret texts in their own idiosyncratic ways, depending on their knowledge, experience, ethical considerations, and other personal biases that they cannot completely set aside when grading a text. This is also true for descriptors, which are themselves short texts. To make matters worse, owing to the very nature of writing but also to the lack of consensus among experts in discourse research, writing theory, and writing assessment, descriptors are categorized vaguely and inconsistently. As a result, raters seeking useful evaluation criteria are confronted with descriptors covering the same concept, such as “relevance”, that are categorized in one set of criteria as relating to the content of the written text and in another as belonging to the category of coherence. Nevertheless, the objectivity of the evaluation of written work can be increased. The article examines the relationship between content and coherence, as reflected in the way the two concepts are defined in the relevant literature and in some descriptors found in two grading scales used in Slovenia. The empirical part of the paper presents a case study involving 46 secondary school teachers, whose responses to a questionnaire confirm the subjectivity of the understanding of individual descriptors and the need for adequate training of teachers in the use of analytic scoring scales, for regular standardization in the schools where they work, and for evaluation of the assessment scales they use, along with their possible adaptation.