Learning to write, and using writing for communication and learning, are not natural activities in the way that learning to speak is. They require a great deal of schooling, and the kind of schooling that teachers offer novice and more advanced writers has changed over time. The history of writing education begins in ancient Greece around 500 BC, where writing was part both of rhetoric education and of more elementary schooling for clerks and other craftsmen who needed a certain technical writing ability to record information for trade and administrative purposes (Murphy, 2001). It was not until the establishment of public school systems in the 19th century that writing education reached a mass audience. Utensils such as blackboard and chalk, stylus, slate, and pencil or pen then gained popularity in classrooms. Until the second half of the 20th century, the purpose of writing instruction was mainly to teach mechanics and conventions: handwriting, sentence construction (grammar), spelling, and punctuation. In the past half century, writing teachers have begun to pay more attention to text, content, style, and creativity.

Another change over the past 50 years is the transition from product-oriented to process-oriented writing instruction, stimulated by researchers such as Britton, Moffett, Emig, and Graves in the 1960s and 1970s. In addition, at the beginning of the 1980s, cognitive psychologists such as Young, Hayes, and Flower began to perceive writing as a problem-solving activity. This led to the design and validation of writing process models, with the specific aim of applying the acquired insights as tools for teaching writing.

A Short Overview of Some Insights From Research

Constituting process variables to predict text quality. Working from the Hayes and Flower model (1980), Breetvelt and colleagues distilled the main subprocesses in think-aloud protocols of 15-year-olds writing documented argumentative essays.
Current theory about writing states that the quality of (meta)cognitive processing (i.e., planning, text production, revising, etc.) is, at least partly, determined by the temporal distribution of (meta)cognitive activities across task execution. Put simply, the quality of task execution is determined more by when activities are applied than by how often they are applied. Planning and revising represent two extreme writing styles, in which (meta)cognitive activities are distributed differently in time across the writing process. Planners are writers who generate plans before text production; revisers use text production itself as a means to arrive at a content plan. The present study investigates whether the online (meta)cognitive processing of secondary school students during writing tasks, as measured by think-aloud techniques and keystroke logging, can be predicted by their responses to an offline questionnaire that measures the degree to which students consider themselves planners and revisers. It was expected that different reported writing styles would entail different temporal distributions of six (meta)cognitive activities: reading the assignment, planning, text production, reading one's own text, evaluating one's own text, and revising. This hypothesis was partly confirmed. The results show that the online temporal distributions of reading the assignment and of planning differ across degrees of reported writing style. On the basis of these results, the validity of both the questionnaire and the concept of planner and reviser styles is discussed.

Keywords: Metacognitive and cognitive processes · Metacognitive knowledge · Offline reports · Online reports · Writing

Writing a coherent and readable text involves handling many (meta)cognitive activities, such as generating, planning, translating ideas into language, and making revisions.

Metacognition Learning (2011) 6:229–253
There is consensus that, as a result of the extra constraints placed on working memory, texts written in a second language (L2) are usually of lower quality than texts written in the first language (L1) by the same writer. However, no method is currently available for quantifying the quality difference between L1 and L2 texts. In the present study, we tested a rating procedure that enables quality judgments of L1 and L2 texts on a single scale. Two main features define this procedure: (1) raters are bilingual or near-native users of both the L1 and the L2; (2) ratings are performed with L1 and L2 benchmark texts. Direct comparisons of observed L1 and L2 scores are warranted only if the ratings with L1 and L2 benchmarks constitute parallel tests and if the ratings are reliable. Results showed that both conditions were met. Effect sizes (Cohen's d) indicate that, while score variances are large, there is a relatively large added L2 effect: in the investigated population, L2 text scores were much lower than L1 text scores. The tested rating procedure is a promising method for cross-national comparisons of writing proficiency.