Abstract: Once writers complete a first draft, they are often encouraged to evaluate their writing and prioritize what to revise. Yet this process can be both daunting and difficult. This study looks at how students used a semantic concept mapping tool to re-present the content and organization of their initial draft of an informational text. We examine the processes of students at two different schools as they remediated their own texts and how those processes impacted the development of their rhetorical, conceptual, …
“…Cope, Kalantzis, Abd-El-Khalick, & Bagley (2013) - Coded annotations, supported by machine learning where users train the system to recognize higher order thinking. - Ontology-referenced maps that prompt knowledge creators and reviewers to add a second layer of meaning to text, image and data; this is direct support to learners, as well as machine learning training data. Olmanson et al. (2016) We need to broaden the range of data types and data points for assessment. The dominance of select response assessments is based on the ease of their mechanization (Kalantzis & Cope, 2012) — "bubble tests."…”
Section: How Artificial Intelligence Opens Up a New Assessment Paradigm and Education
“…Since the late 1990s, teachers have worked to integrate new media and aspects of new literacies into the curriculum. The range of integration rationales includes an interest in leveraging platform affinity and novelty to inject excitement into content areas (Olmanson and Abrams 2013), rethinking student participation in learning spaces (Vasudevan 2010), encouraging the expression of student identities (Rust 2015), closing the digital divide, mirroring the collaborative ecologies of the twenty-first-century workplace, and better facilitating the inclusion of multimodality in academic texts to fulfill evolving state and national expectations (Olmanson et al. 2015).…”
Section: New Media Literacies in Schools
“…In one example of visual markup, we have created in our "Scholar" environment a tool whereby students highlight sections of information texts (readings, their own texts, their peers' texts) in different colors in order to identify CCSS information text ideas of concept, definition, fact, example, and opinion. This creates nodes for a diagram beside the text in which they outline the structure of the information presentation (Olmanson et al., 2015). Additional user structuring directly supports the assessment process.…”
Section: Structured Embedded Data
“…In our Scholar research and development, we have created a tool that traces learner thinking in the form of a sequence of moves as users create a visualization of the underlying logic of their information and argument texts. The question then is, what patterns of thinking predict more or less successful written texts (Olmanson et al., 2015)?…”
The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting both the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment, and the challenges those processes pose. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.