Abstract. We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivates a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that automatic detection of contrasting ideas in texts is possible through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
When used effectively, reflective writing tasks can deepen learners' understanding of key concepts, help them critically appraise their developing professional identity, and build qualities for lifelong learning. As such, reflective writing is attracting substantial interest from universities concerned with experiential learning, reflective practice, and developing a holistic conception of the learner. However, reflective writing is for many students a novel genre to compose in, and tutors may be inexperienced in its assessment. While these conditions set a challenging context for automated solutions, natural language processing may also help address the challenge of providing real-time, formative feedback on draft writing. This paper reports progress in designing a writing analytics application, detailing the methodology by which informally expressed rubrics are modelled as formal rhetorical patterns, a capability delivered by a novel web application. Preliminary tests on an independently human-annotated corpus are encouraging, showing gains from the first to the second version, but with much scope for improvement. We discuss a range of issues: the prevalence of false positives in the tests, areas for future technical improvement, the issue of gaming the system, and the participatory design process that has enabled work across disciplinary boundaries to develop the prototype to its current state.
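The core move described here, modelling an informally expressed rubric criterion as a formal rhetorical pattern, can be illustrated with a minimal sketch. The criterion names and cue-phrase patterns below are illustrative assumptions for a reflective-writing rubric, not AWA's actual rules:

```python
import re

# Illustrative rubric criteria mapped to rhetorical cue-phrase patterns.
# These simplified patterns are stand-ins for the formal rules an analyst
# would author; a real parser uses far richer linguistic features.
RUBRIC_PATTERNS = {
    # "Describes a challenge or difficulty encountered"
    "challenge": re.compile(r"\b(difficult|struggled|challenge[ds]?)\b", re.I),
    # "Reports a change in perspective" (a key reflective move)
    "change": re.compile(r"\b(I (now|no longer)|my (view|perspective) (has )?changed)\b", re.I),
}

def match_rubric(sentence: str) -> list[str]:
    """Return the rubric criteria whose rhetorical patterns fire on a sentence."""
    return [name for name, pat in RUBRIC_PATTERNS.items() if pat.search(sentence)]

print(match_rubric("I struggled at first, but my perspective has changed."))
```

Formative feedback could then highlight which criteria a draft already evidences and which remain unsignalled, which is the kind of actionable output the abstract envisages.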
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to provide rapid, detailed feedback on hundreds of draft texts which might be improved prior to submission. This paper describes the design of a natural language processing (NLP) tool to provide such support. We report progress in the development of a web application called AWA (Academic Writing Analytics), which has been piloted in a Civil Law degree. We describe: the underlying NLP platform and the participatory design process through which the law academic and analytics team tested and refined an existing rhetorical parser for the discipline; the user interface design and evaluation process; and feedback from students, which was broadly positive, but also identifies important issues to address. We discuss how our approach is positioned in relation to concerns regarding automated essay grading, and ways in which AWA might provide more actionable feedback to students. We conclude by considering how this design process addresses the challenge of making explicit to
When assessing student essays, educators look for the students' ability to present and pursue well-reasoned, strong arguments. Such scholarly argumentation is often articulated through rhetorical metadiscourse, so educators necessarily examine metadiscourse in students' writing as a signal of the intellectual moves that make their reasoning visible. Students and educators could therefore benefit from powerful automated textual analysis capable of detecting rhetorical metadiscourse. However, such technologies need to be validated in higher education contexts, since they were originally developed for non-educational applications. This paper describes an evaluation study of a particular language analysis tool, the Xerox Incremental Parser (XIP), on undergraduate social science student essays, using the mark awarded as a measure of the quality of the writing. As part of this exploration, the study assesses the quality of XIP's output through correlational studies and multiple regression analysis.
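The evaluation design, relating counts of automatically detected rhetorical moves to the marks awarded, can be sketched as follows. The data are invented for illustration; only the Pearson correlation step reflects the kind of analysis described:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: number of rhetorical moves a parser detected per essay,
# and the mark each essay was awarded. Purely illustrative values.
moves = [2, 5, 1, 7, 4, 6]
marks = [55, 68, 50, 74, 62, 70]

r = pearson(moves, marks)
print(f"r = {r:.3f}")
```

A multiple regression would extend this by entering several move types (e.g. summarizing, contrasting) as predictors of the mark simultaneously.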
The evaluation of scientific performance is gaining importance across research disciplines. The basic process of evaluation is peer review, a time-consuming activity. To facilitate and speed up peer-review processes, we have developed an exploratory NLP system for the field of educational sciences. The system highlights key sentences, which are intended to reflect the most important threads of the article. The highlighted sentences offer guidance at the content level, while structural elements (the title, abstract, keywords and section headings) give an orientation to the design of the argumentation in the article. The system is implemented using a discourse analysis module called concept matching, applied on top of the Xerox Incremental Parser (XIP), a rule-based dependency parser. The first results are promising and indicate directions for future development of the system.
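A heavily simplified stand-in for this kind of key-sentence highlighting is cue-phrase matching: flag sentences that carry explicit salience markers. The marker list below is an illustrative assumption, not XIP's concept-matching lexicon, which operates over dependency parses rather than surface strings:

```python
# Illustrative salience markers that often signal an article's main claims.
SALIENCE_MARKERS = ("we propose", "in contrast", "our results show", "we conclude")

def highlight(sentences):
    """Return the sentences containing an explicit salience marker."""
    return [s for s in sentences if any(m in s.lower() for m in SALIENCE_MARKERS)]

doc = [
    "Peer review is time-consuming.",
    "We propose a system that highlights key sentences.",
    "Our results show promising directions for future work.",
]
print(highlight(doc))
```

A reviewer skimming only the highlighted sentences would see the proposal and the findings, which is the guidance role the abstract describes.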