Abstract. Drawing on sociocultural discourse analysis and argumentation theory, we motivate a focus on learners' discourse as a promising site for identifying patterns of activity that correspond to meaningful learning and knowledge construction. However, software platforms must gain access to qualitative information about the rhetorical dimensions of discourse contributions to enable such analytics. This is difficult to extract from naturally occurring text, but the emergence of more structured annotation and deliberation platforms for learning makes such information available. Using the Cohere web application as a research vehicle, we present examples of analytics at the level of individual learners and groups, showing conceptual and social network patterns, which we propose as indicators of meaningful learning.
Abstract. We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivates a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that contrasting ideas in texts can be detected automatically through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
Abstract. Collaborative Computer-Supported Argument Visualization (CCSAV) is a technical methodology that offers support for online collective deliberation over complex dilemmas. Compared with more traditional conversational technologies, such as wikis and forums, CCSAV is designed to promote more critical thinking and evidence-based reasoning, by using representations that highlight conceptual relationships between contributions, and through computational analytics that assess the structural integrity of the network. However, to date, CCSAV tools have achieved adoption primarily in small-scale educational contexts, and only to a limited degree in real-world applications. We hypothesise that by reifying conversations as logical maps to address the shortcomings of chronological streams, CCSAV tools underestimate the importance of participation and interaction in enhancing collaborative knowledge-building. We argue, therefore, that CCSAV platforms should be socially augmented in order to improve their mediation capability. Drawing on Clark and Brennan's influential Common Ground theory, we designed a Debate Dashboard, which augmented a CCSAV tool with a set of widgets that deliver meta-information about participants and the interaction process. An empirical study simulating a moderately sized collective deliberation scenario provides evidence that this experimental version outperformed the control version on a range of indicators, including usability, mutual understanding, quality of perceived collaboration, and accuracy of individual decisions. No evidence was found that the addition of the Debate Dashboard impeded the quality of the argumentation or the richness of content.
Keywords: Computer-supported argument visualization, Grounding process, Common Ground, Debate Dashboard, Collective deliberation, Visual feedback

Supporting collective deliberation through socially-augmented knowledge mapping tools

Computer-supported argument visualization (CSAV) platforms assist their users in identifying, structuring, and settling issues using argument maps (Buckingham Shum, 2003). An argument map is a visual representation of the informal logical structure of an argument (Walton, 2008). Depending on the representational scheme, it displays the constituent elements of the argument (such as issues, claims, premises, and evidence) as a tree or network, with nodes in the network expressing the elements, and arrows expressing key relationships, such as evidential support and challenge (van Gelder, 2007) or the underlying argumentation scheme, such as argument by analogy, or argument by authority (Reed and Rowe, 2004; Walton, et al., 2008; Buckingham Shum and Okada, 2008). A wide range of representational schemes has been devised within different research communities (e.g. law; design; philosophy). Computational argumentation research has developed more formal logic and mathematical models with an interest in reasoning over the model in order to evaluate claims or prove properties automatically (e.g. Rahwan and Simari, 2009). However...
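The paragraph above describes an argument map as a network of typed nodes (issues, claims, premises, evidence) connected by relationships such as support and challenge. As an illustration only, a minimal sketch of such a structure might look like the following; all names here are hypothetical and do not reflect the data model of any actual CSAV tool:

```python
# Illustrative sketch of an argument map as a typed graph.
# Node kinds and relation labels are assumptions, not a real tool's schema.
from dataclasses import dataclass, field


@dataclass
class Node:
    id: str
    kind: str  # e.g. "issue", "claim", "premise", "evidence"
    text: str


@dataclass
class ArgumentMap:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source_id, relation, target_id)

    def add_node(self, node: Node) -> None:
        self.nodes[node.id] = node

    def link(self, src: str, relation: str, dst: str) -> None:
        # relation expresses e.g. evidential support or challenge
        self.edges.append((src, relation, dst))

    def challengers(self, node_id: str) -> list:
        """Return ids of nodes that challenge the given node."""
        return [s for (s, r, t) in self.edges if t == node_id and r == "challenges"]


m = ArgumentMap()
m.add_node(Node("i1", "issue", "Should the policy be adopted?"))
m.add_node(Node("c1", "claim", "Yes: it reduces costs."))
m.add_node(Node("e1", "evidence", "Pilot study showed 12% savings."))
m.add_node(Node("c2", "claim", "No: it harms service quality."))
m.link("c1", "responds-to", "i1")
m.link("e1", "supports", "c1")
m.link("c2", "challenges", "c1")
print(m.challengers("c1"))  # ['c2']
```

Representing the map as an explicit graph, rather than a chronological stream, is what allows the computational analytics mentioned above (e.g. finding unchallenged claims or unsupported premises) to be expressed as simple queries over nodes and edges.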
Online technologies have facilitated the development of Virtual Communities of Practice (virtual CoPs) that help health professionals collaborate online to share knowledge, improve performance and support the spread of innovation and best practices. Research, however, shows that many virtual CoPs do not achieve their expected potential because online interaction among healthcare professionals is generally low. Focusing on health visitors, who are UK-qualified midwives or nurses who have undertaken additional qualifications as specialist public health workers in the community, the paper examines the factors that influence online interaction among health visitors collaborating to share knowledge and experience in a virtual CoP. The paper suggests how to improve online interaction among health professionals in virtual CoPs: increasing the size of the membership in order to benefit from both posting and viewing contributions, facilitating moderation to improve networking among geographically dispersed member groups, and improving topic relevance in order to stimulate contributions.