Abstract: We examine correlations between dialogue behaviors and learning in tutoring, using two corpora of spoken tutoring dialogues: a human-human corpus and a human-computer corpus. To formalize the notion of dialogue behavior, we manually annotate our data using a tagset of student and tutor dialogue acts relative to the tutoring domain. A unigram analysis of our annotated data shows that student learning correlates both with the tutor's dialogue acts and with the student's dialogue acts. A bigram analysis shows that…
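The unigram/bigram analysis described above can be sketched as follows: compute, for each student, the relative frequency of a given dialogue-act bigram in their annotated dialogue, then correlate those frequencies with learning gains across students. This is a minimal illustration, not the authors' implementation; the act tags (`"UNC"`, `"BO"`, `"Q"`, `"C"`) and the per-student data are hypothetical.

```python
from math import sqrt

def bigram_freq(acts, target):
    """Relative frequency of a target (act, act) bigram in one act sequence."""
    pairs = list(zip(acts, acts[1:]))
    return pairs.count(target) / len(pairs) if pairs else 0.0

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical annotated dialogues: (act sequence, learning gain) per student.
dialogues = [
    (["Q", "UNC", "BO", "C", "UNC", "BO"], 0.60),
    (["Q", "C", "Q", "UNC", "BO", "C"],    0.45),
    (["Q", "C", "Q", "C", "Q", "C"],       0.20),
]
target = ("UNC", "BO")  # uncertain student answer followed by a tutor bottom-out
freqs = [bigram_freq(acts, target) for acts, _ in dialogues]
gains = [gain for _, gain in dialogues]
r = pearson(freqs, gains)  # high positive r suggests the bigram tracks learning
```

In the actual studies significance would be assessed over many more students and with corrections for multiple comparisons; this sketch only shows the shape of the computation.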
“…In fact, it may be that different behaviors are actually optimally effective in computer and human tutors. This hypothesis is supported by our prior research, which has shown that although our students learn significantly from both our human tutor and ITSPOKE, their behaviors are very different (Forbes-Riley and Litman, 2008;Litman and Forbes-Riley, 2006b). However, we do not want to conclude that human tutor-based affect adaptations are less effective in general, because although our Complex adaptation was derived from statistical generalizations about human tutor responses to uncertainty, the effectiveness of these responses was not empirically tested before implementation.…”
Section: Evaluating the Adaptations: Student Learning Results (supporting)
We describe the design and evaluation of two different dynamic student uncertainty adaptations in wizarded versions of a spoken dialogue tutoring system. The two adaptive systems adapt to each student turn based on its uncertainty, after an unseen human "wizard" performs speech recognition and natural language understanding and annotates the turn for uncertainty. The design of our two uncertainty adaptations is based on a hypothesis in the literature that uncertainty is an "opportunity to learn"; both adaptations use additional substantive content to respond to uncertain turns, but the two adaptations vary in the complexity of these responses. The evaluation of our two uncertainty adaptations represents one of the first controlled experiments to investigate whether substantive dynamic responses to student affect can significantly improve performance in computer tutors. To our knowledge we are the first study to show that dynamically responding to uncertainty can significantly improve learning during computer tutoring. We also highlight our ongoing evaluation of our uncertainty-adaptive systems with respect to other important performance metrics, and we discuss how our corpus can be used by the wider computer speech and language community as a linguistic resource supporting further research on effective affect-adaptive spoken dialogue systems in general.
“…Several recent studies of human tutorial dialogue have looked at particular aspects of restatements, for example (Chi and Roy, 2010; Becker et al., 2011; Dzikovska et al., 2008; Litman and Forbes-Riley, 2006). One study examines face-to-face naturalistic tutorial dialogue in which a tutor helps a student work through a physics problem (Chi and Roy, 2010).…”
Although restating part of a student's correct response correlates with learning, and various types of restatements have been incorporated into tutorial dialogue systems, this tactic has not been tested in isolation to determine whether it causally contributes to learning. When we explored the effect on student learning of tutor restatements that support inference, we found that they did not benefit all students equally: students with lower incoming knowledge tended to benefit more from an increased level of these restatements, while students with higher incoming knowledge tended to benefit more from a decreased level. This finding has implications for tutorial dialogue system design, since inappropriate use of restatements could dampen learning.
“…Each dialogue contains 47 student turns and 43 tutor turns on average. This corpus was collected in tandem with a computer tutoring corpus using our ITSPOKE spoken dialogue tutoring system; the human tutor and ITSPOKE performed the same task [11]. Each dialogue consists of a question-answer discussion between tutor and student about one qualitative physics problem.…”
Section: Human Tutoring Spoken Dialogues (mentioning)
confidence: 99%
“…Here we distinguish two labels: the uncertain label is used for answers expressing uncertainty or confusion about the material being learned, and the non-uncertain label is used for all other answers. The same annotator also manually labeled each student answer for correctness, based on the human tutor's response to the answer [11]. Here we distinguish two labels: the correct label is used for answers the tutor considered to be wholly or partly correct, and the incorrect label is used for answers the tutor considered to be wholly incorrect.…”
Section: Student Uncertainty and Correctness Annotations (mentioning)
Abstract. We use a χ² analysis on our spoken dialogue tutoring corpus to investigate dependencies between uncertain student answers and 9 dialogue acts the human tutor uses in his response to these answers. Our results show significant dependencies between the tutor's use of some dialogue acts and the uncertainty expressed in the prior student answer, even after factoring out the answer's (in)correctness. Identification and analysis of these dependencies is part of our empirical approach to developing an adaptive version of our spoken dialogue tutoring system that responds to student affective states as well as to student correctness.
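A χ² dependency test of the kind described above can be sketched as a Pearson chi-square statistic over a contingency table of counts. The counts below are hypothetical, not taken from the corpus; they stand in for "how often the tutor used a given dialogue act after uncertain vs. non-uncertain student answers".

```python
# Hypothetical 2x2 contingency table of counts:
#   rows: student answer was uncertain vs. non-uncertain
#   cols: tutor used a given dialogue act in response vs. did not
observed = [[40, 60],
            [20, 80]]

def chi_square(table):
    """Pearson chi-square statistic for an r x c table (no Yates correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

chi2 = chi_square(observed)
# Degrees of freedom for a 2x2 table: (2-1)*(2-1) = 1.
# Compare chi2 to the 0.05 critical value for 1 dof (3.841) to test dependence.
```

A value above the critical value rejects independence, i.e., the tutor's use of that act depends on whether the prior answer was uncertain. Factoring out (in)correctness, as the abstract describes, would repeat this test separately within the correct and incorrect answer subsets.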