This paper describes an agent-based approach to realizing interactive pedagogical drama. Characters choose their actions autonomously, while director and cinematographer agents manage the action and its presentation in order to maintain story structure, achieve pedagogical goals, and present the dynamic story so as to achieve the best dramatic effect. Artistic standards must be maintained while permitting substantial variability in the story scenario. To achieve these objectives, scripted dialog is deconstructed into elements that are portrayed by agents with emotion models. Learners influence how the drama unfolds by controlling the intentions of one or more characters, who then behave in accordance with those intentions. Interactions between characters create opportunities to move the story in pedagogically useful directions, which the automated director exploits. This approach is realized in the multimedia title Carmen's Bright IDEAS, an interactive health intervention designed to improve the problem-solving skills of mothers of pediatric cancer patients.
Text-to-speech synthesis can play an important role in interactive education and training applications, providing voices for animated agents. Such agents need high-quality voices capable of expressing intent and emotion. This paper presents preliminary results of an effort aimed at synthesizing expressive military speech for training applications. Such speech has acoustic and prosodic characteristics that can differ markedly from ordinary conversational speech. A limited-domain synthesis approach is used, employing samples of expressive speech classified according to speaking style. The resulting synthesizer was tested both in isolation and in the context of a virtual reality training scenario with animated characters.