In cooperation, workers must know how their co-workers behave. However, an agent's policy, embedded as it is in a statistical machine-learning model, is hard to understand, and interpreting it demands considerable time and expertise. It is therefore difficult for people to predict the behavior of robots driven by machine learning, which makes human-robot cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method for explaining an autonomous agent's future behavior. With IBE, an agent can autonomously acquire expressions to explain its own behavior by reusing the instructions that a human expert gave to accelerate the learning of the agent's policy. IBE also enables a developmental agent, whose policy may change during cooperation, to explain its own behavior with sufficient temporal granularity.
Most agents that learn a policy through reinforcement learning (RL) lack the ability to communicate with people, which makes human-agent collaboration challenging. We believe that, for RL agents to comprehend utterances from human colleagues, they must infer the mental states that people attribute to them, because people often infer an interlocutor's mental states and communicate on the basis of that inference. This paper proposes the PublicSelf model, a model of a person who infers how their own behavior appears to their colleagues. We implemented the PublicSelf model for an RL agent in a simulated environment and evaluated the model's inferences against human judgments. The results showed that, in scenes where people perceived clear intentionality in the agent's behavior, the model correctly inferred the intention that people attributed to the agent's movement.
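The abstract does not spell out the model's equations, but the inference it describes is in the spirit of Bayesian inverse planning: an observer scores candidate intentions by how well they explain the observed movement, P(goal | trajectory) ∝ P(trajectory | goal) P(goal). The following is a minimal illustrative sketch, not the paper's implementation; the goal labels, goal positions, and sigmoid-of-progress likelihood are all hypothetical choices made for this example.

```python
import numpy as np

# Candidate intentions an observer might attribute to the agent
# (hypothetical labels chosen for illustration).
GOALS = ["reach_key", "reach_door", "wander"]
GOAL_POS = {"reach_key": np.array([0.0, 0.0]),
            "reach_door": np.array([5.0, 5.0]),
            "wander": None}

def likelihood(trajectory, goal):
    """Toy P(trajectory | goal): each step is likelier the more it
    reduces the distance to the goal's location."""
    goal_pos = GOAL_POS[goal]
    if goal_pos is None:
        return 0.1 ** (len(trajectory) - 1)  # flat "no particular goal" model
    p = 1.0
    for prev, curr in zip(trajectory, trajectory[1:]):
        progress = (np.linalg.norm(prev - goal_pos)
                    - np.linalg.norm(curr - goal_pos))
        p *= 1.0 / (1.0 + np.exp(-4.0 * progress))  # sigmoid of progress
    return p

def infer_attributed_goal(trajectory):
    """Posterior over goals: P(goal | trajectory) ∝ P(trajectory | goal) P(goal),
    with a uniform prior over the candidate goals."""
    prior = np.ones(len(GOALS)) / len(GOALS)
    post = np.array([likelihood(trajectory, g) for g in GOALS]) * prior
    return dict(zip(GOALS, post / post.sum()))

# An agent moving steadily toward (5, 5) looks like it "wants" the door.
traj = [np.array([0.0, 5.0]), np.array([1.5, 5.0]),
        np.array([3.0, 5.0]), np.array([4.5, 5.0])]
print(infer_attributed_goal(traj))
```

Run on this trajectory, the posterior mass lands on reach_door, since every step shrinks the distance to (5, 5); a meandering trajectory would instead favor the wander hypothesis.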
CCS CONCEPTS: • Computing methodologies → Theory of mind
KEYWORDS: Reinforcement learning, Bayesian inference, Public self-awareness, Theory of mind, PublicSelf model, Human-agent interaction
This paper presents the first attempt to apply a masked language model and the Gini coefficient to English-language education. We propose an algorithm named CLOZER that generates open cloze questions to assess the knowledge of English learners. Open cloze questions (OCQs) have been attracting attention both for measuring learners' ability and for facilitating their learning. However, since an OCQ is answered in free form, teachers have to ensure that only the ground-truth answer, and no other word, fits the blank. A notable benefit of CLOZER is that it relieves teachers of the burden of producing OCQs. Moreover, by generating OCQs automatically, CLOZER provides a self-study environment for English learners. We evaluated CLOZER through quantitative experiments on 1,600 answers and statistically demonstrated its effectiveness. Comparing against human-generated questions, we also found that CLOZER generates OCQs better than the average non-native English teacher does. Additionally, we conducted a field study at a high school to clarify the benefits and hurdles of introducing CLOZER. On the basis of our findings, we then propose several design improvements.
INDEX TERMS: Open cloze test, automatic question generation, masked language model, field study.
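CLOZER's exact pipeline is not given in the abstract, but its two named ingredients can be illustrated together: a masked language model yields a probability distribution over words for a blank, and the Gini coefficient of that distribution measures how concentrated it is, so a blank whose probability mass piles onto one word is a safer single-answer OCQ. The sketch below is an assumption-laden illustration, not CLOZER itself; bert-base-uncased, the "( )" blank convention, and the use of a high Gini value as a single-answer signal are stand-ins chosen for this example.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# A generic masked LM as a stand-in; the paper's choice of model may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def blank_distribution(sentence_with_blank):
    """Probability distribution the masked LM assigns to the blanked word."""
    text = sentence_with_blank.replace("( )", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx]
    return torch.softmax(logits, dim=-1)

def gini(probs):
    """Gini coefficient of the distribution: values near 1 mean the mass is
    concentrated on few words, i.e. the blank likely has a unique answer."""
    sorted_p, _ = torch.sort(probs)                 # ascending
    n = sorted_p.numel()
    index = torch.arange(1, n + 1, dtype=sorted_p.dtype)
    return ((2 * index - n - 1) * sorted_p).sum() / (n * sorted_p.sum())

probs = blank_distribution(
    "If you want to go to a top university, you should ( ) English hard.")
print("Gini:", gini(probs).item())
print("Top candidates:",
      [tokenizer.decode([i]) for i in probs.topk(5).indices.tolist()])
```

Under this reading, a candidate blank is kept as an OCQ only when the distribution is sharply peaked and the top word matches the original text; diffuse distributions indicate blanks that would accept several answers.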
I. INTRODUCTION
Answer with a word that fits the blank in the following sentence: ''If you want to go to a top university, you should ( ) English hard.'' 1 Some of you might have struggled to answer such a question in the past. A question that asks you to fill a gap with a word is called an Open Cloze Test or Open Cloze Question (OCQ) [1], and it is widely used in language assessment tests for second-language (L2) learners, such as the Cambridge Assessment English tests. Compared with the commonly used Multiple Choice Question (MCQ), where both the correct answer and several wrong answers (often called distractors) are provided for each question, the OCQ offers no options: learners must produce the answer themselves in free form.
1 The answer is study.