Courses in artificial intelligence and related topics often cover methods for reasoning under uncertainty, decision theory, and game theory. However, these methods can seem very abstract when students first encounter them, and they are often taught using simple “toy” problems. Our goal is to help students operationalize this knowledge by designing sophisticated autonomous agents that must make complex decisions in games that capture their interest. We describe a tournament-based pedagogy that we have used in two courses, each with a different game drawn from current research topics in artificial intelligence, to engage students in designing agents that use strategic reasoning. Many students find this structure highly engaging, and we observe that they develop a deeper understanding of the abstract strategic-reasoning concepts introduced in the courses.
Game theory is a tool for modeling multi-agent decision problems and has been used to analyze strategies in domains such as poker, security, and trading agents. One method for solving very large games is to use abstraction techniques to shrink the game by removing detail, solve the reduced game, and then translate the solution back to the original game. We present a methodology for evaluating the robustness of different game-theoretic solution concepts to the errors introduced by this abstraction process, along with an initial empirical study of the robustness of several solution methods on abstracted games.
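The abstract-solve-translate pipeline described above can be illustrated on a toy zero-sum matrix game. This is only a sketch, not code from the paper: the game, the merging rule (averaging payoffs of near-identical actions), and the helper `solve_2x2` are all illustrative assumptions.

```python
# Illustrative sketch of game abstraction: merge similar actions to
# shrink the game, solve the reduced game, then map the strategy back.

def solve_2x2(m):
    """Closed-form mixed strategy for the row player of a 2x2
    zero-sum game with no saddle point (payoffs to row player)."""
    (a, b), (c, d) = m
    p = (d - c) / (a - b - c + d)  # probability placed on row 0
    return [p, 1 - p]

# Original 3x3 game: rows 1 and 2 are identical, so an abstraction
# can merge them (here, by averaging their payoffs).
game = [[ 2, -1, -1],
        [-1,  1,  1],
        [-1,  1,  1]]

merged_rows = [game[0],
               [(game[1][j] + game[2][j]) / 2 for j in range(3)]]
# Columns 1 and 2 are also identical; merge them the same way.
abstract = [[row[0], (row[1] + row[2]) / 2] for row in merged_rows]

reduced = solve_2x2(abstract)  # strategy in the abstracted game
# Translate back: split the merged action's probability uniformly
# over the actions it replaced.
full_strategy = [reduced[0], reduced[1] / 2, reduced[1] / 2]
```

Because the merged actions here are exactly identical, the translated strategy is still an equilibrium of the original game; with lossy merges (nearly identical actions), the translation introduces exactly the kind of error whose effect on different solution concepts the abstract proposes to measure.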