We present an evolutionary adaptive eye-tracking framework aimed at low-cost human-computer interaction. The main goal is to guarantee eye-tracking performance without high-cost devices or strongly controlled conditions. Performance optimization of eye tracking is formulated as a dynamic control problem of deciding on an eye-tracking algorithm structure and its associated thresholds and parameters, where the dynamic control space is divided into genotype and phenotype spaces. The evolutionary algorithm explores the genotype control space, while the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis; the reinforcement learning algorithm then defines internal states in a phenotype control space constrained by the perceived genetic code and carries out interactive adaptations. The proposed method achieves optimal performance by balancing the difficulty of running an evolutionary algorithm in real time against the huge search space of the reinforcement learning algorithm. Extensive experiments on webcam image sequences yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human-computer interaction tasks to address their intrinsic brittleness in unstable operational environments.
This paper introduces an adaptive visual tracking method that combines an adaptive appearance model with the optimization capability of the Markov decision process. Most tracking algorithms are limited by variations in object appearance caused by changes in illumination, viewing angle, object scale, and object shape. This paper is motivated by the observation that tracking performance degrades not only because of changes in object appearance but also because of inflexible control of tracker parameters. To the best of our knowledge, optimization of tracker parameters has not been thoroughly investigated, even though it critically influences tracking performance. The challenge is to equip an adaptive tracking algorithm with an optimization capability for a more flexible and robust appearance model. In this paper, the Markov decision process, which has been applied successfully in many dynamic systems, is employed to optimize an adaptive appearance-model-based tracking algorithm. Adaptive visual tracking is formulated as a Markov-decision-process-based dynamic parameter optimization problem with uncertain and incomplete information. The high computational requirements of the Markov decision process formulation are addressed by the proposed prioritized Q-learning approach. We carried out extensive experiments on realistic video sets and achieved very encouraging and competitive results.
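The abstract above does not spell out the prioritized Q-learning procedure, but the idea of spending a limited update budget on the experiences with the largest temporal-difference error can be sketched as follows. Everything concrete here is an illustrative assumption: the "state" is a discretized appearance-change level, an "action" selects one of a few candidate tracker-parameter settings, and the reward model is a toy stand-in for tracking accuracy, not the paper's actual formulation.

```python
import random
import heapq
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9
N_STATES, N_ACTIONS = 4, 3
# Hypothetical ground truth: the best parameter setting for each state.
BEST = {s: s % N_ACTIONS for s in range(N_STATES)}

def reward(state, action):
    # Toy reward: 1 when the chosen setting matches the hidden best one.
    return 1.0 if action == BEST[state] else 0.0

def prioritized_q_learning(episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)
    pq = []  # entries sorted by negative |TD error|: largest error first
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        a = rng.randrange(N_ACTIONS)
        r = reward(s, a)
        s_next = rng.randrange(N_STATES)  # toy random transition
        td = r + GAMMA * max(Q[(s_next, b)] for b in range(N_ACTIONS)) - Q[(s, a)]
        heapq.heappush(pq, (-abs(td), s, a, r, s_next))
        # Apply only the highest-priority updates each step (budget = 2),
        # recomputing the target with the current Q table.
        for _ in range(min(2, len(pq))):
            _, ps, pa, pr, pn = heapq.heappop(pq)
            target = pr + GAMMA * max(Q[(pn, b)] for b in range(N_ACTIONS))
            Q[(ps, pa)] += ALPHA * (target - Q[(ps, pa)])
    return Q

Q = prioritized_q_learning()
policy = {s: max(range(N_ACTIONS), key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

The priority queue is what keeps the computation bounded: instead of sweeping the whole state-action table, only the updates expected to change Q the most are applied each step.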
This paper presents an evolutionary and adaptive framework for efficient visual tracking based on a hybrid POMDP formulation. The main focus is to guarantee visual tracking performance under varying environments without strongly controlled situations or high-cost devices. Performance optimization is formulated as dynamic adaptation of the system control parameters, i.e., the threshold and adjustment parameters of a visual tracking algorithm, based on a hybrid of offline and online POMDPs. The hybrid POMDP allows the agent to construct world-belief models under uncertain environments offline, and to focus on optimizing the system control parameters over the current world model in real time. Since visual tracking must satisfy strict real-time constraints, we restrict our attention to simpler and faster approaches instead of exploring the belief space of each world model directly. The hybrid POMDP is thus solved by an evolutionary adaptive framework employing GA (genetic algorithm) and real-time Q-learning approaches in the optimally reachable genotype and phenotype spaces, respectively. Experiments were carried out extensively on eye tracking using videos of various structures and qualities, and yielded very encouraging results. The framework achieves optimal performance by balancing tracking accuracy against real-time constraints.
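The offline half of the hybrid scheme, the GA search over the genotype space, can be sketched roughly as below. The encoding and fitness function are illustrative assumptions: a bitstring stands in for the tracker configuration (algorithm structure plus coarse parameter ranges), and fitness is closeness to a hypothetical best genotype rather than the paper's offline world-model evaluation. The online phenotype adaptation via Q-learning is omitted here.

```python
import random

GENE_LEN, POP, GENS = 12, 20, 60
# Hypothetical best genotype, used only to define a toy fitness function.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]

def fitness(g):
    # Toy fitness: number of genes matching the hypothetical target.
    return sum(1 for a, b in zip(g, TARGET) if a == b)

def evolve(seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENE_LEN)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 2]                  # truncation selection
        children = []
        while len(children) < POP - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, GENE_LEN)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(GENE_LEN)          # point mutation (p = 0.3)
            child[i] ^= rng.random() < 0.3
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)
```

In the hybrid setting described above, each evolved genotype would then constrain the phenotype space explored online by real-time Q-learning, keeping the per-frame search small enough for strict real-time budgets.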
Recognition and image-processing performance depends on illumination variation, and one of the most important factors is the choice of algorithm parameters: different parameter values yield different recognition accuracies. In this paper, we propose a performance improvement for an eye-tracking system whose accuracy depends on environmental conditions such as the user, location, and illumination. The optimal threshold parameter is determined using reinforcement learning: when system accuracy degrades, reinforcement learning is used to retrain the parameter values. According to the experimental results, the performance of the eye-tracking system improves by 3% to 14% with reinforcement learning. The improved eye-tracking system can be effectively used for human-computer interaction.
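The threshold-tuning loop described above can be sketched as a small bandit-style learner that picks a threshold per lighting condition. Everything concrete is an illustrative assumption, not the paper's setup: the candidate thresholds, the named conditions, and the synthetic accuracy function that peaks at a hidden per-condition best value.

```python
import random

THRESHOLDS = [60, 90, 120, 150]                        # candidate thresholds
CONDITIONS = {"dim": 60, "office": 90, "bright": 150}  # hidden best per scene

def accuracy(condition, threshold):
    # Synthetic accuracy: highest at the hidden best threshold for the scene.
    return max(0.0, 1.0 - abs(threshold - CONDITIONS[condition]) / 200.0)

def adapt_threshold(episodes=3000, eps=0.1, alpha=0.2, seed=2):
    rng = random.Random(seed)
    Q = {(c, t): 0.0 for c in CONDITIONS for t in THRESHOLDS}
    for _ in range(episodes):
        c = rng.choice(list(CONDITIONS))
        if rng.random() < eps:                         # explore
            t = rng.choice(THRESHOLDS)
        else:                                          # exploit
            t = max(THRESHOLDS, key=lambda x: Q[(c, x)])
        # Bandit-style update toward the observed accuracy.
        Q[(c, t)] += alpha * (accuracy(c, t) - Q[(c, t)])
    return {c: max(THRESHOLDS, key=lambda t: Q[(c, t)]) for c in CONDITIONS}

print(adapt_threshold())
```

The epsilon-greedy exploration is what lets the system recover when conditions change: a threshold that worked under one illumination keeps being re-evaluated, so a drop in accuracy eventually shifts the greedy choice to a better value.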