Numerous serious exergames advocate the use of engaging avatars to motivate a consistent exercise regimen. However, the process of specifying the prescribed exercise, implementing it as avatar animation, and developing an accurate feedback mechanism is complex and requires a high level of expertise in game engines, control languages, and hardware devices. Furthermore, in the context of rehabilitation exergames, the requirements for accurate assessment and timely, precise feedback can be quite stringent. At the same time, the Kinect™ motion-capture sensor offers a natural interface to game consoles, and its affordability and wide availability represent a huge opportunity for at-home exergames. In this paper, we describe our work towards a system that aims to simplify the process of developing rehabilitation exergames with the Kinect™. The system relies on a language for specifying postures and the movements between them, and includes an editor that enables rehabilitation therapists to specify the prescribed exercise by editing a demonstration of it. This exercise-specification grammar drives the animation of a coaching avatar and the provision of quality feedback, obtained by comparing the player's postures (as captured by the Kinect™) against those of the coaching avatar and the grammar.
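The core feedback idea described above — matching the player's captured posture against the coaching avatar's target posture — can be sketched as follows. This is a minimal illustration, not the paper's actual grammar or system: the per-joint angle representation, the `compare_posture` function, and the tolerance value are all assumptions made for the sketch.

```python
# Hypothetical sketch: a posture as a dict of joint angles (degrees),
# compared joint-by-joint against the coach avatar's target posture.
TOLERANCE_DEG = 15.0  # assumed per-joint tolerance, not from the paper

def compare_posture(player: dict, target: dict, tol: float = TOLERANCE_DEG):
    """Return (ok, feedback): ok is True when every joint is within tol."""
    feedback = []
    for joint, target_angle in target.items():
        angle = player.get(joint)
        if angle is None or abs(angle - target_angle) > tol:
            feedback.append(f"adjust {joint}: aim for {target_angle:.0f} deg")
    return (not feedback, feedback)

# Example: the right elbow deviates by 30 degrees, so feedback is produced.
player = {"left_elbow": 92.0, "right_elbow": 120.0}
target = {"left_elbow": 90.0, "right_elbow": 90.0}
ok, msgs = compare_posture(player, target)
```

In a real system the angles would come from Kinect™ skeletal-tracking joints and the targets from the exercise-specification grammar; the tolerance would plausibly be part of the therapist's specification rather than a constant.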
In recent years, we have been witnessing a rapid increase of research on exergames—i.e., computer games that require users to move during gameplay as a form of physical activity and rehabilitation. Properly balancing the need to develop an effective exercise activity with the requirements for a smooth interaction with the software system and an engaging game experience is a challenge. Model-driven software engineering enables the fast prototyping of multiple system variants, which can be very useful for exergame development. In this paper, we propose a framework, PhyDSLK, which eases the development process of personalized and engaging Kinect-based exergames for rehabilitation purposes, providing high-level tools that abstract the technical details of using the Kinect sensor and allow developers to focus on the game design and user experience. The system relies on model-driven software engineering technologies and consists of two main components: (i) an authoring environment relying on a domain-specific language to define the exergame model encapsulating the gameplay that the exergame designer has envisioned and (ii) a code generator that transforms the exergame model into executable code. To validate our approach, we performed a preliminary empirical evaluation addressing the development effort and usability of the PhyDSLK framework. The results are promising and provide evidence that people with no experience in game development are able to create exergames of varying complexity in one hour, after less than two hours of training on PhyDSLK. They also consider PhyDSLK usable regardless of the exergame complexity.
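The two-component pipeline described above — a declarative exergame model plus a generator that turns it into executable code — can be illustrated with a toy example. The model schema, the `generate` function, and the generated output are invented for this sketch; the actual PhyDSLK DSL and generator are richer and not shown in the abstract.

```python
# Hypothetical sketch of model-driven generation: a declarative model
# (what a designer would author) is transformed into runnable code.
exergame_model = {
    "name": "ReachTheStars",           # illustrative game, not from the paper
    "targets": [
        {"shape": "star", "position": "upper_left", "points": 10},
        {"shape": "star", "position": "upper_right", "points": 10},
    ],
    "win_score": 20,
}

def generate(model: dict) -> str:
    """Emit a trivial scoring script from the declarative model."""
    lines = [f"# generated exergame: {model['name']}", "score = 0"]
    for t in model["targets"]:
        lines.append(
            f"score += {t['points']}  # reached {t['shape']} at {t['position']}"
        )
    lines.append(f"won = score >= {model['win_score']}")
    return "\n".join(lines)

# The generated text is itself executable code.
namespace = {}
exec(generate(exergame_model), namespace)
```

The point of the sketch is the separation of concerns the abstract describes: the designer edits only the model, and all executable detail is produced mechanically by the generator.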
Automatic human facial recognition is an important and complicated task; it is necessary to design algorithms capable of recognizing the constant patterns in the face and of using computing resources efficiently. In this paper we present a novel algorithm to recognize the human face in real time; the system's input is the depth and color data from the Microsoft Kinect™ device. The algorithm recognizes patterns/shapes in the point-cloud topography. The template of the face is based on facial geometry; forensic theory classifies the human face with respect to constant patterns: cephalometric points, lines, and areas of the face. The topography, relative position, and symmetry are directly related to the craniometric points. The similarity between a point-cloud cluster and a pattern description is measured by a fuzzy pattern theory algorithm. The face identification is composed of two phases: the first phase computes face-pattern hypotheses for the facial points, configuring each point's shape and its location relative to the areas and lines of the face. Then, in the second phase, the algorithm performs a search over these face-point configurations.
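The fuzzy-similarity idea — grading how well measured facial features match a template of constant patterns — can be sketched with a standard fuzzy construction. The feature names, the triangular membership function, and the min-combination (a common fuzzy AND) are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: each measured feature (e.g., a distance between
# cephalometric points) receives a membership grade against the template's
# expected value; the overall similarity is the minimum grade (fuzzy AND).

def triangular(x: float, center: float, width: float) -> float:
    """Triangular membership: 1.0 at center, falling to 0.0 at +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def face_similarity(measured: dict, template: dict, width: float = 10.0) -> float:
    grades = [triangular(measured[k], v, width) for k, v in template.items()]
    return min(grades) if grades else 0.0

# Illustrative values in millimetres; both features deviate by 2 mm,
# so each grade is 0.8 and the overall similarity is 0.8.
template = {"inter_ocular": 62.0, "nose_length": 48.0}
candidate = {"inter_ocular": 60.0, "nose_length": 50.0}
sim = face_similarity(candidate, template)
```

In the paper's setting, the candidate values would be extracted from Kinect™ point-cloud clusters during the hypothesis phase, and the search phase would evaluate many such configurations, keeping those with the highest similarity.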