Abstract. In this paper, we propose a supervisory system that considers the actual situations and social aspects of users in a ubiquitous computing environment. To realize gentle and safe supervision while providing efficient supervisory services, the system must recognize the situations of the watched person, such as the person's physical condition. To achieve this, we propose a ubiquitous supervisory system, "uEyes", which introduces Social Context Awareness, a distinguishing feature for supervision. Using this feature, the system can combine environmental information acquired from sensors in the real world with common-sense knowledge related to human activities in daily life. We specifically examine the design of Social Context Awareness using ontology technologies. Based on this advanced feature, a live video streaming system is configured autonomously at runtime depending on the users' circumstances. We implemented a uEyes prototype for supervising elderly people and performed experiments based on several scenarios. The experimental results confirmed that social contexts are handled effectively to support supervision.
This paper presents a gentle system for supervising care-support services that fulfills users' actual requirements based on their physical locations and the statuses of system components in ubiquitous computing environments. Traditional supervisory systems rely only on location information and on the situation of one side, either the watching site or the watched site. To address this, we consider the users' physical locations together with the detailed situations (contexts) of all associated entities, such as devices, software, networks, and users, on both sides. We propose a ubiquitous supervision system called uEyes to realize this. Based on agent-based computing technologies, we give each entity in uEyes autonomous decision-making ability and cooperative behavior. Using these advanced features of the entities, live video streaming systems for watching over people can be constructed autonomously at runtime according to the multiple contexts of the entities on both sides. We implemented a prototype system of uEyes for watching over elderly people and performed experiments based on several scenarios. For instance, we assumed a scenario in which a son, who is caring for his ailing father, needs to see his father's facial color and expression in high-quality video. For that purpose, a live video streaming system involving a high-resolution camera and a display device was configured autonomously at runtime. We confirmed that supervision services that fulfill users' detailed requirements can be provided effectively.
In this paper, we propose a gentle watching-over system for ubiquitous care-support services that fulfills users' actual requirements. Traditional systems in this domain consider only the location information of a user on "one" side, the watching site or the watched site. In this work, we cope not only with the user's location but also with the detailed situations (contexts) of all associated entities, such as devices, software, networks, and users, on "both" sides. To do this, we propose a ubiquitous watching-over system, "uEyes". In uEyes, autonomous decision-making ability and cooperative behavior are introduced to each entity. Based on these advanced features of the entities, a live video streaming system is constructed dynamically at runtime according to the contexts of the entities on both sides. As a result, a watching-over service that fulfills users' detailed requirements can be provided effectively. We implemented a prototype of uEyes for watching over elderly people and performed experiments based on several scenarios. For instance, we assumed a scenario in which a son, who is taking care of his ailing father, wanted to see his father's facial color and expression in high-quality video. We confirmed that a live video streaming system involving a high-resolution camera and display devices is configured dynamically at runtime.
Purpose - Real-time multimedia supervisory systems generally include a distributed system that delivers live video input captured with cameras at the watched person's site to a PC or hand-held device at the distant supervisor's site. Such a system comprises many entities: cameras for image capture, transmission software, network connections, receiver software, a display device, multimedia processing software and hardware, control software, etc. The purpose of this paper is to realize a safe and convenient supervisory system that autonomously provides users with services fulfilling their requirements for quality and privacy in a ubiquitous information environment.
Design/methodology/approach - The system is designed by integrating environmental information acquired from the real world with knowledge related to human social activities. A real-space understanding mechanism is proposed to infer the situations and relationships of users by combining sensing information and social knowledge. Social knowledge related to human relationships, the lifestyle of the watched person, home structure, etc. is used with an ontology to infer the situations of users.
Findings - An early prototype was implemented for supervising elderly people, and experiments were performed based on several scenarios. The experimental results confirmed that the supervisory system can provide real-time multimedia supervisory services for elderly people with quality-of-service and privacy that meet the users' requirements.
Originality/value - The system described in this paper assesses the situation of users and surrounding environmental information to provide appropriate supervisory services. This paper provides insight into the design and development of ubiquitous application systems that realize comfortable and safe services using a combination of environmental information and social knowledge.