This paper presents an initial proof-of-concept implementation of a comprehensively intelligent built-environment based on mutually informing Design-to-Robotic-Production and -Operation (D2RP&O) strategies and methods developed at Delft University of Technology (TUD). In this implementation, D2RP is expressed via deliberately differentiated and function-specialized components, while D2RO expressions subsume an extended Ambient Intelligence (AmI) enabled by a Cyber-Physical System (CPS). This CPS, in turn, is built on a heterogeneous, scalable, self-healing, and partially meshed Wireless Sensor and Actuator Network (WSAN) whose nodes may be clustered dynamically and ad hoc to respond to varying computational needs. Two principal and innovative functionalities are demonstrated in this implementation: (1) cost-effective yet robust Human Activity Recognition (HAR) via Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) classification models, and (2) appropriate corresponding reactions that promote the occupant's spatial experience and wellbeing via continuous regulation of illumination colors and intensities to correspond to engaged activities. The present implementation provides a fundamentally different approach to intelligent built-environments, and promotes a highly sophisticated alternative to existing intelligent solutions whose disconnection between architectural considerations and computational services limits their operational scope and impact.
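The SVM/k-NN classification step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the synthetic three-axis "acceleration" features, the activity labels, and all numeric parameters are assumptions made for demonstration only.

```python
# Hypothetical HAR sketch: train SVM and k-NN classifiers on synthetic
# windowed sensor features and compare their held-out accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_windows(center, n, label):
    """Generate n synthetic 3-axis feature windows around a class center."""
    X = rng.normal(loc=center, scale=0.3, size=(n, 3))
    y = np.full(n, label)
    return X, y

# Illustrative activity classes (labels 0/1/2 stand in for real activities)
X0, y0 = make_windows([0.0, 0.1, 9.8], 100, 0)  # e.g., "sitting"
X1, y1 = make_windows([1.5, 1.2, 9.0], 100, 1)  # e.g., "walking"
X2, y2 = make_windows([3.0, 2.5, 8.0], 100, 2)  # e.g., "running"
X = np.vstack([X0, X1, X2])
y = np.concatenate([y0, y1, y2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)           # SVM classifier
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)  # k-NN classifier

print("SVM accuracy:", svm.score(X_te, y_te))
print("k-NN accuracy:", knn.score(X_te, y_te))
```

On a resource-constrained WSAN node, k-NN trades training cost for lookup cost at inference time, while a trained SVM keeps only its support vectors; which is more "cost-effective" depends on window rate and node memory.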
This paper presents the implementation of a facial-identity and facial-expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. Said mechanism is built via Google Brain's TensorFlow (for facial-identity recognition) and Google Cloud Platform's Cloud Vision API (for facial-gesture recognition), and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of said framework, and its implementation is validated via two scenarios (physical and computational). In the first scenario, building on an inherited adaptive mechanism, if building-skin components perceive a rise in interior temperature levels, natural ventilation is promoted by increasing degrees of aperture. This measure is presently confirmed or negated by a corresponding facial expression on the part of the user in response to said reaction, which serves as an intuitive override/feedback mechanism for the intelligent building-skin mechanism's decision-making process. In the second scenario, building on another inherited mechanism, if an accidental fall is detected and the user remains consciously or unconsciously collapsed, a series of automated emergency notifications (e.g., SMS, email, etc.) is sent to family and/or caretakers by particular mechanisms in the intelligent built-environment. The precision of this measure and its execution are presently confirmed by (a) identity detection of the victim, and (b) recognition of a reflexive facial gesture of pain and/or displeasure.
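The confirm/negate step in the first scenario can be sketched as a simple decision rule over Cloud Vision-style expression likelihoods. This is an assumed illustration, not the paper's code: the `confirm_actuation` helper and its thresholds are hypothetical, and in a real deployment the likelihood strings would come from the face-detection response of the Vision API rather than a hand-built dictionary.

```python
# Hypothetical sketch: map facial-expression likelihoods (Cloud Vision uses
# string levels such as VERY_UNLIKELY ... VERY_LIKELY) to an override
# decision for a building-skin actuation.
LIKELIHOOD = {"VERY_UNLIKELY": 0, "UNLIKELY": 1, "POSSIBLE": 2,
              "LIKELY": 3, "VERY_LIKELY": 4}

def confirm_actuation(face):
    """Confirm the actuation unless the user's expression signals displeasure.

    `face` is a dict of expression-likelihood strings; missing expressions
    default to VERY_UNLIKELY.
    """
    displeasure = max(
        LIKELIHOOD[face.get("anger_likelihood", "VERY_UNLIKELY")],
        LIKELIHOOD[face.get("sorrow_likelihood", "VERY_UNLIKELY")],
    )
    # Negate (revert the aperture change) only on a clear negative expression.
    return displeasure < LIKELIHOOD["LIKELY"]

print(confirm_actuation({"joy_likelihood": "LIKELY"}))         # True
print(confirm_actuation({"anger_likelihood": "VERY_LIKELY"}))  # False
```

Keeping the rule conservative (negate only on a clearly negative expression) avoids spurious reversals from neutral or ambiguous faces.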
The work presented in this paper promotes a considered relationship between the architecture of the built-environment and the Information and Communication Technologies (ICTs) embedded and/or deployed therein.
Scaffolding assembly constitutes a potentially dangerous and time-consuming task within the construction process. In most industrialized nations, said assembly is the process in which most of the casualties of the construction industry occur, especially in projects characterized by high complexity and restricted operation space. The repetitiveness of the profile elements and of the assembly operations may open the possibility of automating the scaffolding construction process, which is nevertheless a difficult task due to the unstructured environment of construction sites and the close collaboration of human and machine agents it implies. As a possible automation solution, the startup KEWAZO proposes a novel robotic scaffolding assembly system. The solution focuses on the development of small-sized robotic climbing modules controlled as an integrated system. This paper focuses on the development of the robotic gripper of said modular system. The gripper system is validated through static analysis and the construction of a fully functional prototype. Furthermore, the system is integrated with a voice identification/authentication and control mechanism that enables it to recognize a variety of human identities and to engage with verbal commands according to the authority and privileges assigned to each individual.
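The authority-and-privileges check that follows voice identification can be sketched as a small gating function. All names here are illustrative assumptions: the role names, command names, and privilege levels are not taken from the KEWAZO system, and a real implementation would receive the speaker identity from the voice-authentication stage.

```python
# Hypothetical sketch: gate verbal commands by the identified speaker's
# privilege level. Unknown voices receive no rights at all.
PRIVILEGES = {"site_manager": 3, "foreman": 2, "worker": 1}   # assumed roles
COMMAND_LEVEL = {"stop": 1, "grip": 2, "release": 2,          # assumed commands
                 "override_safety": 3}

def authorize(speaker: str, command: str) -> bool:
    """Execute the command only if the speaker's level meets its requirement."""
    level = PRIVILEGES.get(speaker, 0)        # unauthenticated -> level 0
    required = COMMAND_LEVEL.get(command)
    return required is not None and level >= required

print(authorize("foreman", "grip"))             # True
print(authorize("worker", "override_safety"))   # False
```

Note that a safety-critical command like "stop" is deliberately given the lowest requirement, so any recognized worker can halt the system.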
This paper presents a context-aware light-tracking and -redirecting system guided by hand-gesture recognition. It is conceived as yet another mechanism within the ongoing development of a more intuitive and technically sophisticated Ambient Intelligence / Active and Assisted Living ecosystem. The detailed system consists of individual nodes that are strategically installed across regions of a building-envelope, which enables the latter to draw or deflect direct natural light into or away from specific locations within the built-environment as requested by the user(s) via recognized hand-gestures. Each node is capable of continuously sending and receiving sensed data via ZigBee, both with one another and with microcontrollers embedded within the interior built-environment. Said microcontrollers are equipped with cameras via which four hand-gestures may be recognized. The first, or initializing, hand-gesture engages the system and enables it to recognize any of the remaining hand-gestures. The second redirects light towards the position of the detected hand-gesture, while the third redirects it away from said position. Finally, the fourth gesture turns the light-tracking and -redirecting system off.
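The four-gesture control flow above amounts to a small state machine, which can be sketched as follows. The gesture names and the class itself are illustrative assumptions; the actual node firmware is not described at this level in the abstract.

```python
# Hypothetical sketch of the four-gesture control flow: gesture 1 engages
# the system, gesture 2 draws light toward the hand position, gesture 3
# deflects it away, and gesture 4 shuts the system down.
class LightRedirector:
    def __init__(self):
        self.engaged = False
        self.target = None            # position light is currently steered to

    def on_gesture(self, gesture, position=None):
        if gesture == "initialize":   # gesture 1: engage the system
            self.engaged = True
        elif not self.engaged:
            return "ignored"          # gestures 2-4 require prior engagement
        elif gesture == "draw_light":     # gesture 2: steer toward the hand
            self.target = position
        elif gesture == "deflect_light":  # gesture 3: steer away from the hand
            if self.target == position:
                self.target = None
        elif gesture == "shutdown":       # gesture 4: turn the system off
            self.engaged = False
            self.target = None
        return "ok"

ctrl = LightRedirector()
print(ctrl.on_gesture("draw_light", (2, 3)))   # "ignored" - not yet engaged
ctrl.on_gesture("initialize")
ctrl.on_gesture("draw_light", (2, 3))
print(ctrl.target)                             # (2, 3)
```

Requiring the initializing gesture before any other reduces false positives from incidental hand movements, which matters when cameras observe occupants continuously.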
This paper details the development of an open-source eye- and gaze-tracking mechanism designed for open, scalable, and decentralized Active and Assisted Living (AAL) ecosystems built on Wireless Sensor and Actuator Networks (WSANs). Said mechanism is deliberately conceived as yet another service-feature in an ongoing implementation of an extended intelligent built-environment framework, one motivated and informed both by Information and Communication Technologies (ICTs) and by emerging Architecture, Engineering, and Construction (AEC) considerations. It is nevertheless designed as a compatible and subsumable service-feature for existing AAL frameworks of the kind characterized above. The eye- and gaze-tracking mechanism enables the user (1) to engage (i.e., open, shut, slide, turn on/off, etc.) a variety of actuable objects and systems deployed within an intelligent built-environment via sight-enabled identification, selection, and confirmation; and (2) to extract and display personal identity information from recognized familiar faces viewed by the user. The first feature is intended principally (although not exclusively) for users with limited mobility, with the intention of supporting independence with respect to the control of remotely actuable mechanisms within the built-environment. The second feature is intended to compensate for loss of memory and/or visual acuity associated principally (although not exclusively) with the natural aging process. As with previously developed service-features, the present mechanism intends to increase the quality of life of its user(s) in an affordable, intuitive, and highly intelligent manner.
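The sight-enabled identification-and-selection step can be sketched as a dwell-time rule: an actuable object is selected once the estimated gaze point remains within its region for a minimum number of frames. Everything in this sketch is an assumption for illustration: the object names, regions, dwell threshold, and helper functions are not taken from the paper's implementation.

```python
# Hypothetical sketch: select an actuable object once the gaze point dwells
# inside its region for DWELL_FRAMES consecutive frames.
DWELL_FRAMES = 5                     # assumed dwell threshold

OBJECTS = {                          # assumed actuable objects and regions
    "window_blind": (0, 0, 100, 50),     # (x0, y0, x1, y1)
    "door": (150, 0, 250, 50),
}

def hit(region, point):
    """True if the gaze point falls within the rectangular region."""
    x0, y0, x1, y1 = region
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

def select_object(gaze_points):
    """Return the first object gazed at for DWELL_FRAMES consecutive frames."""
    streaks = {name: 0 for name in OBJECTS}
    for p in gaze_points:
        for name, region in OBJECTS.items():
            streaks[name] = streaks[name] + 1 if hit(region, p) else 0
            if streaks[name] >= DWELL_FRAMES:
                return name
    return None

print(select_object([(20, 20)] * 5))   # "window_blind"
print(select_object([(20, 20)] * 3))   # None
```

The dwell requirement serves the same purpose as the confirmation step described in the abstract: a brief glance at an object does not actuate it.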