There is increasing interest in deploying building-scale, general-purpose, and high-fidelity sensing to drive emerging smart building applications. However, the real-world deployment of such systems is challenging due to the lack of system and architectural support. Most existing sensing systems are purpose-built, consisting of hardware that senses a limited set of environmental facets, typically at low fidelity and for short-term deployment. Furthermore, prior systems with high-fidelity sensing and machine learning fail to scale effectively and offer few, if any, primitives for privacy and security. For these reasons, IoT deployments in buildings are generally short-lived or limited to proofs of concept. We present the design of Mites, a scalable end-to-end hardware-software system for supporting and managing distributed general-purpose sensors in buildings. Our design includes robust primitives for privacy and security, essential features for scalable data management, and machine learning to support diverse applications in buildings. We deployed our Mites system and 314 Mites devices in Tata Consultancy Services (TCS) Hall at Carnegie Mellon University (CMU), a fully occupied, five-story university building. We comprehensively evaluate our system with a series of microbenchmarks and end-to-end experiments to show how we achieved our stated design goals. We include five proof-of-concept applications to demonstrate the extensibility of the Mites system to support compelling IoT applications. Finally, we discuss the real-world challenges we faced and the lessons we learned over the five-year journey of our stack's iterative design, development, and deployment.
Motion capture technologies reconstruct human movements and have wide-ranging applications. Mainstream research on motion capture can be divided into vision-based methods and inertial measurement unit (IMU)-based methods. The vision-based methods capture complex 3D geometrical deformations with high accuracy, but they rely on expensive optical equipment and suffer from the line-of-sight occlusion problem. IMU-based methods are lightweight but struggle to capture fine-grained 3D deformations. In this work, we present a configurable self-sensing IMU sensor network to bridge the gap between the vision-based and IMU-based methods. To achieve this, we propose a novel kinematic chain model based on the four-bar linkage to describe the minimum deformation process of 3D deformations. We also introduce three geometric priors, obtained from the initial shape, material properties, and motion features, to assist the kinematic chain model in reconstructing deformations and to overcome the data sparsity problem. Additionally, to further enhance the accuracy of deformation capture, we propose a fabrication method to customize 3D sensor networks for different objects. We introduce origami-inspired thinking to achieve the customization process, which constructs 3D sensor networks through a 3D-2D-3D digital-physical transition. The experimental results demonstrate that our method achieves performance comparable to state-of-the-art methods.
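The abstract above does not give the details of its kinematic chain model, but the four-bar linkage it builds on has a standard closed-form position solution. As a minimal sketch (all link lengths, the function name, and the coordinate frame here are hypothetical, not taken from the paper): with ground joints O2 = (0, 0) and O4 = (d, 0), an input link of length a at angle theta2, a coupler of length b, and an output link of length c, the free joint B is found by intersecting two circles.

```python
import math

def fourbar_coupler_joint(a, b, c, d, theta2, open_config=True):
    """Position solution of a planar four-bar linkage (illustrative sketch).

    Ground joints: O2 = (0, 0) and O4 = (d, 0).
    a: input link length, b: coupler length, c: output link length,
    d: ground link length, theta2: input angle in radians.
    Returns the coupler joint B = (Bx, By); raises ValueError if the
    loop cannot close at this input angle.
    """
    # Joint A is fixed by the input angle.
    Ax, Ay = a * math.cos(theta2), a * math.sin(theta2)
    # B lies on the circle of radius b about A and radius c about O4.
    dx, dy = d - Ax, -Ay
    r = math.hypot(dx, dy)  # distance from A to O4
    if r > b + c or r < abs(b - c):
        raise ValueError("linkage cannot close at this input angle")
    # Distance m from A to the chord midpoint along A->O4, half-chord h.
    m = (b * b - c * c + r * r) / (2.0 * r)
    h = math.sqrt(max(b * b - m * m, 0.0))
    mx, my = Ax + m * dx / r, Ay + m * dy / r
    # Two assembly branches; pick one via open_config.
    sign = 1.0 if open_config else -1.0
    return mx - sign * h * dy / r, my + sign * h * dx / r
```

Each such solve yields one rigid segment's pose; chaining many short four-bar elements is one way a kinematic chain can approximate a continuous 3D deformation from sparse IMU angle readings.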
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.