Pervasive computing systems can be modeled effectively as populations of interacting autonomous components. The key challenge to realizing such models is in getting separately-specified and -developed sub-systems to discover and interoperate with each other in an open and extensible way, supported by appropriate middleware services. In this paper, we argue that nature-inspired coordination models offer a promising way of addressing this challenge. We first frame the various dimensions along which nature-inspired coordination models can be defined, and survey the most relevant proposals in the area. We describe the nature-inspired coordination model developed within the SAPERE project as a synthesis of existing approaches, and show how it can effectively support the multifold requirements of modern and emerging pervasive services. We conclude by identifying what we think are the open research challenges in this area, and point to some research directions that we believe are promising.
Social networks are perhaps the purest example of "Web 2.0" services, and offer a sophisticated tool for accessing the preferences and properties of individuals and groups. Thus, they potentially allow up-to-date, richly annotated contextual data to be acquired as a side effect of users' everyday use of the services. In this paper, we explore how such "social sensing" could be integrated into pervasive systems. We frame and survey the possible approaches to such an integration, and eventually discuss the open issues and challenges facing researchers.
Recognising human activities from sensors embedded in an environment or worn on bodies is an important and challenging research topic in pervasive computing. Existing work on activity recognition is mainly concerned with identifying single-user sequential activities from well-scripted or pre-segmented sequences of sensor events. However, a real-world environment often contains multiple users, with each performing activities simultaneously, in their own way and with no explicit instructions to follow. Recognising multi-user concurrent activities is challenging, but essential for designing applications for real environments. This paper presents a novel Knowledge-driven approach for Concurrent Activity Recognition (KCAR). Within KCAR, we explore the semantics underlying each sensor event and use semantic dissimilarity to segment a continuous sensor sequence into fragments, each of which corresponds to one ongoing activity. We exploit the Pyramid Match Kernel, whose strength lies in approximate matching on hierarchical concepts, to recognise activities with varying-grained constraints from a potentially noisy sensor sequence. We conduct an empirical evaluation on a large-scale real-world data set that was collected over one year and consists of 2.8 million sensor events. Our results demonstrate that KCAR achieves an average recognition accuracy of 91%.
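The Pyramid Match Kernel compares two sets by intersecting their histograms at increasingly coarse levels of a hierarchy, discounting matches that only appear at coarser levels. A minimal sketch of the idea in Python (the concept hierarchy and sensor names below are invented for illustration, not taken from the KCAR data set):

```python
def histogram_intersection(h1, h2):
    """Number of matched items between two histograms (dicts of counts)."""
    return sum(min(h1.get(k, 0), h2.get(k, 0)) for k in set(h1) | set(h2))

def pyramid_match(levels_x, levels_y):
    """Pyramid match score over histograms ordered finest -> coarsest.

    Matches found only at coarser levels are discounted by 1/2^l.
    """
    score = histogram_intersection(levels_x[0], levels_y[0])
    prev = score
    for l in range(1, len(levels_x)):
        inter = histogram_intersection(levels_x[l], levels_y[l])
        score += (inter - prev) / (2 ** l)  # weight new, coarser matches less
        prev = inter
    return score

# Two sensor-event histograms over a toy concept hierarchy:
# level 0 = concrete sensors, level 1 = their common parent "KitchenAppliance".
x = [{"kettle": 1, "fridge": 1}, {"KitchenAppliance": 2}]
y = [{"kettle": 1, "stove": 1}, {"KitchenAppliance": 2}]
print(pyramid_match(x, x))  # 2.0 (perfect self-match)
print(pyramid_match(x, y))  # 1.5 (kettle matches exactly; fridge/stove match only at the coarser level)
```

The discounting is what makes the kernel tolerant of noise: two sequences that differ in concrete sensors but share abstract concepts still receive partial credit.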
Here we present the overall objectives and approach of the SAPERE ("Self-aware Pervasive Service Ecosystems") project, focussed on the development of a highly innovative, nature-inspired framework suited to the decentralized deployment, execution, and management of self-aware and adaptive pervasive services in future network scenarios.
With a rising ageing population, smart home technologies have been demonstrated as a promising paradigm to enable technology-driven healthcare delivery. Smart home technologies, composed of advanced sensing, computing, and communication technologies, offer an unprecedented opportunity to keep track of behaviours and activities of the elderly and provide context-aware services that enable the elderly to remain active and independent in their own homes. However, experiments with developed prototypes demonstrate that abnormal sensor events hamper the correct identification of critical (and potentially life-threatening) situations, and that existing learning, estimation, and time-based approaches to situation recognition are inaccurate and inflexible when applied to multiple people sharing a living space. We propose a novel technique, called CLEAN, that integrates the semantics of sensor readings with statistical outlier detection. We evaluate the technique against four real-world datasets across different environments, including datasets with multiple residents. The results show that CLEAN can successfully detect sensor anomalies and improve activity recognition accuracy.
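CLEAN couples the semantics of sensor readings with statistical outlier detection; the statistical half can be illustrated with a simple z-score filter over sensor firing durations. This is only a sketch of the general technique, with invented data and threshold, not the method as published:

```python
from statistics import mean, stdev

def flag_outliers(values, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Durations (seconds) a motion sensor stayed active; the 60 s reading is anomalous.
durations = [5, 6, 5, 7, 6, 5, 60]
print(flag_outliers(durations))  # [6]
```

A purely statistical filter cannot tell a faulty sensor from a genuinely unusual activity; combining it with the semantics of what the sensor observes, as CLEAN does, is what disambiguates the two cases.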
Location is a core concept in most pervasive computing systems. Beyond simple uses such as pinpointing an individual's position or identifying a region's occupants, location is a key index for richer querying of an individual's or environment's context. Although at first glance a simple concept, location information's representation has many forms and subtleties, each suited to particular application classes [1]. To provide application developers with easy access to location information, we must support different positioning systems with varying data formats as well as fusion algorithms to estimate position from multiple readings. We also need a data access approach that hides this complexity and heterogeneity from the developer. This problem has no general solution, necessitating specific frameworks for working with specific kinds of data. To meet the needs of location-based applications, we've developed lightweight space and sensing models and a set of extensible components that support customization and emerging technologies. The space model supports a range of geometric and relative-spatial-positioning descriptions found in the literature. The sensing model abstracts over various types of positioning systems and incorporates the capture of uncertainty, serving as a foundation on which developers can apply sensor-fusion techniques. Our programming framework, LOC8, sits atop the space and sensing models, providing a rich API for querying location data and exploring its many representations. Requirements: A location model should support location data representations from different positioning technologies and extensible metadata descriptions. Many well-known systems can report an entity's coordinate or symbolic position, from GPS and Active Badge to more recent systems such as Ubisense and the fingerprint-based positioning system [2]. Beyond these are less conventional and less expensive methods of reporting an entity's location.
For example, a Bluetooth spotter, which can detect the presence of mobile phones, PDAs, and laptops, might position a device within 10 meters of a known point. We can use this information to infer the device owner's position. Environments frequently contain multiple positioning systems, so translating readings into a common language of location-centric primitives is important. Because no positioning technology claims to provide perfect accuracy, this language must also provide quality measures to support sensor-fusion techniques for uncertain data. Quantifying uncertainty associated with positioning systems has proved a hot topic in recent years [3,4]. A space model provides a set of primitives that allow descriptions of regions of space and the relationships between them. Such primitives must support the mapping of positioning systems' different ...
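The space-model primitives described above can be pictured as a symbolic containment hierarchy over which applications pose queries. The sketch below is a hypothetical illustration of that idea, not the actual LOC8 API; the region names and the (region, confidence) reading format are invented:

```python
class Region:
    """Symbolic region in a containment hierarchy (room inside floor inside building)."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def contains(self, other):
        """True if `other` lies inside this region, directly or transitively."""
        r = other
        while r is not None:
            if r is self:
                return True
            r = r.parent
        return False

building = Region("Building A")
floor2 = Region("Floor 2", parent=building)
lab = Region("Lab 2.14", parent=floor2)

# A Bluetooth spotter might only place a device in a room, with some confidence.
reading = (lab, 0.8)  # (region, confidence)
print(building.contains(reading[0]))  # True
print(lab.contains(building))         # False
```

Carrying the confidence value alongside the region is the hook on which sensor-fusion techniques for uncertain data can later be hung.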
Pervasive and sensor-driven systems are by nature open and extensible, both in terms of input and the tasks they are required to perform. Data streams coming from sensors are inherently noisy, imprecise and inaccurate, with differing sampling rates and complex correlations with each other. These characteristics pose a significant challenge for traditional approaches to storing, representing, exchanging, manipulating and programming with sensor data. Semantic Web technologies provide a uniform framework for capturing these properties. Offering powerful representation facilities and reasoning techniques, these technologies are rapidly gaining attention for addressing a range of issues such as data and knowledge modelling, querying, reasoning, service discovery, privacy and provenance. This article reviews the application of the Semantic Web to pervasive and sensor-driven systems with a focus on information modelling and reasoning along with streaming data and uncertainty handling. The strengths and weaknesses of current and projected approaches are analysed and a roadmap is derived for using the Semantic Web as a platform on which open, standards-based, pervasive, adaptive and sensor-driven systems can be deployed.
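In the Semantic Web approach, sensor data and its context are represented as subject-predicate-object triples and queried by pattern matching. A minimal pure-Python illustration of that data model follows; real systems would use RDF serialisations and SPARQL, and all identifiers below are invented:

```python
# Sensor observations and context as (subject, predicate, object) triples.
triples = [
    ("sensor42", "rdf:type", "TemperatureSensor"),
    ("sensor42", "locatedIn", "kitchen"),
    ("sensor42", "hasReading", "21.5"),
    ("sensor7",  "rdf:type", "MotionSensor"),
    ("sensor7",  "locatedIn", "kitchen"),
]

def query(pattern):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# All sensors located in the kitchen, regardless of their type:
print([s for s, _, _ in query((None, "locatedIn", "kitchen"))])
# ['sensor42', 'sensor7']
```

Because every fact uses the same triple shape, new sensor types and new kinds of context can be added without changing the schema, which is precisely the openness the article argues pervasive systems need.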
Recognising human activities is a problem characteristic of a wider class of systems in which algorithms interpret multi-modal sensor data to extract semantically meaningful classifications. Machine learning techniques have demonstrated progress, but the lack of underlying formal semantics impedes the potential for sharing and re-using classifications across systems. We present a top-level ontology model that facilitates the capture of domain knowledge. This model serves as a conceptual backbone when designing ontologies, linking the meaning implicit in elementary information to higher-level information that is of interest to applications. In this way it provides the common semantics for information at different levels of granularity that supports the communication, re-use and sharing of ontologies between systems.
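A top-level ontology links the meaning of elementary sensor events to the higher-level activities applications care about through subsumption: an event classified under a fine-grained concept is automatically an instance of every ancestor concept. A toy sketch, with invented class names:

```python
# Child -> parent edges of a toy activity ontology.
SUBCLASS_OF = {
    "KettleOnEvent": "BoilWater",
    "BoilWater": "PrepareHotDrink",
    "PrepareHotDrink": "KitchenActivity",
    "KitchenActivity": "DomesticActivity",
}

def is_a(concept, ancestor):
    """True if `concept` is subsumed by `ancestor` in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

print(is_a("KettleOnEvent", "KitchenActivity"))  # True
print(is_a("KettleOnEvent", "Sleeping"))         # False
```

Two systems that disagree on fine-grained classes but share the top-level concepts can still exchange and re-use classifications at the level of the common ancestors, which is the sharing argument made above.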