The emergence of the Internet of Things (IoT) and the Semantic Web of Things is leading to intensive cloud processing executing reasoning rules over large volumes of enriched data. However, the role of the cloud in existing approaches as a central point for both data processing and provisioning reduces scalability and introduces latency. This paper sketches a new approach for rule-based reasoning that enables distributed rule evaluation on edge nodes, reducing latency for IoT applications while avoiding total dependence on a central node. This approach is evaluated in a simulated smart building.
KEYWORDS
cloud computing, edge computing, rule-based reasoning, semantic web of things
INTRODUCTION
The Semantic Web of Things (SWoT) emerged from the interaction between the Internet of Things (IoT) and the Semantic Web (SW). It is driven by the need for interoperability at the core of the IoT, to which the expressiveness of the SW's formalisms is seen as a solution. [1] Integrating SW technologies into the IoT allows the transformation of raw data into rich, meaningful information and knowledge that can be distributed to applications, thus breaking the vertical silos of domain-specific development. In this configuration, applications both hold the business logic and consume the information produced within the network.

However, the SWoT is also challenged by the constrained and dynamic nature of the IoT, which does not comply with the traditional principles and application domains of the SW. Typical IoT deployments follow a hierarchical architecture captured in the Lower, Middle and Upper Node (LMU-N) architectural pattern, [2] a multitiered IoT architectural pattern where devices connect to cloud servers via intermediate gateways. The entry point to the network for applications is usually a powerful node, for example, a cloud server offering web services. Data collected by constrained nodes on the edge must therefore be moved upstream toward server nodes so that it can be processed and sent to the relevant applications connected to the cloud. An application's business logic can be implemented by production rules, where high-level symptoms are inferred from lower-level data, for example, inferring the high-level event "Such a room is an uncomfortable place" from luminosity and temperature observations. These rules are generally applied in the cloud, [3,4] forwarding only the results to applications in order to reduce bandwidth consumption and the applications' processing load. However, this architecture creates a bottleneck, since all the data must be concentrated in a single node, the cloud server, for processing. This leads to multiple issues.
Firstly, it is not a scalable design, and it is unfit for large deployments, [5] such as smart city scenarios. Secondly, the cost of semantic reasoning increases rapidly with the size of the knowledge base (KB), [6] so processing all rules on a growing data instance introduces a delay that can be unacceptable for time-sensitive applications (e.g., e-health, security, emergencies). Thir...
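The production-rule pattern discussed above, inferring a high-level event such as "uncomfortable place" from raw luminosity and temperature observations, can be sketched as follows. This is a minimal illustration only: the comfort thresholds, field names, and the `uncomfortable_room_rule` function are assumptions for the example, not elements of the approach proposed in this paper.

```python
# Minimal sketch of a production rule over per-room sensor observations.
# All thresholds and field names below are illustrative assumptions.

def uncomfortable_room_rule(observations):
    """Infer the high-level event 'uncomfortable' for each room whose
    latest luminosity or temperature reading falls outside comfort bounds."""
    events = []
    for room, obs in observations.items():
        too_dark = obs["luminosity_lux"] < 300   # assumed comfort threshold
        too_hot = obs["temperature_c"] > 27.0    # assumed comfort threshold
        if too_dark or too_hot:
            events.append((room, "uncomfortable"))
    return events

readings = {
    "room_101": {"luminosity_lux": 120, "temperature_c": 22.5},
    "room_102": {"luminosity_lux": 450, "temperature_c": 21.0},
}
print(uncomfortable_room_rule(readings))  # → [('room_101', 'uncomfortable')]
```

In the centralized design criticized above, such a rule runs on the cloud server over the whole data instance and only the inferred events are forwarded to applications; the approach sketched in this paper instead distributes this evaluation to edge nodes.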