Industry 4.0 is revolutionizing industrial production by bridging the physical and virtual worlds and advancing digitalization. Two essential building blocks of Industry 4.0 are digital twins (DTs) and the Internet of Things (IoT). While the IoT is about connecting resources and collecting data about the physical world, DTs are virtual representations of resources that organize and manage information and are tightly integrated with artificial intelligence, machine learning and cognitive services to further optimize and automate production. The concepts of DTs and the IoT overlap when it comes to describing, discovering and accessing resources. Currently, multiple DT and IoT standards cover these overlapping aspects, created by different organizations with different backgrounds and perspectives. With regard to interoperability, which is presumably the most important aspect of Industry 4.0, this barrier needs to be overcome by consolidating standards. The objective of this paper is to investigate current DT and IoT standards and provide insights to stimulate this consolidation. Overlapping aspects are identified, and a classification scheme is created and applied to the standards. The results are compared, aspects with high similarity or divergence are identified, and a proposal for stimulating consolidation is presented. Consensus between standards is found regarding the elements a resource should consist of and which serialization format(s) and network protocols to use. Controversial topics include which query language to use for discovery, as well as whether geo-spatial, temporal and historical data should be explicitly supported.
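To make the area of consensus concrete, the following is a minimal sketch of a resource description combining the elements most standards agree on (an identifier, human-readable metadata, typed properties and interaction endpoints), serialized as JSON, which both DT and IoT standards commonly support. The structure is loosely modeled on the W3C WoT Thing Description; the field names and identifiers here are simplified illustrative assumptions, not taken from any single standard.

```python
import json

# Hypothetical resource description for a physical asset. The field
# names (id, title, properties, forms) are illustrative assumptions
# loosely inspired by the W3C WoT Thing Description.
resource = {
    "id": "urn:example:pump-42",          # hypothetical identifier
    "title": "Coolant Pump 42",
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "Cel",
            # Endpoint where the live value can be read (hypothetical URL).
            "forms": [{"href": "https://plant.example/pump-42/temperature"}],
        }
    },
}

# JSON is one of the serialization formats the surveyed standards
# broadly agree on; round-tripping demonstrates machine readability.
serialized = json.dumps(resource, indent=2)
print(serialized)
```

A discovery service could index many such documents and answer queries over their properties; which query language to use for that step is precisely one of the open points the paper identifies.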
This paper investigates the usability of Future Internet technologies (aka “Generic Enablers of the Future Internet”) in the context of environmental applications. The paper combines the best aspects of the state of the art in environmental informatics with geospatial solutions and the scalable processing capabilities of Internet-based tools. It specifically targets the promotion of the “Environmental Observation Web” as an observation-centric paradigm for building the next generation of environmental applications. In the Environmental Observation Web, the great majority of data are considered observations. These can be generated by sensors (hardware), numerical simulations (models), and humans (human sensors). Independently of the observation's provenance and application scope, data can be represented and processed in a standardised way in order to understand environmental processes and their interdependencies. The development of cross-domain applications is then leveraged by technologies such as Cloud Computing, the Internet of Things, and Big Data Processing and Analytics. For example, “the cloud” can satisfy the peak-performance needs of applications that occasionally use large amounts of processing power, at a fraction of the price of a dedicated server farm. The paper also addresses the need for Specific Enablers that connect mainstream Future Internet capabilities with sensor and geospatial technologies. The main categories of such Specific Enablers are described, along with an overall architectural approach for developing environmental applications and exemplar use cases.
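The observation-centric idea above can be sketched as a single record shape shared by all three observation sources. The dataclass below is a simplified assumption inspired by the OGC Observations & Measurements model (procedure, observed property, feature of interest, phenomenon time, result); it is not the paper's own schema, and the identifiers are hypothetical.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# One record shape for all observations, regardless of provenance:
# a hardware sensor, a numerical model, or a human report fills in
# the same fields (simplified from the O&M conceptual model).
@dataclass
class Observation:
    procedure: str            # what produced the value (sensor, model, human)
    observed_property: str    # the phenomenon being measured
    feature_of_interest: str  # the real-world feature observed
    phenomenon_time: datetime # when the phenomenon occurred
    result: float             # the measured or estimated value

# Hypothetical example: an air-quality model output expressed in the
# same shape a physical NO2 sensor's reading would use.
obs = Observation(
    procedure="urn:example:model:air-quality-v2",
    observed_property="NO2_concentration",
    feature_of_interest="urn:example:station:vienna-01",
    phenomenon_time=datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc),
    result=41.7,
)
record = asdict(obs)
```

Because every source emits the same structure, cross-domain applications can fuse model outputs, sensor streams and citizen reports without per-source adapters.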
Sensors provide some of the basic input data for risk management of natural and man-made hazards. Here the word ‘sensors’ covers everything from remote sensing satellites, providing invaluable images of large regions, through instruments installed on the Earth's surface, to instruments situated in deep boreholes and on the sea floor, providing highly detailed point-based information from single sites. Data from such sensors are used in all stages of risk management: hazard, vulnerability and risk assessment in the pre-event phase; information to provide on-site help during the crisis phase; and data to aid recovery following an event. Because data from sensors play such an important part in improving understanding of the causes of risk, and consequently in its mitigation, considerable investment has been made in the construction and maintenance of highly sophisticated sensor networks. In spite of the ubiquitous need for information from sensor networks, the use of such data is hampered in many ways. Firstly, information about the presence and capabilities of sensor networks operating in a region is difficult to obtain due to a lack of easily available and usable meta-information. Secondly, once sensor networks have been identified, their data are often difficult to access due to a lack of interoperability between dissemination and acquisition systems. Thirdly, the transfer and processing of information from sensors is limited, again by incompatibilities between systems. The current situation therefore leads to a lack of efficiency and limited use of available data that have an important role to play in risk mitigation. In view of this situation, the European Commission (EC) is funding a number of Integrated Projects within the Sixth Framework Programme concerned with improving the accessibility of data and services for risk management.
Two of these projects, ‘Open Architecture and Spatial Data Infrastructure for Risk Management’ (ORCHESTRA, http://www.eu-orchestra.org/) and ‘Sensors Anywhere’ (SANY, http://sany-ip.eu/), are discussed in this article. These projects have developed an open, distributed information technology architecture and have implemented web services for accessing and using data emanating, for example, from sensor networks. These developments are based on existing data and service standards proposed by international organizations. The projects seek to carry the ideals of the EC directive INSPIRE (http://inspire.jrc.it), which was launched in 2001 and whose implementation began this year (2007), into the risk management domain. Thanks to the open nature of the architecture and services being developed within these projects, they can be implemented by any interested party and can be accessed by all potential users. The architecture is based around a service-oriented approach that makes use of Internet-based applications (web services) whose inputs and outputs conform to standards. The benefit of this philosophy is that it is expected to favor the emergence of an operational ma...
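As a concrete illustration of the standards-based web-service access described above, the snippet below composes an OGC Sensor Observation Service (SOS) GetObservation request using key-value-pair encoding, the style of interoperable call such service-oriented architectures rely on. The SOS parameter names (service, version, request) come from the OGC SOS specification; the endpoint URL, offering and observed property are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical SOS endpoint; any standards-conformant client can
# issue the same request against any conformant service.
endpoint = "https://sensors.example.org/sos"

# Key-value-pair encoding of an OGC SOS 2.0 GetObservation request.
# The offering and observedProperty identifiers are illustrative.
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "landslide_monitoring_network",
    "observedProperty": "ground_displacement",
}

url = endpoint + "?" + urlencode(params)
print(url)  # the full GET request a client would send
```

Because the request format is standardized rather than vendor-specific, the same client code works against any provider's sensor network, which is exactly the interoperability barrier the projects aim to remove.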