Today, digitalization decisively penetrates all aspects of modern society. One of the key enablers for keeping this process secure is authentication. It covers many areas of a hyper-connected world, including online payments, communications, and access rights management. This work sheds light on the evolution of authentication systems towards Multi-Factor Authentication (MFA), starting from Single-Factor Authentication (SFA) and passing through Two-Factor Authentication (2FA). In particular, MFA is expected to be utilized for human-to-everything interactions by enabling fast, user-friendly, and reliable authentication when accessing a service. This paper surveys the already available and emerging sensors (factor providers) that allow for authenticating a user with the system directly or by involving the cloud. The corresponding challenges, from both the user and the service provider perspective, are also reviewed. An MFA system based on a reversed Lagrange polynomial within Shamir's Secret Sharing (SSS) scheme is further proposed to enable more flexible authentication. This solution covers the cases of authenticating the user even if some of the factors are mismatched or absent. Our framework allows for qualifying the missing factors by authenticating the user without disclosing sensitive biometric data to the verification entity. Finally, a vision of future trends in MFA is discussed.
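The threshold property that makes SSS attractive for flexible MFA (any k of n factors suffice to authenticate) can be illustrated with a minimal sketch. This is not the paper's proposed construction; the prime modulus, parameters, and function names below are illustrative assumptions only.

```python
# Minimal sketch of Shamir's Secret Sharing over a prime field.
# Any k of the n shares reconstruct the secret; fewer reveal nothing.
# PRIME, k, n, and the function names are illustrative choices.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for demo secrets

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares with reconstruction threshold k."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # evaluate via Horner's rule mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

In an MFA setting, each authentication factor would contribute one share, so a user presenting any k valid factors can still be authenticated even when the remaining factors are absent.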
One of the more recent avenues towards more flexible installation and execution is the transition from a monolithic architecture to a microservice architecture. In such an architecture, where microservices can be more liberally updated, relocated, and replaced, building liquid software also becomes simpler, as adapting and deploying code is easier than in a monolithic architecture where almost everything is interconnected. In this paper, we study this type of transition. The objective is to identify the reasons why companies decide to make such a transition and the challenges they may face during it. Our method is a survey based on different publications and case studies about architectural transitions from monolithic architectures to microservices. Our findings reveal that the typical reasons for moving towards a microservice architecture are complexity, scalability, and code ownership. The challenges, on the other hand, can be separated into architectural challenges and organizational challenges. We conclude that once a software company grows large enough and starts facing problems related to the size of its codebase, microservices can be a good way to handle that complexity and size. Even though the transition brings its own challenges, these can be easier to solve than the challenges that a monolithic architecture presents to the company.
A system design where cyber-physical applications are securely coordinated from the cloud may simplify the development process. However, all private data are then pushed to these remote 'swamps', and human users lose actual control compared to when the applications are executed directly on their devices. At the same time, computing at the network edge still lacks support for such straightforward multi-device development, which is essential for a wide range of dynamic cyber-physical services. In this work, we propose a novel programming model and contribute the associated secure connectivity framework for leveraging safe coordinated device proximity as an additional degree of freedom between the remote cloud and the safety-critical network edge, especially under uncertain environmental constraints.
Software is a key enabling technology (KET) as digitalization cuts across future energy systems, spanning production sites, distribution networks, and consumers, particularly in electricity smart grids. In this paper, we systematically identify which particular software competencies are required in future energy systems, focusing on electricity smart grids. The realizations of these can then be roadmapped to specific software capabilities of the different future 'software houses' across the networks. Our instrumental method is software competence development scenario path construction with environmental scanning of the related system elements. The vision of future software-enabled smart energy systems with software houses is mapped against the already progressing scenarios of energy system transitions on the one hand, coupled with the technology foresight of software on the other. Grounded in the Smart Grid Reference Architecture Model (SGAM), the paper tabulates the distinguished software competencies and attributes them to the different parties involved in future smart energy systems, including customers/consumers (Internet of People, IoP). The resulting designations can then be used to recognize and measure the necessary software competencies (e.g., fog computing) so that they can be developed in-house or, for instance, acquired by partnering with software companies, depending on the desired future. Software-intensive systems development competence becomes one of the key success factors for such cyber-physical-social systems (CPSS). Further futures research work is charted with the Futures Map frame. This paper contributes preliminarily toward that by identifying pictures of software-enabled futures and the connecting software competence-based scenario paths.
The availability of open source assets for almost all imaginable domains has led the software industry to opportunistic design—an approach in which people develop new software systems in an ad hoc fashion by reusing and combining components that were not designed to be used together. In this paper we investigate this emerging approach. We demonstrate the approach with an industrial example in which Node.js modules and various subsystems are used in an opportunistic way. Furthermore, to study opportunistic reuse as a phenomenon, we present the results of three contextual interviews and a survey with reuse practitioners to understand to what extent opportunistic reuse offers improvements over traditional systematic reuse approaches.
The Internet has traditionally been a device-oriented architecture in which devices with IP addresses are first-class citizens, able to serve and consume content or services, and their owners take part in the interaction only through those devices. The Internet of People (IoP) is a recent paradigm in which devices become proxies of their users and can act on their behalf. To realize IoP, new policies and rules for how devices can take actions are required. The role of context information grows as devices act autonomously based on the environment and the existing social relationships between their owners. In addition, the social profiles of device owners determine, e.g., how altruistic or resource-conserving they are in collaborative computing scenarios. In this paper, we focus on community formation in IoP, a prerequisite for enabling collaborative scenarios, discuss the main challenges, and propose potential solutions.