Embedded devices are gaining popularity day by day due to the expanding use of Internet of Things (IoT) applications. However, these devices have limited power and memory, so applications must be tailored to perform their tasks within the constrained resources without sacrificing accuracy. In real-time task scheduling, one of the challenging factors is modelling the input tasks intelligently so that they not only produce logically correct output within the deadline but also consume minimum CPU power. Algorithms such as Rate Monotonic and Earliest Deadline First compute the hyper-period of the input tasks for periodic repetition of the same task set on the CPU. However, when tasks are not adequately modelled, the hyper-period can grow enormously, resulting in more CPU cycles and higher power consumption. Many state-of-the-art solutions address this problem, but they restrict tasks from taking all possible period values; with the vision of Industry 4.0, where most tasks will perform critical manufacturing activities, forbidding certain periods is highly undesirable. In this paper, we present a resource-aware approach that minimises the hyper-period of input tasks based on device profiles while admitting tasks of every possible period value. The proposed work is compared with similar existing techniques, and the results indicate significant improvements in power consumption.
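The hyper-period referred to above is the least common multiple (LCM) of the task periods. A minimal Python sketch (illustrative only, not the paper's minimisation algorithm) shows how near-coprime periods inflate the hyper-period while harmonic periods keep it small:

```python
from math import gcd
from functools import reduce

def hyper_period(periods):
    """Hyper-period of a periodic task set: the least common
    multiple (LCM) of all task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# Near-coprime periods blow up the LCM, forcing a long schedule.
print(hyper_period([7, 11, 13]))   # 1001
# Harmonic periods keep the hyper-period small.
print(hyper_period([10, 20, 40]))  # 40
```

This is why adjusting poorly chosen periods (here, nudging 7/11/13 toward harmonics) can dramatically shorten the repeating schedule and, with it, CPU activity.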
Real-Time Internet of Things (RT-IoT) is a newer technology paradigm envisioned as a global inter-networking of devices and physical things that enables real-time communication over the Internet. Research in Edge Computing and 5G technology is making way for the realisation of future IoT applications. In RT-IoT, tasks are performed in real time to remotely control and automate various jobs, so missing a deadline may lead to hazardous situations; in safety-critical and mission-critical IoT systems, a missed task could even cost human lives. Consequently, these systems must be simulated, and tasks should only be deployed in a real scenario if their deadlines are guaranteed to be met. Numerous simulation tools have been proposed for traditional real-time systems built on desktop technologies, but these older tools do not adapt to the new constraints imposed by the IoT paradigm. In this paper, we design and implement a novel cloud-based architecture for the formal verification of IoT jobs and provide a simulation environment for a typical RT-IoT application in which the feasibility of real-time remote tasks is assessed. The proposed tool is, to the best of our knowledge, the first of its kind to support not only the feasibility analysis of real-time tasks but also a real environment in which different IoT tasks can be formally monitored and evaluated from anywhere. Furthermore, it acts as a centralised server for evaluating and tracking real-time scheduled jobs in a smart space. The novelty of the platform is supported by a comparative analysis against state-of-the-art solutions on attributes that are vital for open-source tools in general and IoT tools in particular.
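One standard feasibility check a tool of this kind could apply is the classic EDF utilization bound for a single CPU with implicit deadlines. This is a textbook test given here only as an illustration; the abstract does not specify which schedulability analysis the proposed verifier uses:

```python
def edf_feasible(tasks):
    """EDF schedulability on one CPU with implicit deadlines:
    a task set is feasible iff its total utilization
    U = sum(C_i / T_i) does not exceed 1.
    `tasks` is a list of (WCET, period) pairs."""
    return sum(c / t for c, t in tasks) <= 1.0

print(edf_feasible([(1, 4), (2, 6), (1, 8)]))  # True  (U ~ 0.708)
print(edf_feasible([(3, 4), (2, 6)]))          # False (U ~ 1.083)
```

A simulator can run such a check before deployment and only release tasks whose deadlines are provably met, which is the "deploy only if guaranteed" workflow the abstract describes.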
The industrial revolution is advancing, with autonomous technology and embedded Internet of Things (IoT) systems at its vanguard. In autonomous technology, real-time systems and real-time computing are of core importance: it is crucial for embedded IoT devices to respond in real time while fulfilling all constraints. Many combinations of existing approaches have been proposed, with different trade-offs between resource constraints and task-dropping rate. This highlights the significance of a task scheduler that not only handles complex task inputs but also maximizes CPU throughput. A complex task input arises when combinations of hard and soft real-time tasks, with different priorities and urgency measures, arrive at the scheduler. In this work, we propose a custom-tailored, adaptive, and intelligent scheduling algorithm for the efficient execution and management of hard and soft real-time tasks in embedded IoT systems. The proposed algorithm aims to distribute CPU resources fairly to soft real-time tasks, which may starve in overloaded cases, while keeping the execution of high-priority hard real-time tasks as its primary objective. This is achieved with the help of two intelligent measures: Urgency Measure (UM) and Failure Measure (FM). The proposed mechanism reduces the rates of missed and starved tasks by utilizing free CPU units for maximum CPU utilization and quick response times. We compare our scheme using performance metrics such as the percentage of task instances missed, the number of tasks with missed instances, and the task starvation rate to evaluate CPU utilization. We first compare our approach with multiple traditional and combined scheduling approaches, and then evaluate the effect of the intelligent modules by comparing the intelligent FEF with a non-intelligent FEF.
We also evaluate the proposed algorithm against the most commonly used hybrid scheduling scheme in embedded systems. The results show that the proposed algorithm outperforms the others, significantly reducing the task starvation rate and increasing CPU utilization.
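As an illustrative sketch only: a dispatcher that favours hard tasks by deadline and gives idle CPU units to the most-starved soft task might look like the following. The simple `deadline` and `last_run` proxies below merely stand in for the paper's UM and FM, whose exact formulas are not given in the abstract:

```python
def pick_next(hard, soft, now):
    """Hypothetical dispatcher: runnable hard real-time tasks take
    priority, ordered by earliest deadline (an urgency proxy);
    otherwise the free CPU unit goes to the soft task that has
    waited longest (a starvation proxy)."""
    runnable_hard = [t for t in hard if t["release"] <= now]
    if runnable_hard:
        return min(runnable_hard, key=lambda t: t["deadline"])
    if soft:
        return max(soft, key=lambda t: now - t["last_run"])
    return None

hard = [{"id": "h1", "release": 0, "deadline": 10},
        {"id": "h2", "release": 5, "deadline": 8}]
soft = [{"id": "s1", "last_run": 0},
        {"id": "s2", "last_run": 3}]

print(pick_next(hard, soft, now=6)["id"])  # h2 (earlier deadline)
print(pick_next([], soft, now=6)["id"])    # s1 (most starved)
```

The key property the sketch captures is the one the abstract claims: soft tasks are never scheduled ahead of runnable hard tasks, yet free CPU units are spent on the soft tasks most at risk of starving.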
Electricity, the most important form of energy and an indispensable resource, primarily for commercial and residential smart buildings, faces challenges that demand hyper-efficient consumption and production. Accurate energy consumption predictions are therefore required to manage and optimize the energy consumption of smart buildings. Many studies have taken advantage of the power and robustness of neural networks (NNs) for accurate prediction. A few studies have also used the particle swarm optimization (PSO) algorithm alongside NNs to enhance and optimize the predictions. In this work, we study prediction learning using PSO-based neural networks (PSO-NN) and propose modifications to increase prediction accuracy: re-generation-based PSO-NN (R-PSO-NN) and velocity boost-based PSO-NN (VB-PSO-NN). The performance metrics used are prediction accuracy, number of particles used, and number of epochs required. We compare the results of NN, PSO-NN, R-PSO-NN, and VB-PSO-NN on these metrics.
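A minimal, generic PSO loop of the kind underlying PSO-NN can be sketched as follows. The toy least-squares loss stands in for an NN training loss, and the hyper-parameters `w`, `c1`, `c2` are common textbook defaults, not the paper's settings (nor its R-PSO-NN/VB-PSO-NN modifications):

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: each particle is a candidate weight
    vector; velocities are pulled toward the particle's personal
    best and the swarm's global best."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for an NN training loss: fit y = 2x + 1.
random.seed(0)  # for reproducibility of this sketch
data = [(x, 2 * x + 1) for x in range(-5, 6)]
mse = lambda wts: sum((wts[0] * x + wts[1] - y) ** 2
                      for x, y in data) / len(data)
weights, err = pso_minimize(mse, dim=2)
print(weights, err)  # weights should converge near [2.0, 1.0]
```

In PSO-NN the weight vector would hold all NN weights and the loss would be the prediction error on the training set; the paper's modifications adjust how particles are re-generated and how velocities are boosted within this same loop.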