Gamification is the use of game elements in domains other than games. It is often suggested for difficult activities because it enhances users’ engagement and motivation. Owing to these benefits, gamification has also been proposed in educational environments to improve students’ performance, engagement, and satisfaction. Computer science is a demanding field of study in higher education and thus stands to benefit from the well-documented advantages of gamification. This research develops an empirical study to evaluate the effectiveness of gamification in teaching computer science in higher education. Along with learning outcomes, the effect of group size on students’ satisfaction is also measured. Furthermore, the impact of gamification is analyzed throughout a semester to assess its effectiveness as a long-term learning technique. The analysis, covering both learning outcomes and student satisfaction, suggests that gamification is an effective tool for teaching demanding courses at the higher-education level; however, group size should be taken into account to achieve an optimal classroom size and a better learning experience.
With recent advancements in the electronics world, hardware is becoming smaller, cheaper, and more powerful, while the software industry is moving towards service-oriented integration technologies. Hence, service-oriented architecture (SOA) is becoming a popular platform for developing applications for distributed embedded real-time systems (DERTS). With the rapidly increasing diversity of services on the Internet, new demands have arisen concerning the efficient discovery of heterogeneous device services in the dynamic environment of DERTS. Context-awareness principles have been widely studied for DERTS and can therefore serve as an additional set of service-selection criteria. However, to use context information effectively, it must be represented unambiguously, and the dynamic nature of embedded and real-time systems must be taken into account. To address these challenges, the authors present a service discovery framework for DERTS that uses a context-aware ontology of embedded and real-time systems together with a semantic matching algorithm to facilitate the discovery of device services in such environments. The framework also considers the priorities associated with the requirements posed by the requester during service discovery.
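The priority-aware matching described above can be illustrated with a minimal sketch. This is not the authors’ actual semantic matching algorithm; the attribute names, priority weights, and scoring rule are hypothetical, chosen only to show how requester priorities might rank candidate device services.

```python
# Hypothetical sketch of priority-weighted service matching: each
# requester requirement carries a priority weight, and a candidate
# service's score is the weighted fraction of requirements its
# advertised context satisfies.

def match_score(requirements, service_context):
    """requirements: {attr: (desired_value, priority_weight)}
    service_context: {attr: value} advertised by the device service."""
    total = sum(weight for _, weight in requirements.values())
    if total == 0:
        return 0.0
    matched = sum(
        weight for attr, (desired, weight) in requirements.items()
        if service_context.get(attr) == desired
    )
    return matched / total


def discover(requirements, services):
    """Rank candidate services by descending match score."""
    return sorted(
        services,
        key=lambda svc: match_score(requirements, svc["context"]),
        reverse=True,
    )


# Example: the requester weights protocol compatibility above latency.
requirements = {"protocol": ("zigbee", 3), "latency": ("low", 1)}
services = [
    {"name": "sensor-a", "context": {"protocol": "zigbee", "latency": "high"}},
    {"name": "sensor-b", "context": {"protocol": "zigbee", "latency": "low"}},
]
ranked = discover(requirements, services)
```

A real framework would replace exact value equality with ontology-based semantic similarity, but the weighted ranking structure stays the same.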
Heterogeneous devices in a Cyber-Physical System are connected with each other through wireless links. These devices face resource constraints such as battery, bandwidth, memory, and computing power. Moreover, the massive interconnection of these devices results in network latency and reduced speed. Edge computing offers a solution: devices transmit preprocessed, actionable data, reducing data traffic and improving speed. However, providing the same level of security to every piece of information is not feasible given the limited resources, and not all data generated by Internet of Things (IoT) devices require a high level of security. Context-awareness principles can be employed to select an optimal algorithm based on device specifications and the required confidentiality level of the information. For context-awareness, it is essential to consider both the dynamic requirements of data confidentiality and the device’s available resources. This paper presents a context-aware encryption protocol suite that selects the optimal encryption algorithm according to device specifications and the level of data confidentiality. The results show that, by employing the proposed context-aware encryption protocol suite, devices reduced memory consumption by 79%, battery consumption by 56%, and execution time by 68%.
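The selection step can be sketched as a small lookup. This is an illustrative toy, not the paper’s actual protocol suite: the algorithm table, cost values, and resource thresholds are invented for the example.

```python
# Illustrative sketch: pick an encryption algorithm from a table keyed
# by required confidentiality level, falling back to a cheaper cipher
# when the device's remaining resources are constrained.

# Hypothetical table of (algorithm name, relative resource cost).
ALGORITHMS = {
    "high":   [("AES-256", 3), ("AES-128", 2)],
    "medium": [("AES-128", 2), ("ChaCha20", 2)],
    "low":    [("Speck", 1)],   # lightweight cipher for low-value data
}


def select_algorithm(confidentiality, battery_pct, free_mem_kb):
    """Choose the strongest algorithm the device can currently afford."""
    candidates = ALGORITHMS[confidentiality]
    # Crude resource budget: a well-resourced device can afford cost 3,
    # a constrained one only cost 2 (thresholds are illustrative).
    budget = 3 if battery_pct > 50 and free_mem_kb > 512 else 2
    for name, cost in candidates:
        if cost <= budget:
            return name
    return candidates[-1][0]   # last resort: cheapest listed option
```

A deployed suite would consult richer context (CPU class, data lifetime, network conditions), but the pattern of mapping (confidentiality level, device state) to a cipher choice is the same.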
Distributed Embedded Real-Time Systems (DERTS) consist of hundreds of interconnected devices, typically small and wirelessly connected, designed to operate for long periods of time. The massive interconnection of devices and the use of heterogeneous languages, operating platforms, and data standards make DERTS complex. In addition, DERTS development is trending away from centralized, high-cost products towards lower-cost, high-volume products. In this regard, Service-Oriented Architecture (SOA) is a natural fit for DERTS development, because SOA enables different devices to exchange data regardless of this underlying complexity. Moreover, context-awareness, which has been widely studied for DERTS, also plays an important role in effective communication among devices. Thus, context-aware ontologies are a promising way to build service-based DERTS while managing their complexity. In this paper, we develop a context-aware ontology for DERTS called ConOntDERTS. To evaluate ConOntDERTS, we used two methods: a criteria-based ontology evaluation, and a survey showing that the results produced by ConOntDERTS closely match human perception. The evaluation confirms the consistency and feasibility of our ontology, and the statistical test results show that ConOntDERTS produces results consistent with human perception.
In healthcare, the analysis of patients’ activities is an important source of information for managing their illnesses well and providing better services. The performance of most human activity recognition (HAR) systems depends heavily on the recognition module/stage, yet the learning methods used in that stage have seen little improvement. In this study, we propose using hidden conditional random fields (HCRFs) for the human activity recognition problem. Moreover, we contend that the existing HCRF model is limited by its independence assumptions, which may reduce classification accuracy. We therefore employ a new algorithm that relaxes these assumptions, allowing our model to use full-covariance distributions. We also show that our method has much lower computational complexity than existing methods. For the experiments, we used four publicly available standard datasets and a 10-fold cross-validation scheme to train, assess, and compare the proposed model against the conditional learning method, the hidden Markov model (HMM), and the existing HCRF model, which is restricted to diagonal-covariance Gaussian distributions. The experiments show that the proposed model achieves a substantial improvement in classification accuracy (p value ≤ 0.2).
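The modeling difference at the heart of this abstract, full-covariance versus diagonal-covariance Gaussians, can be shown with a minimal numerical illustration. This is not the authors’ HCRF training code; the means, covariance values, and test point are invented to show that a full-covariance model can exploit correlation between feature dimensions that a diagonal model must ignore.

```python
# Minimal illustration: log-density of a 2-D Gaussian under a full
# covariance matrix versus its diagonal approximation.
import math


def gauss_loglik(x, mean, cov):
    """Log-density of a 2-D Gaussian with (possibly full) covariance."""
    a, b = x[0] - mean[0], x[1] - mean[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    # Closed-form inverse of a 2x2 matrix, then the quadratic form
    # (x - m)^T C^{-1} (x - m).
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    quad = (a * (inv[0][0] * a + inv[0][1] * b)
            + b * (inv[1][0] * a + inv[1][1] * b))
    return -0.5 * (quad + math.log(det) + 2 * math.log(2 * math.pi))


# Two strongly correlated features; the test point lies on the
# correlation axis, so the full model rates it far more likely than
# the diagonal approximation does.
point = (1.0, 1.0)
full = gauss_loglik(point, (0.0, 0.0), [[1.0, 0.9], [0.9, 1.0]])
diag = gauss_loglik(point, (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])
```

In an HCRF, such densities enter the hidden-state potentials; relaxing the independence (diagonal) assumption lets correlated sensor features contribute jointly to the state score.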
Outlier detection in data streams is a significant task in data mining that targets the discovery of anomalous elements arriving at an unprecedented rate. The fast arrival of data demands fast computation within the shortest possible period and with minimal memory usage. Detecting distance-based outliers in such a scenario is even more complicated. Existing techniques, including the two best-known methods, Micro-Cluster Outlier Detection (MCOD) and Thresh_LEAP, offer partial solutions to these challenges; combining the strengths of both can substantially improve on either method alone. Therefore, in this paper we propose Micro-Cluster with Minimal Probing (MCMP), a hybrid approach that combines the strengths of MCOD and Thresh_LEAP. We offer a new distance-based outlier detection technique that minimizes the computational cost of detecting distance-based outliers effectively. MCMP comprises two approaches: first, we adopt micro-clusters to mitigate the range-query search; then, to deal with objects outside the micro-clusters, we introduce the concept of differentiating between strong and trivial inliers. The proposed method improves computational speed and memory consumption while maintaining outlier detection accuracy. Our experiments are conducted on both real-world and synthetic datasets. Varying the window size (w), the neighbor-count threshold (k), and the distance threshold (R), we observe that our method outperforms the state-of-the-art methods in both CPU time and memory consumption on the majority of the datasets.
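The distance-based outlier definition that the parameters w, k, and R refer to can be sketched in a few lines. This toy is not the MCMP implementation (it does a naive pairwise scan with no micro-clusters or minimal probing); it only makes the definition concrete for a one-dimensional window of values.

```python
# Toy illustration of the distance-based outlier definition: within a
# window of w points, an object is an outlier if it has fewer than k
# neighbors within distance R.

def outliers(window, k, R):
    """Return points in `window` with fewer than k neighbors within R.

    Naive O(w^2) scan; MCOD, Thresh_LEAP, and MCMP exist precisely to
    avoid this full pairwise comparison.
    """
    result = []
    for i, p in enumerate(window):
        neighbor_count = sum(
            1 for j, q in enumerate(window)
            if i != j and abs(p - q) <= R
        )
        if neighbor_count < k:
            result.append(p)
    return result


# Three clustered readings and one far-away value.
window = [1.0, 1.1, 1.2, 9.0]
flagged = outliers(window, k=2, R=0.5)
```

Micro-cluster approaches shortcut the inner loop: any point inside a cluster of at least k+1 points within radius R/2 is an inlier by construction and needs no range query at all.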