Ontology is defined as an explicit specification of a shared conceptualization. Ontologies are currently used in the semantic web, information retrieval, artificial intelligence, information systems, knowledge management, and other fields. Developing an ontology involves structural and logical complexity comparable to that of developing software artifacts; ontology building therefore requires a methodology to ensure reliability. Several methodologies have been proposed for building ontologies, but most fail to provide sufficient detail about their activities and techniques, or a defined ontology lifecycle. To build ontologies that are reliable, long lived, and continually adapted, ontology engineering (OE) should be supported by software engineering (SE). However, SE was not originally meant to support the development of artifacts such as ontologies, and there is a significant gap between the two fields in popularity and maturity. This paper aims to bridge that gap by proposing an Agile Methodology for Ontology Development (AMOD), which adopts agile principles and practices in ontology development. The AMOD framework fits the various ontology activities into the phases of the Scrum agile methodology; it has three phases: pre-game, development, and post-game. AMOD was applied to develop an ontology for software project time management. In addition, a compliance analysis of different ontology methodologies against the IEEE standard was performed. Results showed that AMOD satisfied 56% of the IEEE standard processes, a 22% improvement over the other methodologies.
The benefits of requirements traceability are well known and documented. Traceability links between requirements and code are fundamental to supporting activities across the software development process, including change management and software maintenance. These links can be obtained manually or automatically. Manual trace retrieval is time consuming, while automatic trace retrieval can be performed with tools based on information retrieval or machine learning techniques. However, a major concern with automated trace retrieval is low precision, caused primarily by term mismatches across the documents to be traced. This study proposes an approach that addresses the term mismatch problem to improve trace retrieval accuracy. The proposed approach uses clustering in the automated trace retrieval process and is evaluated experimentally against previous benchmarks. The results show that the proposed approach improves trace retrieval precision.
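The abstract above does not give the details of the clustering scheme, but the idea of using term clusters to bridge vocabulary mismatches between requirements and code can be sketched as follows. The cluster table, term lists, and similarity measure here are all hypothetical illustrations, not the authors' actual method: synonymous terms are mapped to a shared representative before computing a bag-of-words cosine similarity, so that a requirement saying "login" can still match code that says "authenticate".

```python
from collections import Counter
import math

# Hypothetical term clusters; in a real system these would be learned
# from the project corpus (e.g., by co-occurrence clustering).
CLUSTERS = {
    "login": "auth", "authenticate": "auth", "signin": "auth",
    "store": "persist", "save": "persist",
}

def normalize(tokens):
    """Replace each term with its cluster representative, if it has one."""
    return [CLUSTERS.get(t, t) for t in tokens]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

req = ["user", "shall", "login", "with", "password"]   # requirement terms
code = ["authenticate", "user", "password", "check"]   # code identifier terms

raw = cosine(req, code)                              # "login" misses "authenticate"
clustered = cosine(normalize(req), normalize(code))  # both map to "auth"
```

With cluster normalization the similarity rises because the mismatched pair now contributes to the dot product, which is the mechanism by which clustering can raise retrieval precision for candidate links ranked by similarity.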
Requirements Engineering (RE) plays an important role in the success of the software development life cycle. Because RE is the starting point of the life cycle, later changes in requirements are costly and time consuming. Failure to determine accurate requirements leads to errors in specifications and therefore to a flawed system architecture. In addition, most software development environments are characterized by user requests to change requirements. Scrum is an agile development method that has gained great attention because of its ability to deal with changing environments. This paper presents and discusses the current state of RE activities in Scrum, how Scrum can benefit from RE techniques, and future challenges in this respect.
Relative positioning is a recent approach to overcoming the limited accuracy of GPS in urban environments. Vehicle positions obtained through V2I communication are more accurate because the known locations of roadside units (RSUs) help predict measurement errors over time. The accuracy of vehicle positions depends largely on the number of RSUs, but high installation costs limit the use of this approach. It also depends on the nonlinear nature of localization, which several previous studies neglected; in those studies, errors accumulated over time because the localization problem was treated as linear. The present study proposes a cooperative localization method for vehicular networks, based on V2I communication and distance information, to improve estimates of vehicles' initial positions. The method uses virtual RSUs derived from mobility measurements to reduce installation costs and to handle fault environments. The extended Kalman filter is a well-known estimator for nonlinear problems, but it requires a good initial vehicle position vector and adaptive measurement noise. Using the proposed method, vehicles' initial positions can be estimated accurately. Experimental results confirm that the proposed method is more accurate than existing methods, giving a root mean square error of approximately 1 m, and that virtual RSUs can assist in estimating initial positions in fault environments.
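The abstract does not specify how initial positions are computed from RSU distances, but a standard building block for this kind of V2I setup is trilateration: given ranges to three anchors with known coordinates, the nonlinear range equations can be linearized by subtracting one from the others and solved in closed form. The anchor layout and coordinates below are illustrative assumptions, not the paper's experimental configuration; such an estimate could then seed an extended Kalman filter, which needs a good initial position vector.

```python
import math

def trilaterate(anchors, dists):
    """Closed-form 2D trilateration from three anchors (e.g., RSUs).

    Subtracting the first range equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system A [x, y]^T = b.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    A = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y

# Illustrative anchors at known coordinates; true vehicle position (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]  # exact ranges to (3, 4)
est = trilaterate(anchors, dists)
```

With noisy ranges the same linear system would be solved in a least-squares sense over more than three anchors, which is where additional (or virtual) RSUs improve the initial estimate.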
A code smell is a software characteristic that indicates bad symptoms in code design, causing problems related to software quality. The severity of code smells must be measured because it helps developers prioritize refactoring efforts. Several recent studies have focused on predicting design errors using different detection tools, but there is a lack of empirical studies on how to measure the severity of code smells and which learning model best detects it. To address this gap, this paper focuses on classifying the severity of code smells using several machine learning models, including regression models, multinomial models, and ordinal classification models. The Local Interpretable Model-Agnostic Explanations (LIME) algorithm is further used to explain the machine learning models' predictions and improve interpretability. In addition, the prediction rules generated by the Projective Adaptive Resonance Theory (PART) algorithm are extracted to study the effectiveness of software metrics in predicting code smells. The experimental results show that the accuracy of the severity classification model exceeds the baseline, and that the rank correlation between predicted and actual severity reaches 0.92–0.97 as measured by Spearman's correlation.
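The reported 0.92–0.97 figures are Spearman rank correlations between predicted and actual severity. The metric compares the orderings of the two lists rather than their raw values, which is what matters when severity scores drive refactoring priority. The sketch below implements the textbook formula (assuming no tied values, for simplicity); the toy severity scores are invented for illustration and are not the paper's data.

```python
def spearman(actual, predicted):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).

    Assumes no tied values, so simple ordinal ranks suffice.
    """
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    ra, rp = ranks(actual), ranks(predicted)
    n = len(actual)
    d2 = sum((a - b) ** 2 for a, b in zip(ra, rp))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical severity scores: 1 = minor ... 4 = critical
actual_severity = [1, 2, 3, 4]
predicted_severity = [1, 2, 4, 3]  # model swaps the two most severe smells
rho = spearman(actual_severity, predicted_severity)
```

A single swap among four items already drops the correlation to 0.8, so values of 0.92–0.97 indicate that the predicted severity ordering tracks the actual one very closely.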