The world is transforming rapidly under current innovations. The Internet has become a basic requirement for everyone, and the Web is utilized in every field. With the rapid increase in social network applications, people are using these platforms to voice their opinions on daily issues. Gathering and analyzing people's reactions toward buying a product, public services, and so on is vital. Sentiment analysis (or opinion mining) is a common natural language processing task that aims to discover the sentiments behind opinions in texts on varying subjects. In recent years, researchers in the field of sentiment analysis have been concerned with analyzing opinions on different topics such as movies, commercial products, and daily societal issues. Twitter is an enormously popular microblog on which users may voice their opinions. Opinion analysis of Twitter data is a field that has received much attention over the last decade and involves dissecting "tweets" (comments) and the content of these expressions. As such, this paper explores the various sentiment analysis techniques applied to Twitter data and their outcomes.
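To illustrate the basic idea behind sentiment analysis of tweets, a minimal lexicon-based scorer can be sketched in a few lines. The word lists below are hypothetical toy examples; practical systems use trained models or rich lexicons such as VADER or SentiWordNet.

```python
# Minimal lexicon-based sentiment scorer (illustrative sketch only).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad"}

def sentiment(tweet: str) -> str:
    """Classify a tweet as positive, negative, or neutral by counting
    how many of its words appear in each sentiment lexicon."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))    # positive
print(sentiment("terrible service, I hate it"))  # negative
```

Real Twitter sentiment systems surveyed in the literature go far beyond this, handling negation, emojis, hashtags, and sarcasm with machine learning or deep learning models.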
Highlights
- Convalescent plasma is an adjunct treatment in moderate and severe COVID-19 disease.
- Clinical improvement is noticed in COVID-19 disease after convalescent plasma treatment.
- Mortality is significantly reduced after COVID-19 convalescent plasma treatment.
Life-saving decisions in vehicular ad hoc networks (VANETs) depend on the availability of highly accurate, up-to-date, and reliable data exchanged by neighboring vehicles. However, the spreading of inaccurate, unreliable, and false data by intruders creates traffic illusions that may cause loss of lives and assets. Although several misbehavior detection solutions have been proposed to address these issues, they lack adequate representation and adaptability to the vehicular context. The use of predefined static thresholds and the lack of comprehensive context representation have rendered existing solutions limited to specific scenarios and attack types, which impedes their generalizability. This paper addresses these limitations by proposing an ensemble-based hybrid context-aware misbehavior detection system (EHCA-MDS) model. EHCA-MDS was developed in four phases, as follows. First, static thresholds were replaced by dynamic ones created on the fly by analyzing the spatial and temporal properties of the mobility information collected from neighboring vehicles; Kalman filter-based algorithms were used to collect this mobility information. Second, three sets of features were derived, each offering a different perspective, namely data consistency, data plausibility, and vehicle behavior. Third, these features were used to construct a dynamic context reference using the Hampel filter, and the Hampel-based z-score was used to evaluate vehicles based on their behavioral activities, data consistency, and plausibility. Finally, for comprehensive feature representation, multifaceted, non-parametric statistical classifiers were constructed and updated online using a Hampel filter-based algorithm; for accurate representation, the output of the statistical classifiers, the vehicles' scores, the context reference parameters, and the derived features were used as input to an ensemble learning-based algorithm.
Such a representation helps to identify misbehaving vehicles more effectively. The proposed EHCA-MDS model was evaluated in the presence of different types of misbehaving vehicles under different context scenarios through extensive simulations utilizing a real-world traffic dataset. The results show that the accuracy and robustness of the proposed EHCA-MDS under different vehicular dynamic context scenarios were higher than those of existing solutions, which confirms its feasibility and effectiveness in improving the performance of critical VANET applications.

Introduction

Road collisions are increasing and are expected to become the fifth leading cause of death by 2030 [1,2]. Annually, millions of people lose their lives on roads worldwide due to traffic accidents [1], with 40 times more suffering injuries. These accidents are also the main cause of traffic congestion, which in turn has a great impact on the economy [3,4]: billions of dollars are lost to the treatment of injuries, loss of property, lost working hours, and high fuel consumption [5]. Several studies reveal that more than 95%...
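The Hampel-based outlier scoring described in the abstract can be sketched as follows. This is an illustrative routine, not the EHCA-MDS implementation; the speed values and window size are hypothetical, and the scaling constant 1.4826 is the standard factor that turns the median absolute deviation (MAD) into a standard-deviation estimate for Gaussian data.

```python
import statistics

def hampel_scores(values, window=3):
    """Hampel-style z-scores: each point's deviation from its sliding-window
    median, scaled by the window's MAD. Large scores flag outliers.
    Illustrative sketch only, not the paper's EHCA-MDS code."""
    scores = []
    for i, x in enumerate(values):
        lo, hi = max(0, i - window), min(len(values), i + window + 1)
        win = values[lo:hi]
        med = statistics.median(win)
        mad = statistics.median(abs(v - med) for v in win)
        sigma = 1.4826 * mad  # MAD -> robust std-dev estimate
        scores.append(abs(x - med) / sigma if sigma > 0 else 0.0)
    return scores

# Hypothetical speed reports from a neighbor; one value is implausible.
speeds = [30.1, 30.4, 29.8, 30.2, 95.0, 30.0, 29.9]
scores = hampel_scores(speeds)
print([round(s, 1) for s in scores])
```

Because the reference (median and MAD) is recomputed for every window, the threshold adapts to the local traffic context rather than relying on a predefined static value, which is the key idea the paper builds on.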
An essential objective of software development is to locate and fix defects as early as possible. Many software development activities are performed by individuals, which may introduce software bugs during development and cause failures in the near future. Thus, predicting software defects in the early stages has become a primary interest in the field of software engineering. Various software defect prediction (SDP) approaches that rely on software metrics have been proposed in the last two decades. Bagging, support vector machine (SVM), decision tree (DT), and random forest (RF) classifiers are known to perform well at predicting defects. This paper studies and compares these supervised machine learning and ensemble classifiers on 10 NASA datasets. The experimental results showed that, in the majority of cases, RF was the best-performing classifier compared to the others.
With the rapid increase in the popularity of social networks, the propagation of rumors is also increasing. Rumors can spread among thousands of users immediately, without verification, and can cause serious damage. Recently, several research studies have been conducted to control online rumors automatically by mining the rich text available on the open network with deep learning techniques. In this paper, we conducted a systematic literature review of rumor detection using deep neural network approaches. A total of 108 studies were retrieved through a manual search of five databases (IEEE Xplore, Springer Link, Science Direct, ACM Digital Library, and Google Scholar). The considered studies are then examined in our systematic review to answer the seven research questions that we formulated to deeply understand the overall trends in the use of deep learning methods for rumor detection. Apart from this, our systematic review also presents the challenges and issues faced by researchers in this area and suggests promising future research directions. Our review will be beneficial for researchers in this domain, as it will facilitate comparison with existing works thanks to a complete description of the performance metrics, dataset characteristics, and deep learning model used in each work. Our review will also assist researchers in finding the available annotated datasets that can be used as benchmarks for comparing their newly proposed approaches with the existing state-of-the-art works.
With the growth of e-services over the past two decades, the concept of web accessibility has been given attention to ensure that every individual can benefit from these services without barriers. Web accessibility is considered one of the main factors that should be taken into account while developing webpages. The Web Content Accessibility Guidelines 2.0 (WCAG 2.0) were developed to guide web developers in ensuring that web content is accessible to all users, especially disabled users. Many automatic tools have been developed to check the compliance of websites with accessibility guidelines such as WCAG 2.0 and to help web developers and content creators design webpages without barriers for disabled people. Despite the popularity of accessibility evaluation tools in practice, there is no systematic way to compare their performance. This paper first presents two novel frameworks. The first is proposed to compare the performance of web accessibility evaluation tools in detecting web accessibility issues based on WCAG 2.0. The second framework is utilized to evaluate how well webpages meet these guidelines. Six homepages of Saudi universities were chosen as case studies to substantiate the concept of the proposed frameworks. Furthermore, two popular web accessibility evaluators, WAVE and SiteImprove, were selected for performance comparison. The outcomes of the studies conducted using the first proposed framework showed that SiteImprove outperformed WAVE; web administrators would therefore benefit from the first framework in selecting an appropriate tool to evaluate their websites against accessibility criteria and guidelines. Moreover, the findings of the studies conducted using the second proposed framework showed that the homepage of Taibah University is more accessible than the homepages of the other Saudi universities.
Based on the findings of this study, the second framework can be used by web administrators and developers to measure the accessibility of their websites. This paper also discusses the most common accessibility issues reported by WAVE and SiteImprove.
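A single automated check of the kind such evaluators perform, flagging images without text alternatives (WCAG 2.0 success criterion 1.1.1), can be sketched with Python's standard-library HTML parser. This is a toy illustration; real evaluators such as WAVE and SiteImprove test many more criteria, and the sample page below is hypothetical.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Counts <img> tags that lack an alt attribute, one basic
    WCAG 2.0 check (success criterion 1.1.1, non-text content)."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0
        self.total_images = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            if "alt" not in dict(attrs):
                self.missing_alt += 1

# Hypothetical page fragment: one accessible image, one missing alt text.
page = ('<html><body><img src="logo.png" alt="University logo">'
        '<img src="banner.jpg"></body></html>')
checker = AltTextChecker()
checker.feed(page)
print(f"{checker.missing_alt} of {checker.total_images} images lack alt text")
```

Aggregating many such checks per guideline, and normalizing across tools, is essentially what the paper's comparison frameworks systematize.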