The world is facing the COVID-19 pandemic, which has led to an unprecedented change in the daily routines of millions. Beyond the general physical health, financial, and social repercussions of the pandemic, the adopted mitigation measures also pose significant challenges to the population's mental health and to health programs. It is difficult for public organizations to measure the population's mental health in order to incorporate it into their own decision-making processes. Traditional survey methods are time-consuming, expensive, and fail to provide the continuous information needed to respond to the rapidly evolving effects of governmental policies on the population's mental health. A significant portion of the population has turned to social media to express the details of their daily life, making this public data a rich source for understanding emotional and mental well-being. This study aims to track and measure the sentiment changes of the Mexican population in response to the COVID-19 pandemic. To this end, we analyzed 760,064,879 public domain tweets collected from a public access repository to examine the collective shifts in the general mood regarding the pandemic's evolution, news cycles, and governmental policies, using open sentiment analysis tools. Sentiment analysis polarity scores oscillate around -0.15, show a weekly seasonality that follows Twitter usage patterns, and indicate a consistently negative outlook from the population; the overall trend of the sentiment polarity is slightly positive, at 0.0001110643. The analysis also highlights the increased controversy after the governmental decision to end the lockdown and around the holiday celebrations that encouraged people to engage in social gatherings. These findings expose the adverse emotional effects of the ongoing pandemic while showing a 2.38-fold increase in social media usage, which users employ as a coping mechanism to mitigate the feelings of isolation caused by long-term social distancing. The findings have important implications for the mental health infrastructure supporting ongoing mitigation efforts and provide feedback on the perception of policies and other measures.
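A minimal sketch of this kind of pipeline follows, assuming a TextBlob-style polarity scorer in [-1, 1] and a CSV of timestamped tweets; the abstract names only "open sentiment analysis tools", so the specific library and file layout here are assumptions.

```python
# Minimal sketch: polarity scoring of tweets and the weekly trend.
# TextBlob is a stand-in; the study only says "open sentiment analysis tools".
import pandas as pd
from textblob import TextBlob

def polarity(text: str) -> float:
    """Return a polarity score in [-1, 1] for one tweet."""
    return TextBlob(text).sentiment.polarity

# Hypothetical input: a CSV with 'created_at' and 'text' columns.
tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])
tweets["polarity"] = tweets["text"].map(polarity)

# Weekly mean polarity exposes the seasonality described in the abstract.
weekly = tweets.set_index("created_at")["polarity"].resample("W").mean()
print(weekly.describe())
```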
This paper describes the use of a hybrid evolutionary optimization algorithm (HEOA) for computing the wavefront aberration from real interferometric data. By finding a near-optimal solution to an optimization problem, this algorithm calculates the Zernike polynomial expansion coefficients from a Fizeau interferogram, demonstrating its validity for reconstructing the wavefront aberration. The proposed HEOA combines the advantages of a multimember evolution strategy and locally weighted linear regression to minimize an objective function while avoiding premature convergence to a local minimum. The numerical results demonstrate that our HEOA is robust when analyzing real interferograms degraded by noise.
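As a rough illustration of the evolutionary component, the sketch below fits Zernike coefficients to a sampled wavefront with a plain (mu + lambda) evolution strategy; the locally weighted linear regression refinement of the full HEOA is omitted, and the basis truncation, function names, and hyperparameters are all assumptions for illustration.

```python
# Sketch of the evolutionary part only: a (mu + lambda) evolution strategy
# fitting Zernike coefficients to sampled wavefront data.
import numpy as np

def zernike_basis(rho, theta):
    # First few Zernike terms (piston, tilts, defocus, astigmatism) on the unit disk.
    return np.stack([
        np.ones_like(rho),
        rho * np.cos(theta),
        rho * np.sin(theta),
        2 * rho**2 - 1,
        rho**2 * np.cos(2 * theta),
    ], axis=-1)

def objective(coeffs, basis, measured):
    # Sum of squared residuals between the model wavefront and the data.
    return np.sum((basis @ coeffs - measured) ** 2)

def evolve(basis, measured, mu=10, lam=40, gens=200, sigma=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    n = basis.shape[-1]
    parents = rng.normal(0.0, 1.0, size=(mu, n))
    for _ in range(gens):
        # Offspring: mutate randomly chosen parents with Gaussian noise.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + rng.normal(0.0, sigma, size=(lam, n))
        pool = np.vstack([parents, offspring])
        scores = np.array([objective(c, basis, measured) for c in pool])
        parents = pool[np.argsort(scores)[:mu]]  # (mu + lambda) selection
        sigma *= 0.99  # slow step-size decay to refine the search
    return parents[0]

# Example: recover coefficients from synthetic, noisy wavefront samples.
rng = np.random.default_rng(1)
rho, theta = rng.uniform(0, 1, 500), rng.uniform(0, 2 * np.pi, 500)
B = zernike_basis(rho, theta)
true = np.array([0.1, -0.3, 0.2, 0.5, -0.1])
measured = B @ true + rng.normal(0, 0.01, 500)
print(evolve(B, measured))
```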
Latent Semantic Analysis (LSA) is a method for automatically indexing and retrieving information from a set of objects by reducing the term-by-document matrix with the Singular Value Decomposition (SVD) technique. However, LSA has a high computational cost when analyzing large amounts of information. The goals of this work are (i) to improve the execution time of the semantic space construction, dimensionality reduction, and information retrieval stages of LSA using heterogeneous systems and (ii) to evaluate the accuracy and recall of the information retrieval stage. We present a heterogeneous Latent Semantic Analysis (hLSA) system developed on a General-Purpose computing on Graphics Processing Units (GPGPU) architecture, which solves large numeric problems faster through thousands of concurrent threads on the multiple CUDA cores of GPUs, combined with a multi-CPU architecture, which solves large text problems faster through a multiprocessing environment. We executed the hLSA system on documents from the PubMed Central (PMC) database. The experimental results show that, for large matrices with roughly 150 billion values, the hLSA system is around eight times faster than the standard LSA version, with an accuracy of 88% and a recall of 100%.
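For reference, the standard LSA pipeline that hLSA parallelizes can be sketched in a few lines; scikit-learn stands in here for the custom GPU/multi-CPU kernels, and the toy corpus and component count are illustrative only.

```python
# Minimal sketch of the LSA pipeline the hLSA system accelerates:
# term-document matrix -> truncated SVD -> cosine-similarity retrieval.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["gene expression in cells", "protein synthesis and genes",
        "maritime vessel routes"]  # toy corpus; the paper uses PMC articles

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)   # semantic space construction

svd = TruncatedSVD(n_components=2)   # dimensionality reduction via SVD
doc_vecs = svd.fit_transform(X)

def retrieve(query: str):
    """Return document indices ranked by cosine similarity to the query."""
    q = svd.transform(vectorizer.transform([query]))[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)

print(retrieve("gene synthesis"))
```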
Many living organisms have DNA in their cells that is responsible for their biological features. DNA is an organic molecule consisting of two complementary strands of four different nucleotides wound up in a double helix. These nucleotides are adenine (A), thymine (T), guanine (G), and cytosine (C). Genes are DNA sequences containing the information to synthesize proteins. The genes of higher eukaryotic organisms contain coding sequences, known as exons, and non-coding sequences, known as introns, which are removed at splice sites after the DNA is transcribed into RNA. Genome annotation is the process of identifying the location of coding regions and determining their function. This process is fundamental for understanding gene structure; however, it is time-consuming and expensive when done by biochemical methods. With technological advances, splice site detection can be done computationally. Although various software tools have been developed to predict splice sites, their accuracy still needs to improve and their false-positive rates remain high. The main goal of this research was to develop Deep Splicer, a deep learning model that identifies splice sites in the genomes of humans and other species. The model shows good performance metrics and a lower false-positive rate than currently existing tools. Deep Splicer achieved an accuracy between 93.55% and 99.66% on the genetic sequences of different organisms, while Splice2Deep, another splice site detection tool, had an accuracy between 90.52% and 98.08%. Splice2Deep surpassed Deep Splicer on C. elegans genomic sequences (97.88% vs. 93.62%) and on A. thaliana (95.40% vs. 94.93%); however, Deep Splicer's accuracy was better for H. sapiens (98.94% vs. 97.15%) and D. melanogaster (97.14% vs. 92.30%). Deep Splicer's false-positive rate was 0.11% for human genetic sequences and 0.25% for other species' genetic sequences, while another splice prediction tool, Splice Finder, had a false-positive rate between 1% and 3% for human sequences and between roughly 4% and 10% for other species' sequences.
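A hedged sketch of the kind of model the abstract describes follows: a small one-dimensional CNN over one-hot-encoded sequence windows around a candidate splice site. The abstract does not give Deep Splicer's actual architecture, window size, or label scheme, so all of those below are assumptions.

```python
# Illustrative splice-site classifier in the spirit of Deep Splicer,
# not its published architecture.
import numpy as np
from tensorflow import keras

BASES = "ACGT"
WINDOW = 400  # hypothetical flanking window around a candidate splice site

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (len, 4) one-hot matrix; non-ACGT rows stay zero."""
    m = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq):
        if b in BASES:
            m[i, BASES.index(b)] = 1.0
    return m

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 4)),
    keras.layers.Conv1D(64, 9, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(64, 9, activation="relu"),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(3, activation="softmax"),  # donor / acceptor / neither
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```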
The prediction of vessel maritime navigation has become an important topic in recent years, especially for economics, commercial exchange, and security. Vessel monitoring requires better systems and techniques to help enterprises and governments protect their interests, and the prediction of vessel movements in particular is essential for safety and tracking. However, prediction techniques carry a high computational cost and use resources inefficiently. This article presents a method for selecting historical data on vessel-specific routes to optimize the computational performance of real-time vessel position prediction and route estimation. These historical navigation data help estimate a complete path and predict vessel positions over time. The Select Best AIS Data in Prediction Vessel Movements and Route Estimation (PreMovEst) method operates on a Vessel Traffic Service database to save computational resources when predictions or route estimations are executed. The article also discusses AIS (Automatic Identification System) data and the artificial neural network used for prediction. This work aims to present a model that correctly predicts the physical movement along a route and supports path planning for the Vessel Traffic Service. After testing the method, route estimation achieved a precision of 76.15%, and vessel position prediction over time achieved an accuracy of 81.043%.
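As an illustration of the prediction component (not the PreMovEst method itself), the sketch below trains a small feed-forward network to map a short history of AIS fixes to the next vessel position; the feature layout, history length, and synthetic track are assumptions.

```python
# Hedged sketch: an ANN mapping recent AIS fixes to the next position.
import numpy as np
from sklearn.neural_network import MLPRegressor

HISTORY = 5  # number of past AIS fixes fed to the model (assumed)

def make_samples(track: np.ndarray):
    """track: (T, 4) array of (lat, lon, speed, course) fixes."""
    X = np.array([track[i:i + HISTORY].ravel()
                  for i in range(len(track) - HISTORY)])
    y = track[HISTORY:, :2]  # next latitude/longitude
    return X, y

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 0.01, size=(200, 4)), axis=0)  # synthetic track
X, y = make_samples(track)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:-20], y[:-20])
print(model.predict(X[-20:])[:3])  # predicted next positions for held-out fixes
```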
Maritime safety and security are constantly being jeopardized. Therefore, identifying maritime flow irregularities (semi-)automatically may be crucial to ensuring maritime security in the future. This paper presents the Ship Semantic Information-Based Image Similarity Calculation System (Ship-SIBISCaS), which constitutes a first step towards the automatic identification of such maritime irregularities. In particular, the main goal of Ship-SIBISCaS is to automatically identify the type of ship depicted in a given image (such as abandoned, cargo, container, hospital, passenger, pirate, submersible, three-decker, or warship) and classify it accordingly. Ship-SIBISCaS achieves this classification by computing the similarity of the ship image and/or description with the other ship images and descriptions in its knowledge base. The similarity is calculated by an LSA algorithm implementation that runs on a parallel architecture of CPUs and GPUs (i.e., a heterogeneous system). This implementation of the LSA algorithm was trained on a collection of texts, extracted from Wikipedia, that associate semantic information with ImageNet ship images. Thanks to its parallel architecture, the indexing process of this image retrieval system has been accelerated 10 times (for the 1261 ships included in ImageNet). The precision of the image similarity method ranges from 46% to 92% at 100% recall (that is, 100% coverage of the database).
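The classification step can be sketched as a nearest-neighbor lookup in the reduced LSA space (see the hLSA sketch above); the vectors and labels below are toy placeholders, not Ship-SIBISCaS data.

```python
# Sketch: label a new ship description by its most similar knowledge-base entry.
import numpy as np

def classify(query_vec: np.ndarray, kb_vecs: np.ndarray, kb_labels: list) -> str:
    """Return the label of the knowledge-base vector most cosine-similar to the query."""
    sims = kb_vecs @ query_vec / (
        np.linalg.norm(kb_vecs, axis=1) * np.linalg.norm(query_vec))
    return kb_labels[int(np.argmax(sims))]

# Toy knowledge base: LSA vectors for three ship types.
kb_vecs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
kb_labels = ["cargo", "warship", "passenger"]
print(classify(np.array([0.8, 0.3]), kb_vecs, kb_labels))  # -> "cargo"
```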