The emergence of anti-social behaviour in online environments presents a serious issue in today’s society. Automatic detection and identification of such behaviour are becoming increasingly important. Modern machine learning and natural language processing methods can provide effective tools for detecting different types of anti-social behaviour in text. In this work, we present a comparison of various deep learning models used to identify toxic comments in Internet discussions. Our main goal was to explore the effect of data preparation on model performance. Working from the assumption that traditional pre-processing methods may discard characteristic traits specific to toxic content, we compared several popular deep learning and transformer language models. We analyzed the influence of different pre-processing techniques and text representations, including standard TF-IDF, pre-trained word embeddings, and currently popular transformer models. Experiments were performed on the dataset from the Kaggle Toxic Comment Classification competition, and the best-performing model was compared with similar approaches using standard evaluation metrics.
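To make the comparison concrete, below is a minimal sketch of the TF-IDF baseline setting described above, built with scikit-learn and deliberately light pre-processing so that surface cues of toxic language survive vectorization. The file name and column names follow the public Kaggle competition data; the classifier choice and all hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a TF-IDF baseline for toxic comment classification.
# Assumes the Kaggle competition file "train.csv" with columns
# "comment_text" and "toxic"; hyperparameters are illustrative only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["comment_text"], df["toxic"], test_size=0.2, random_state=42
)

# Deliberately light pre-processing: no stemming or stop-word removal,
# keeping surface cues of toxic language intact.
vectorizer = TfidfVectorizer(lowercase=True, max_features=50000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test_vec)))
```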
This paper connects two large research areas, namely sentiment analysis and human–robot interaction. Emotion analysis, a subfield of sentiment analysis, explores text data and, based on the characteristics of the text and generally accepted emotional models, evaluates which emotion it expresses. The analysis of emotions in human–robot interaction aims to evaluate the emotional state of the human and, on this basis, to decide how the robot should adapt its behavior. There are several approaches and algorithms for detecting emotions in text data. We decided to apply a combined method, pairing a dictionary-based approach with machine learning algorithms. Because of the ambiguity and subjectivity of labeling emotions, more than one emotion could be assigned to a sentence; thus, we were dealing with a multi-label problem. Based on an overview of the problem, we performed experiments with Naive Bayes, Support Vector Machine, and Neural Network classifiers. The classification results were subsequently used in human–robot experiments. Despite the lower accuracy of emotion classification, we demonstrated the importance of expressing emotional gestures that match the spoken words.
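As an illustration of the multi-label setting, the following sketch shows one common way to frame it in scikit-learn: each emotion becomes an independent binary problem via a one-vs-rest wrapper. The example sentences, the emotion label names, and the choice of a linear SVM are illustrative assumptions, not the paper's actual data or configuration.

```python
# A minimal sketch of multi-label emotion classification; `sentences`
# and `labels` are toy placeholders, not data from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

sentences = ["I am so happy today", "This is terrifying and sad"]
labels = [["joy"], ["fear", "sadness"]]  # a sentence may carry several emotions

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)            # one binary column per emotion

X = TfidfVectorizer().fit_transform(sentences)

# One-vs-rest turns the multi-label task into independent binary
# problems, one per emotion, matching the SVM setting in the abstract.
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
pred = clf.predict(X)
print(mlb.inverse_transform(pred))
```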
Machine learning techniques have been increasingly used in astronomical applications and have proven to classify objects in image data with high accuracy. The current work uses archival data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) survey to classify radio galaxies into four classes: Fanaroff-Riley Class I (FRI), Fanaroff-Riley Class II (FRII), Bent-Tailed (BENT), and Compact (COMPT). The model presented in this work is based on Convolutional Neural Networks (CNNs). The proposed architecture comprises three parallel blocks of convolutional layers whose outputs are combined and processed for final classification by two feed-forward layers. Our model classified the selected classes of radio galaxy sources on an independent testing subset with an average of 96% for precision, recall, and F1 score. The best-performing augmentation techniques were rotations, horizontal or vertical flips, and brightness increases; shifts, zooms, and brightness decreases worsened the performance of the model. The current results show that the model developed in this work can identify different morphological classes of radio galaxies with high efficiency and performance.
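A hedged Keras sketch of the described topology, three parallel convolutional blocks merged and classified by two feed-forward layers, is given below. The input size, filter counts, and kernel sizes are illustrative assumptions, not the paper's exact hyperparameters.

```python
# Sketch of a three-branch CNN: parallel convolutional blocks with
# different receptive fields, concatenated and fed to two dense layers.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(150, 150, 1))  # assumed grayscale cutout size

def conv_block(x, kernel_size):
    x = layers.Conv2D(32, kernel_size, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, kernel_size, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    return layers.Flatten()(x)

# Three parallel blocks; kernel sizes 3/5/7 are illustrative choices.
branches = [conv_block(inputs, k) for k in (3, 5, 7)]
merged = layers.Concatenate()(branches)

x = layers.Dense(256, activation="relu")(merged)
outputs = layers.Dense(4, activation="softmax")(x)  # FRI, FRII, BENT, COMPT

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```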
Lightning strikes can be routinely observed by radio receivers that measure electromagnetic signals in the very low frequency range. The acquired pulses, called atmospherics, provide valuable details about the source lightning discharges and also about the Earth-ionosphere waveguide, where they can propagate thousands of kilometers. The automatic acquisition of these events also requires automatic methods for extracting the details of atmospherics, so that observed trends can be confirmed with statistical significance. For this purpose, we have developed a method based on a deep learning approach that automatically obtains the type of atmospherics, their exact time, and their frequency range from frequency-time spectrograms. The method, which employs convolutional neural networks, is designed to be adaptable to scientific needs and to provide reliable results according to the requirements on sensitivity to events, computational performance, and precision of the extracted details. The efficiency and specific steps of our method are demonstrated on data recorded by the ELMAVAN-G instrument. The comprehensive description of the method allows its use for regular characterization of the ionospheric D-layer or for other similar applications in the future.
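To illustrate the pipeline, the sketch below computes a frequency-time spectrogram from a raw waveform with SciPy and defines a small CNN that could score fixed-size spectrogram patches. The sampling rate, patch size, and network depth are illustrative assumptions, and the random waveform is a placeholder for real receiver data.

```python
# Minimal sketch: waveform -> frequency-time spectrogram -> CNN patch
# classifier. All numeric settings are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram
from tensorflow.keras import layers, models

fs = 100_000                       # assumed sampling rate of the receiver (Hz)
waveform = np.random.randn(fs)     # placeholder for one second of recorded signal

# Frequency-time representation of the signal.
freqs, times, Sxx = spectrogram(waveform, fs=fs, nperseg=1024)

# Tiny CNN scoring a spectrogram patch as "atmospheric" vs "background".
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```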
Structures in the solar corona are the main drivers of space weather processes that might directly or indirectly affect the Earth. Thanks to the most recent space-based solar observatories, capable of acquiring high-resolution images continuously, structures in the solar corona can be monitored over the years with a time resolution of minutes. For this purpose, we have developed a method for automatic segmentation of solar corona structures observed in the EUV spectrum, based on a deep learning approach utilizing Convolutional Neural Networks. We examined the available input datasets together with our own dataset based on manual annotation of the target structures; indeed, the input dataset is the main limitation of the developed model’s performance. Our SCSS-Net model provides results for coronal holes and active regions that are comparable with other commonly used methods for automatic segmentation. Moreover, it provides a universal procedure for identifying structures in the solar corona with the help of the transfer learning technique. The outputs of the model can then be used for further statistical studies of the connections between solar activity and the influence of space weather on Earth.
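The sketch below outlines a segmentation-style encoder-decoder in Keras that produces a per-pixel mask, in the spirit of the described approach. The layer sizes and input resolution are illustrative assumptions; this is not the actual SCSS-Net definition.

```python
# Hedged sketch of an encoder-decoder segmentation CNN producing a
# per-pixel mask; layer sizes are illustrative, not SCSS-Net's.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(256, 256, 1))   # assumed EUV image size

# Encoder: downsample while extracting features.
e1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
p1 = layers.MaxPooling2D()(e1)
e2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
p2 = layers.MaxPooling2D()(e2)

# Decoder: upsample back to input resolution with skip connections.
u1 = layers.UpSampling2D()(p2)
u1 = layers.Concatenate()([u1, e2])
d1 = layers.Conv2D(32, 3, activation="relu", padding="same")(u1)
u2 = layers.UpSampling2D()(d1)
u2 = layers.Concatenate()([u2, e1])
d2 = layers.Conv2D(16, 3, activation="relu", padding="same")(u2)

# Per-pixel probability of belonging to the target structure
# (e.g. coronal hole or active region).
outputs = layers.Conv2D(1, 1, activation="sigmoid")(d2)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```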