Deep learning, a subcategory of machine learning, follows the human instinct of learning by example to produce accurate results. Deep learning trains a computer framework to classify tasks directly from available data, whether in the form of text, images, or sound. Most often, deep learning uses neural networks to perform accurate classification; such networks are referred to as deep neural networks. One of the most common deep neural networks, used across a broad range of applications, is the convolutional neural network, which provides an automated way of extracting features by learning them directly from images or text, unlike classical machine learning, where features are extracted manually. This enables deep neural networks to achieve state-of-the-art accuracy that often exceeds even human performance. This paper presents a survey of the deep neural network architectures used in various applications to achieve accurate classification with automated feature extraction.
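The automated feature extraction mentioned above can be illustrated with a minimal sketch (not taken from the surveyed paper; image and kernel values are invented for illustration): a convolutional layer slides a small kernel over an image and records its responses, instead of relying on hand-crafted features.

```python
def conv2d_valid(image, kernel):
    """Minimal 'valid' 2D convolution: slide the kernel over the image
    and sum elementwise products at each position. This is the core
    operation behind a CNN's learned feature extraction."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy image with a vertical edge, and a simple gradient kernel
# (values invented for illustration).
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]
print(conv2d_valid(img, edge_kernel))  # responds where the edge sits
```

In a real CNN the kernel values are not fixed by hand as here but learned from data, which is exactly what distinguishes deep learning from manual feature engineering.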
Online abuse directed towards women on the social media platform Twitter has attracted considerable attention in recent years. An automated method to effectively identify misogynistic abuse could improve our understanding of the patterns, driving factors, and effectiveness of responses associated with abusive tweets over a sustained time period. However, training a neural network (NN) model with a small set of labelled data to detect misogynistic tweets is difficult. This is partly due to the complex nature of tweets that contain misogynistic content, and the vast number of parameters that must be learned in an NN model. We have conducted a series of experiments to investigate how to train an NN model to detect misogynistic tweets effectively. In particular, we have customised and regularised a Convolutional Neural Network (CNN) architecture and shown that word vectors pre-trained on a task-specific domain can be used to train a CNN model effectively when only a small set of labelled data is available. A CNN model trained in this way yields an improved accuracy over the state-of-the-art models.
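The embedding step described above can be sketched roughly as follows (a toy illustration, not the authors' implementation; the vocabulary and vector values are invented, and real vectors would come from training on a tweet corpus): each tweet is mapped to a fixed-size matrix of pre-trained word vectors, the usual input shape for a text CNN.

```python
# Hypothetical pre-trained, task-specific word vectors (values invented
# for illustration only).
pretrained = {
    "women": [0.2, 0.1, 0.5],
    "are":   [0.0, 0.3, 0.1],
    "great": [0.4, 0.4, 0.2],
}
UNK = [0.0, 0.0, 0.0]  # vector for out-of-vocabulary tokens

def embed(tweet, max_len=5):
    """Map a tweet to a fixed-size matrix of word vectors, zero-padded
    to max_len rows, ready to be fed into a convolutional layer."""
    tokens = tweet.lower().split()[:max_len]
    rows = [pretrained.get(t, UNK) for t in tokens]
    rows += [UNK] * (max_len - len(rows))  # pad to a fixed length
    return rows

matrix = embed("Women are great")
print(len(matrix), len(matrix[0]))  # fixed shape: 5 rows, 3 dims each
```

Because the vectors are pre-trained on a task-specific domain, most of the representation is learned before the small labelled set is ever touched, which is what makes training feasible with few labels.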
In this era of information, microblogging sites are commonly used to express people's feelings, including fear, panic, hate, and abuse. Monitoring and controlling abuse on social media, especially during pandemics such as COVID-19, can help keep public sentiment and morale positive. Developing machine-learning-based fear and hate detection methods requires labelled data. However, obtaining labelled data in suddenly changed circumstances such as a pandemic is expensive, and acquiring it in a short time is impractical. Related labelled hate data from other domains or previous incidents may be available. However, the predictive accuracy of these hate detection models decreases significantly if the data distribution of the target domain, where the prediction will be applied, is different. To address this problem, we propose a novel concept of unsupervised progressive domain adaptation based on a deep-learning language model generated through multiple text datasets. We showcase the efficacy of the proposed method in hate speech and fear detection on tweets collected during COVID-19, where labelled information is unavailable.
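The idea of stepping a model through intermediate domains can be illustrated with a minimal sketch (purely conceptual, not the paper's method: a unigram frequency table stands in for the deep language model, and the datasets and weighting scheme are invented): the model is updated on each successive unlabelled domain so that its statistics progressively drift toward the target domain.

```python
from collections import Counter

def adapt_progressively(domains):
    """Toy sketch of progressive domain adaptation: update a unigram
    'language model' (word-frequency table) on each successive
    unlabelled domain, weighting later domains more heavily so the
    model drifts toward the target distribution."""
    model = Counter()
    for weight, docs in enumerate(domains, start=1):
        for doc in docs:
            for tok in doc.lower().split():
                model[tok] += weight  # closer-to-target domains weigh more
    return model

# Invented example corpora standing in for source, intermediate,
# and target (COVID-19) domains.
source = ["old hate incident text"]
intermediate = ["pandemic text emerging"]
target = ["covid pandemic fear tweets", "pandemic lockdown fear"]
model = adapt_progressively([source, intermediate, target])
print(model.most_common(1)[0][0])  # vocabulary now dominated by the target domain
```

The real method replaces this frequency table with a deep language model fine-tuned across the dataset sequence, but the progression from source toward target is the same.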
Teleportation is a new and exciting field of future communication. Security in data communication is a major concern nowadays. Among the encryption technologies currently available, the shared key is the most reliable, and it depends on secure key generation and distribution. Teleportation/entanglement is a perfect solution in this regard, since by the no-cloning theorem of quantum mechanics any attempt by an eavesdropper to intercept the key will be detectable immediately. To use quantum key distribution efficiently and effectively, rigorous study and research on teleportation/entanglement is needed. We have reviewed and presented the teleportation concept, its process, roadblocks, and recently achieved successes in a straightforward manner.
Topic modelling methods such as Latent Dirichlet Allocation (LDA) have been successfully applied to various fields, since these methods can effectively characterize document collections by using a mixture of semantically rich topics. So far, many models have been proposed. However, the existing models typically perform a full analysis of the whole collection to find all topics, yet struggle to capture coherent and specifically meaningful topic representations. Furthermore, it is very challenging to incorporate user preferences into existing topic modelling methods to extract relevant topics. To address these problems, we develop a novel personalized Association-based Topic Selection (ATS) model, which can identify semantically valid and relevant topics from a set of raw topics based on the semantic relatedness between users' preferences and the structured patterns captured in topics. The advantage of the proposed ATS model is that it enables an interactive topic modelling process driven by users' specific interests. Based on three benchmark datasets, namely RCV1, R8, and WT10G, under the context of information filtering (IF) and information retrieval (IR), our rigorous experiments show that the proposed ATS model can effectively identify relevant topics with respect to users' specific interests and hence improve the performance of IF and IR.
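The selection step described above can be sketched in miniature (a hypothetical scoring rule, not the ATS model's actual relatedness measure; the topics and preference terms are invented): raw topics are ranked by how strongly their top words relate to the user's preference terms, and only the best are kept.

```python
def select_topics(raw_topics, preferences, k=1):
    """Toy sketch of preference-driven topic selection: score each raw
    topic by the overlap between its top words and the user's
    preference terms, then keep the k highest-scoring topics."""
    prefs = set(preferences)
    scored = sorted(raw_topics,
                    key=lambda topic: len(prefs & set(topic)),
                    reverse=True)
    return scored[:k]

# Invented raw topics, as might come out of an LDA run.
raw = [
    ["market", "stocks", "trade"],   # finance topic
    ["match", "goal", "league"],     # sports topic
    ["stocks", "bank", "interest"],  # another finance topic
]
user_prefs = ["stocks", "trade", "market"]
print(select_topics(raw, user_prefs, k=1))  # keeps the finance topic
```

A simple set overlap stands in here for the semantic relatedness between preferences and the structured patterns in topics, but the interactive loop is the same: the user's interests, not the whole collection, drive which topics survive.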
Social media platforms have become a common venue for information exchange among their users. People leave traces of their emotions via text expressions. A systematic collection, analysis, and interpretation of social media data across time and space can give insights into local outbreaks, mental health, and social issues. Such timely insights can help in developing strategies and resources for an appropriate and efficient response. This study analysed a large spatio-temporal tweet dataset of the Australian sphere related to COVID-19. The methodology included volume analysis, topic modelling, sentiment detection, and the semantic brand score to obtain an insight into the COVID-19 pandemic outbreak and public discussion in different states and cities of Australia over time. The obtained insights are compared with independently observed phenomena such as government-reported cases.
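The volume-analysis step of such a pipeline can be sketched as follows (a minimal illustration with invented field names and sample records, not the study's actual code): tweets are aggregated per state and day so that discussion volume can be tracked over space and time.

```python
from collections import defaultdict

def volume_by_state_day(tweets):
    """Toy sketch of spatio-temporal volume analysis: count tweets per
    (state, day) pair. Field names are assumed for illustration."""
    counts = defaultdict(int)
    for t in tweets:
        counts[(t["state"], t["day"])] += 1
    return dict(counts)

# Invented sample records in the assumed schema.
sample = [
    {"state": "NSW", "day": "2020-03-20", "text": "lockdown news"},
    {"state": "NSW", "day": "2020-03-20", "text": "stay home"},
    {"state": "VIC", "day": "2020-03-21", "text": "cases rising"},
]
print(volume_by_state_day(sample))
```

The resulting counts per state and day are the kind of series that can then be compared against independently observed phenomena such as government-reported cases.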