In this paper, we present our approach and system description for the NLP4IF 2019 Workshop Shared Task on Fine-Grained Propaganda Detection. Given a sentence from a news article, the task is to detect whether the sentence conveys a propagandistic agenda or not. The main contribution of our work is to evaluate the effectiveness of various transfer learning approaches, such as ELMo, BERT, and RoBERTa, for propaganda detection. We show the use of Document Embeddings on top of Stacked Embeddings combined with an LSTM for identifying propagandistic content in a sentence. We further analyse these models to show the effect of oversampling on the provided dataset. In the final test-set evaluation, our system ranked 21st with an F1-score of 0.43 on the SLC task.
Summary
Adversaries can recover secret information from a program, such as the key used by encryption routines, through side channels arising from timing-based and access-based CPU cache behaviours. It is therefore crucial to understand whether a program is vulnerable to cache side-channel leakage. Yet how to find such a vulnerability in a program remains an open problem. In this paper, we revisit this problem and propose a test-generation methodology that systematically discovers the cache side-channel leakage of an arbitrary software program, in both the timing-based and the access-based dimensions. At the core of our test-generation framework is an algorithm that explores the program's input space and adapts at runtime according to the cache behaviour observed in the executed tests. We have implemented our test generator for timing-based and access-based attack tests and evaluated it on open-source subject programs, including ones from the OPENSSL and Linux GDK libraries. Our extensive evaluation effectively discloses the vulnerabilities of this real-world software to both timing-based and access-based cache attacks. We also show empirically that, compared with state-of-the-art fuzz-testing tools, our test generator is more effective at revealing cache side-channel leakage in simulations and comparably effective on real hardware platforms.
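The idea of exploring the input space while adapting to observed cache behaviour can be illustrated with a toy sketch. This is not the paper's actual algorithm: the cache model, the function names (`accessed_line`, `generate_tests`), and the parameter `LINE_SIZE` are all hypothetical simplifications, assuming a secret-indexed table lookup (as in AES S-boxes) where each input byte maps to one cache line.

```python
import random

# Toy cache model: a table lookup T[b] touches the cache line b // LINE_SIZE,
# so which line is accessed depends directly on the (secret) input byte.
LINE_SIZE = 16  # bytes per cache line in this toy model

def accessed_line(input_byte):
    # Model of a secret-dependent memory access.
    return input_byte // LINE_SIZE

def generate_tests(num_tests, seed=0):
    """Randomly explore the input space, keeping only inputs that reach a
    previously unseen cache line -- a crude stand-in for test generation
    that adapts to newly observed cache behaviours."""
    rng = random.Random(seed)
    seen_lines = set()
    distinguishing_inputs = []
    for _ in range(num_tests):
        b = rng.randrange(256)
        line = accessed_line(b)
        if line not in seen_lines:
            seen_lines.add(line)
            distinguishing_inputs.append(b)
    return distinguishing_inputs, seen_lines

inputs, lines = generate_tests(1000)
# If distinct inputs reach distinct cache lines, an access-based observer
# (e.g., via Prime+Probe) can narrow down the secret input from line usage.
print(f"{len(lines)} distinct cache lines reached by {len(inputs)} kept tests")
```

In this simplification, every distinct cache line reached by a distinct input is evidence of access-based leakage; the paper's framework instead drives a real program and measures its actual timing- and access-based cache behaviour.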
In this paper, we present our approach and system description for Subtask A of SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. Given a sentence, the task is to predict whether it contains a suggestion or not. Our model is based on Universal Language Model Fine-tuning (ULMFiT) for text classification. We apply various pre-processing techniques before training the language model and the classification model. We further provide a detailed analysis of the results obtained with the trained model. Our team ranked 10th out of 34 participants, achieving an F1 score of 0.7011. We publicly share our implementation.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.