Phishing and disinformation are popular social engineering attacks, and attackers commonly embed influence cues in their texts to make them more appealing to users. We introduce Lumen, a learning-based framework that exposes influence cues in text: (i) persuasion, (ii) framing, (iii) emotion, (iv) objectivity/subjectivity, (v) guilt/blame, and (vi) use of emphasis. Lumen was trained on a newly developed dataset of 3K texts composed of disinformation, phishing, hyperpartisan news, and mainstream news. An evaluation of Lumen against other learning models showed that Lumen and an LSTM achieved the best F1-micro scores, but Lumen yielded better interpretability. Our results highlight the promise of ML for exposing influence cues in text, toward the goal of application in automatic labeling tools that improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.
Brazil is home to over 200M people, the majority of whom have access to the Internet. Over 11M Brazilians live in favelas, informal settlements with no outside government regulation, often ruled by narcos or militias. Victims of intimate partner violence (IPV) in these communities are made especially vulnerable not only by a lack of access to resources, but also by the added layer of violence caused by criminal activity and police confrontations. In this paper, we use an unintended harms framework [15] to analyze the unique online privacy needs of favela women and present research questions that we urge tech abuse researchers to consider.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations, citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.