“…We develop algorithms to simulate and estimate Omm and show convergence of our learning scheme using a synthetic dataset. We demonstrate real-world applicability by testing Omm on a dataset of Facebook and Twitter discussions containing moderate and far-right opinions about bushfires and climate change [23]. We show Omm predicts opinion market shares better than the state-of-the-art baseline [35] and uncovers latent competitive and cooperative interactions across opinions: self-reinforcement attributable to the echo chamber effect and interactions between far-right sympathizers and opponents.…”
Section: Discussion (mentioning)
confidence: 95%
“…We construct the Bushfire Opinions dataset, containing 90 days of Twitter and Facebook discussions about bushfires and climate change. The Facebook postings are a subset of the SocialSense dataset [23]; we select the posts and comments relating to bushfires and climate change (the SocialSense dataset also contains discussions around COVID-19). These were collected using CrowdTangle by crawling public far-right Australian Facebook groups, identified via a digital ethnographic study (see [23] and the online appendix [1] for more details).…”
Section: Dataset and Far-right Opinion Labeling (mentioning)
confidence: 99%
“…The Facebook postings are a subset of the SocialSense dataset [23]; we select the posts and comments relating to bushfires and climate change (the SocialSense dataset also contains discussions around COVID-19). These were collected using CrowdTangle by crawling public far-right Australian Facebook groups, identified via a digital ethnographic study (see [23] and the online appendix [1] for more details). We build the Twitter discussions using the Twitter Academic v2 API; we collect tweets emitted between November 1, 2019 and January 29, 2020 that mention bushfire keywords such as bushfire, arson, australiaburns, or climate hoax (see full list in the online appendix [1]).…”
Section: Dataset and Far-right Opinion Labeling (mentioning)
confidence: 99%
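The keyword-based collection criterion described in the snippet above can be illustrated with a small filter. This is a hedged sketch, not the authors' collection code: the Twitter Academic v2 API performs such matching server-side, and only four of the collection keywords are quoted in the text (the full list is in the online appendix).

```python
# Partial keyword list quoted in the text; the full list is in the
# paper's online appendix [1].
BUSHFIRE_KEYWORDS = ["bushfire", "arson", "australiaburns", "climate hoax"]

def mentions_bushfire(text, keywords=BUSHFIRE_KEYWORDS):
    """Return True if the text mentions any collection keyword,
    matching case-insensitively and ignoring '#' in hashtags."""
    normalized = text.lower().replace("#", "")
    return any(kw in normalized for kw in keywords)
```

A filter like this would keep a tweet such as "The #AustraliaBurns tag is trending" and discard unrelated text.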
“…Moderate and far-right opinion labeling. We use the textual opinion classifiers developed by Kong et al. [23] to label Facebook and Twitter postings; we select the following most prevalent six opinions, covering 95% of Twitter and 81% of Facebook postings: (0) Greens policies are the cause of the Australian bushfires.…”
Section: Dataset and Far-right Opinion Labeling (mentioning)
confidence: 99%
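Since the paper models opinion market shares over six labels, the basic observed quantity can be sketched as the fraction of postings carrying each label. This is an illustrative helper, not the authors' code; only opinion (0) is quoted above, so the full label set is assumed.

```python
from collections import Counter

def opinion_shares(labels, num_opinions=6):
    """Fraction of postings carrying each opinion label 0..num_opinions-1.

    Shares sum to 1 when every posting carries exactly one label.
    """
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(k, 0) / total for k in range(num_opinions)]
```

For example, four postings labeled [0, 0, 1, 2] give shares [0.5, 0.25, 0.25, 0, 0, 0].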
“…We test Omm on a dataset of Facebook and Twitter discussions containing moderate and far-right opinions about bushfires and climate change [23]. Omm shows strong predictive and interpretation capabilities.…”
Recent years have seen the rise of extremist views in the opinion ecosystem we call social media. Allowing online extremism to persist has dire societal consequences, and efforts to mitigate it are continuously explored. Positive interventions, controlled signals that add attention to the opinion ecosystem with the aim of boosting certain opinions, are one such pathway for mitigation. This work proposes a platform to test the effectiveness of positive interventions through the Opinion Market Model (Omm), a two-tier model of the online opinion ecosystem that jointly accounts for inter-opinion interactions and the role of positive interventions. The first tier models the size of the opinion attention market using a multivariate discrete-time Hawkes process; the second tier leverages the market share attraction model to capture opinions cooperating and competing for market share under limited attention. On a synthetic dataset, we show the convergence of our proposed estimation scheme. On a dataset of Facebook and Twitter discussions containing moderate and far-right opinions about bushfires and climate change, we show superior predictive performance over the state-of-the-art and the ability to uncover latent opinion interactions. Lastly, we use Omm to demonstrate the effectiveness of mainstream media coverage as a positive intervention in suppressing far-right opinions.
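The two tiers described in the abstract can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (a geometric memory kernel for the Hawkes tier, attraction values supplied directly for the share tier), not the paper's Omm implementation; all names and parameters here are hypothetical.

```python
import numpy as np

def discrete_time_hawkes_step(history, mu, alpha, decay):
    """Tier 1 sketch: one step of a multivariate discrete-time Hawkes
    intensity, giving the expected total attention volume per channel.

    history : (T, M) array of past event counts in M channels
    mu      : (M,) baseline intensities
    alpha   : (M, M) cross-excitation weights
    decay   : geometric memory-kernel parameter in (0, 1)
    """
    T = history.shape[0]
    # Geometric kernel: the most recent step gets weight decay**0 = 1,
    # older steps are discounted.
    weights = decay ** np.arange(T - 1, -1, -1)   # shape (T,)
    excitation = alpha @ (weights @ history)      # shape (M,)
    return mu + excitation

def market_shares(attractions):
    """Tier 2 sketch: the market share attraction model assigns each
    opinion a share proportional to its (positive) attraction, so
    shares sum to 1 under the limited-attention constraint."""
    a = np.asarray(attractions, dtype=float)
    return a / a.sum()
```

In this reading, tier 1 sets how much total attention the opinion market receives at each step, and tier 2 splits that volume into per-opinion shares; interventions and inter-opinion interactions would enter through the baseline, excitation, and attraction terms.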
The usability of event information on social media has been widely studied in recent years, and several surveys have reviewed specific types of events on social media using various techniques. Most existing event-detection methods are siloed: they target particular situations, which limits the overall picture of events unfolding consecutively on social media and ignores the crucial relationships between the evolution of these events. The many events that materialize daily in the social media sphere and jeopardize people’s safety can be referred to under the high-level concept of dangerous events. The scope of dangerous events is broad, yet no known work fully addresses this issue. This work introduces the term dangerous events and defines its practical scope, tracing how events originate from preceding events and how they relate to one another. Furthermore, it divides dangerous events into sentiment-, scenario-, and action-based dangerous events, grouped by their similarities. The existing research and methods related to event detection are surveyed, including available event datasets and knowledge bases that address the problem. Finally, the survey concludes with suggestions for future work and possible related challenges.