Viral challenges have erupted across multiple social media platforms. While social media users participate in prosocial challenges designed to support good causes, such as the Ice Bucket Challenge, other challenges (e.g., the Cinnamon Challenge) can be dangerous. To understand the influential factors, experiences, and reflections of young adults who had participated in a viral social media challenge, we conducted interviews with 30 college students (ages 18-27). We applied behavioral contagion theory as a qualitative lens to examine whether it could explain the factors that contributed to their participation. We found that behavioral contagion theory was useful but could not fully explain how and why young social media users engaged in viral challenges. Thematic analyses uncovered that overt social influence and intrinsic factors (i.e., social pressure, entertainment value, and attention-seeking) also played a key role in challenge participation. Additionally, we identified divergent patterns between prosocial and potentially risky social media challenges. Those who participated in prosocial challenges appeared to be more socially motivated: they saw more similarities between themselves and the individuals they observed performing the challenges and were more likely to be directly encouraged by friends to participate. In contrast, those who performed potentially risky challenges often saw few similarities with other challenge participants and received no direct encouragement from peers; yet, half of these participants said they would not have engaged in the challenge had they been more aware of the potential for physical harm. We consider the benefits and risks that viral social media challenges present for young adults, with the intent of optimizing these interactions by mitigating risks rather than discouraging participation altogether.
We collected Instagram data from 150 adolescents (ages 13-21) that included 15,547 private message conversations, of which 326 conversations were flagged as sexually risky by participants. Based on this data, we leveraged a human-centered machine learning approach to create sexual risk detection classifiers for youth social media conversations. Our Convolutional Neural Network (CNN) and Random Forest models outperformed other approaches in identifying sexual risks at the conversation level (AUC=0.88), and the CNN outperformed at the message level (AUC=0.85). We also trained classifiers to detect the risk severity level (i.e., safe, low, medium-high) of a given message, with the CNN outperforming other models (AUC=0.88). A feature analysis yielded deeper insights into patterns found within sexually safe versus unsafe conversations. We found that contextual features (e.g., age, gender, and relationship type) and Linguistic Inquiry and Word Count (LIWC) features contributed the most to accurately detecting sexual conversations that made youth feel uncomfortable or unsafe. Our analysis provides insights into the important factors and contextual features that enhance automated detection of sexual risks within youths' private conversations. As such, we make valuable contributions to the computational risk detection and adolescent online safety literature through our human-centered approach of collecting and ground-truth coding private social media conversations of youth for the purpose of risk classification.
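The pipeline above (one of the abstract's model families, Random Forest, trained on linguistic features and evaluated by AUC at the conversation level) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the corpus here is synthetic toy data, and TF-IDF n-grams stand in for the LIWC and contextual features the study actually used.

```python
# Hypothetical sketch of conversation-level risk classification:
# TF-IDF features + Random Forest, scored by ROC AUC.
# All data below is synthetic; real conversations are private.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in corpus: each item is a flattened conversation,
# labeled 1 (flagged as risky) or 0 (safe).
conversations = [
    "send me a pic right now", "what are you doing after school",
    "do not tell anyone about this", "did you finish the homework",
    "keep this a secret between us", "see you at practice tomorrow",
] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

X_train, X_test, y_train, y_test = train_test_split(
    conversations, labels, test_size=0.25, random_state=0, stratify=labels)

# TF-IDF unigrams/bigrams as a simple proxy for the linguistic
# (LIWC-style) features described in the abstract.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(Xtr, y_train)

# Score held-out conversations and report AUC, the metric the study uses.
scores = clf.predict_proba(Xte)[:, 1]
auc = roc_auc_score(y_test, scores)
print(f"conversation-level AUC: {auc:.2f}")
```

In the study itself, contextual features (age, gender, relationship type) were combined with linguistic features, and a CNN was trained alongside the Random Forest for message-level and severity classification.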