Chatbots have long been advocated as a way for computer-assisted language learning systems to support learners with conversational practice. A particular challenge in such systems is explaining mistakes stemming from ambiguous grammatical constructs. Misplaced modifiers, for instance, do not make sentences ungrammatical, but introduce ambiguity through the misplacement of an adverb or prepositional phrase (e.g., "I saw the Grand Canyon flying to New York," which suggests the canyon is airborne). In certain cases, the ambiguity gives rise to humor, which can serve to illustrate the mistake itself. We conducted an online experiment with 400 native English speakers to explore the use of a chatbot to harness such humor. In an interaction resembling an advanced grammar exercise, the chatbot presented participants with a phrase containing a misplaced modifier, explained the ambiguity in the phrase, acknowledged (or ignored) the humor that the ambiguity gave rise to, and suggested a correction. Participants then completed a questionnaire, rating the chatbot on ten traits. A quantitative analysis showed a significant increase in how participants rated the chatbot's personality, humor, and friendliness when it acknowledged the humor arising from the misplaced modifier. This effect was observed whether the acknowledgment was conveyed through verbal, nonverbal (emoji), or mixed cues.
Over the past decade, the use of chatbots for educational purposes has gained considerable traction. A similar trend has been observed on social coding platforms, where automated agents support software developers with tasks such as performing code reviews. While incorporating code reviews and social coding platforms into software engineering education has been found to be beneficial, challenges such as steep learning curves and privacy considerations remain barriers to adoption. Furthermore, no study has examined the role chatbots can play in supporting code review as a pedagogical tool. To help address this gap, we developed an online learning application that simulates the code review features of social coding platforms and allows instructors to interact with students through chatbot identities. We then embedded this application in a lesson on software engineering best practices and conducted a controlled in-class experiment. The experiment examined the effect of explaining content via chatbot identities on three aspects: (i) students' perceived usability of the lesson, (ii) their engagement with the code review process, and (iii) their learning gains. While our findings show that it is feasible to simulate the code review process within an online learning platform and achieve good usability, our quantitative analysis did not yield significant differences across treatment conditions for any of these aspects. Nevertheless, our qualitative results suggest that students expect explicit feedback when performing this type of exercise and could thus benefit from automated replies provided by an interactive chatbot. We plan to build on these findings to further explore this line of research in future work.
Today, intelligent voice assistants (VAs) such as Amazon's Alexa, Google Voice Assistant (GVA), and Apple's Siri have millions of users. These VAs collect and analyze large amounts of user data to improve their functionality. However, this collected data may contain sensitive information (e.g., personal voice recordings) that users might not feel comfortable sharing with others, raising significant privacy concerns. To counter such concerns, service providers like Google present their users with a personal data dashboard (called the 'My Activity' dashboard), allowing them to manage all voice-assistant-collected data. However, a real-world, GVA-data-driven understanding of user perceptions and preferences regarding this data (and data dashboards) remained relatively unexplored in prior research. To that end, in this work we focused on GVA users and investigated their perceptions and preferences regarding their data and the dashboard, grounding both in real GVA-collected user data. Specifically, we conducted an 80-participant survey-based user study to collect both general perceptions of GVA usage and desired privacy preferences for a stratified sample of each participant's GVA data. We show that most participants had only superficial knowledge of the types of data GVA collects. Worryingly, we found that participants felt uncomfortable sharing a non-trivial 17.7% of GVA-collected data elements with Google. The current My Activity dashboard, although useful, did not help long-time GVA users effectively manage their data privacy. Our real-data-driven study found that showing users even one sensitive data element can significantly improve the usability of data dashboards. To that end, we built a classifier that detects sensitive data for data dashboard recommendations with a 95% F1-score, a 76% improvement over baseline models. This extended version of our USENIX Security '22 paper includes appendices for interested readers.
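The abstract does not describe the sensitive-data classifier's design, so the following is only a minimal sketch of the general approach, assuming a TF-IDF text representation and logistic regression (neither is confirmed by the paper); the `queries` and `labels` variables are hypothetical stand-ins for annotated GVA data elements.

```python
# Minimal sketch of a sensitive-data classifier for dashboard
# recommendations. Model and features are assumptions; the paper does
# not specify them. Toy data stands in for real annotated GVA records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical voice-assistant transcripts with binary sensitivity
# annotations (1 = sensitive, 0 = not sensitive).
queries = [
    "call my doctor about the test results",
    "what's the weather tomorrow",
    "remind me to take my medication at 9 pm",
    "play some jazz music",
    "text my bank about the overdraft",
    "set a timer for ten minutes",
]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    queries, labels, test_size=0.5, random_state=0, stratify=labels)

# TF-IDF features over word unigrams/bigrams feeding a linear model.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Elements predicted sensitive could be surfaced first on the dashboard.
print("F1:", f1_score(y_test, clf.predict(X_test)))
```

A real system would be trained and evaluated on annotated GVA data rather than toy examples; the 95% F1-score cited above refers to the authors' own model and dataset, not this sketch.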