Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and to judge its potential consequences. However, transparency is often conceptualized in terms of the outcomes it is intended to bring about rather than the specific mechanisms that achieve those outcomes. We conducted an online experiment examining how different ways of explaining Facebook's News Feed algorithm affect participants' beliefs and judgments about the News Feed. We found that all of the explanations made participants more aware of how the system works and helped them determine whether the system is biased and whether they can control what they see. The explanations were less effective at helping participants evaluate the correctness of the system's output and form opinions about how sensible and consistent its behavior is. Based on these results, we present implications for the design of transparency mechanisms in algorithmic decision-making systems.
In "smart speaker'' digital assistant systems such as Google Home, there is no visual user interface, so people must learn about the system's capabilities and limitations by experimenting with different questions and commands. However, many new users give up quickly and limit their use to a few simple tasks. This is a problem for both the user and the system. Users who stop trying out new things cannot learn about new features and functionality, and the system receives less data upon which to base future improvements. Symbiosis---a mutually beneficial relationship---between AI systems like digital assistants and people is an important aspect of developing systems that are partners to humans and not just tools. In order to better understand requirements for symbiosis, we investigated the relationship between the types of digital assistant responses and users' subsequent questions, focusing on identifying interactions that were discouraging to users when speaking with a digital assistant. We conducted a user study with 20 participants who completed a series of information seeking tasks using the Google Home, and analyzed transcripts using a method based on applied conversation analysis. We found that the most common response from the Google Home, a version of "Sorry, I'm not sure how to help'', provided no feedback for participants to build on when forming their next question. However, responses that provided somewhat strange but tangentially related answers were actually more helpful for conversational grounding, which extended the interaction. We discuss the connection between grounding and symbiosis, and present recommendations for requirements for forming partnerships with digital assistants.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.