Activists, journalists, and scholars have long raised critical questions about the relationship between diversity, representation, and structural exclusion in data-intensive tools and services. We build on work mapping the emergent landscape of corporate AI ethics to center one outcome of these conversations: the incorporation of diversity and inclusion into corporate AI ethics activities. Using interpretive document analysis and analytic tools from the values in design field, we examine how diversity and inclusion work is articulated in public-facing AI ethics documentation produced by three companies that create application- and services-layer AI infrastructure: Google, Microsoft, and Salesforce. We find that as these documents make diversity and inclusion more tractable to engineers and technical clients, they reveal a drift away from civil rights justifications that resonates with the "managerialization of diversity" by corporations in the mid-1980s. The focus on technical artifacts, such as diverse and inclusive datasets, and the replacement of equity with fairness make ethical work more actionable for everyday practitioners. Yet these efforts appear divorced from broader DEI initiatives and from relevant subject matter experts who could provide needed context for nuanced decisions about how to operationalize these values and develop new solutions. Finally, diversity and inclusion, as configured by engineering logic, position firms not as "ethics owners" but as ethics allocators: while these companies claim expertise on AI ethics, the responsibility for defining whom diversity and inclusion are meant to protect, and where they are relevant, is pushed downstream to their customers.
CCS CONCEPTS: • Social and professional topics → Codes of ethics; Computing / technology policy; • Computing methodologies → Artificial intelligence; • Applied computing → Law.
In early 2017, a journalist and search engine expert wrote about "Google's biggest ever search quality crisis." Months later, Google hired him as the first Google "Search Liaison" (GSL). By October 2021, when someone posted to Twitter a screenshot of misleading Google Search results for "had a seizure now what," users tagged the Twitter account of the GSL in reply. The GSL frequently interacts publicly with people who complain about Google Search on Twitter. This article asks: what functions does the GSL serve for Google? We code and analyze six months of GSL responses to complaints on Twitter. We find that the GSL serves three functions: (1) to naturalize the logic undergirding Google Search by defending how it works, (2) to perform repair in response to complaints, and (3) to draw boundaries that control critique. This advances our understanding of how dominant technology companies respond to critiques and resist counter-imaginaries.