Abstract: Interest in applying sociological tools to analysing the social nature, antecedents and consequences of artificial intelligence (AI) has been rekindled in recent years. However, for researchers new to this field of enquiry, navigating the expansive literature can be challenging. This paper presents a practical way to help these researchers think about, search and read the literature more effectively. It divides the literature into three categories. Research in each category is informed by one analytic persp…
“…The need for a sociological conception of AI lies in the missing link to consistent empirical studies in the social sciences. Existing research is predominantly driven by technical possibilities (machine learning and neural networks) applied to social and economic phenomena rather than being spurred by theoretically grounded research questions (Liu, 2021). To be more explicit, in the former case AI systems are more likely to be blind to social complexity (inequality, diversity, power structures), while in the latter sociological insight could inform and drive technological development.…”
Different people have different perceptions of artificial intelligence (AI). Bringing together the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—is essential to developing a proper understanding of AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how bias and unfairness are among the major challenges to be addressed from such a sociotechnical perspective. First, as intelligent machines act as ‘magnifying glasses’ that automate existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. While none of these is a panacea, they all contribute to ensuring human control in novel practices that include requirements, design and development methodologies for a fairer AI. Second, we elaborate on the mounting attention to technological narratives, as technology is recognized as a social practice within a specific institutional context. Narratives not only reflect organizing visions for society but are also a tangible sign of the traditional lines of social, economic, and political inequality. We conclude with a call for a diverse approach within the AI community and a richer knowledge of narratives, as both help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and will benefit from a socio-technical perspective.
“…Social science research on inequality and ADM systems as well as interactions between algorithms and humans goes far beyond what we were able to cover here (e.g. Joyce et al, 2021; Liu, 2021). Other challenges range from, for example, accounting for the agency of algorithms (Lange et al, 2019), social and political challenges with respect to regulation (Mittelstadt, 2019), privacy (Anthony et al, 2017), and governance (Danaher et al, 2017), to artificial intelligence shifting power relationships (Kalluri, 2020), or other social impacts beyond inequality outcomes.…”
Section: Discussion
“…Scholars from various disciplines have called for examining algorithmic outcomes to avoid or mitigate undesired consequences of ADM (Kusner and Loftus, 2020; Zou and Schiebinger, 2018). Previous research from computer science (Mehrabi et al, 2019), legal studies (Wachter, 2020), and philosophical (Mittelstadt et al, 2016) perspectives discussed algorithmic, structural, and ethical problems with ADM. Joyce et al (2021) and Liu (2021) provide a general overview of sociological perspectives on artificial intelligence.…”
Academic and public debates are increasingly concerned with whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have great potential to contribute to research on the social consequences of ADM. Based on a process model of ADM systems, we demonstrate how the social sciences may advance the literature on the impacts of ADM on social inequality: by uncovering and mitigating biases in training data, by understanding data processing and analysis, and by studying the social contexts of algorithms in practice. Furthermore, we show that fairness notions need to be evaluated with respect to specific outcomes of ADM systems and to concrete social contexts. The social sciences can evaluate how individuals handle algorithmic decisions in practice and how single decisions aggregate into macro-level social outcomes. In this overview, we highlight how the social sciences can apply their knowledge of social stratification and of the substantive domains of ADM applications to advance the understanding of the social impacts of ADM.
“…Interest in artificial intelligence within sociology can be said to have accelerated in recent years (Liu, 2021). Discussions and studies of artificial intelligence within sociology can be classified under two broad paradigms: first, a humanist (human-centred) approach that treats artificial intelligence as a social and cultural construction and as a field of social problems; second, a post-humanist approach, influenced by post-phenomenology and Latourian sociology, which emphasizes that technology in general, and artificial intelligence in particular, should be thought of as a social agent.…”
Artificial intelligence, one of the most widely debated technological innovations of today's world, is extending its influence into every area of life day by day. Studies and statistics show that the impact of artificial intelligence on social life is continually growing and shaping society in different ways. Artificial intelligence therefore falls within the purview of the social sciences as much as of software and computer science. Sociological studies of artificial intelligence are increasing in both quality and quantity. Two main approaches can be observed in sociological work on artificial intelligence. Studies in the first group predominantly treat artificial intelligence, in the context of its social effects, as an instrument through which capital and political power maintain social control, as a phenomenon that reproduces social inequalities, and as a cultural phenomenon that carries the cultural biases of those who produce it. Studies in the second group position technology in general, and artificial intelligence in particular, as a "social actor" in the production of sociality. The aim of this study is to examine critically how artificial intelligence has been treated in the sociological literature and to open the sociology of artificial intelligence to discussion.