“…To discuss how ADM systems may impact social inequality, we adapt a “big data process model” (Weyer et al., 2018: 74), breaking the ADM process down into three steps. We discuss how social inequalities may be shaped in each step. Data generation: databases may be biased, for example due to historical discrimination against social groups or incomplete data availability. Data preparation and analysis: an algorithm may adopt or even reinforce biases that are already present in the data. This includes the choice and construction of the variables that serve as input for the algorithm, the choice of fairness metrics for identifying biases, and the choice of bias mitigation measures. Implementation: how ADM systems ultimately affect inequality depends on their implementation within social contexts (Weyer et al., 2018). Human decision-makers, if present, may handle algorithmic recommendations differently, and those affected by ADM-based decisions may differ in their reactions.…”
Section: A Process Model of ADM
Academic and public debates are increasingly concerned with the question of whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have great potential to contribute to research on the social consequences of ADM. Based on a process model of ADM systems, we demonstrate how the social sciences may advance the literature on the impacts of ADM on social inequality by uncovering and mitigating biases in training data, by understanding data processing and analysis, and by studying the social contexts of algorithms in practice. Furthermore, we show that fairness notions need to be evaluated with respect to specific outcomes of ADM systems and to concrete social contexts. The social sciences may evaluate how individuals handle algorithmic decisions in practice and how individual decisions aggregate into macro-level social outcomes. In this overview, we highlight how the social sciences can apply their knowledge of social stratification and of the substantive domains of ADM applications to advance the understanding of the social impacts of ADM.
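The passage above refers to "fairness metrics for identifying biases" in the data preparation and analysis step. As a minimal illustrative sketch (not taken from the cited work), one widely used metric, the demographic parity difference, compares the rates of favourable decisions across groups; the group labels and decision vector below are hypothetical:

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in favourable-decision rates between two groups.

    decisions: list of 0/1 algorithmic decisions (1 = favourable outcome)
    groups:    list of group labels, one per decision (e.g. "A", "B")
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "sketch assumes exactly two groups"
    rates = []
    for g in labels:
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical example: group A receives a favourable decision in 3 of 4
# cases, group B in only 1 of 4 — a 50-percentage-point gap.
gap = demographic_parity_difference(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["A", "A", "A", "A", "B", "B", "B", "B"],
)
```

A gap of zero would indicate demographic parity; which metric is appropriate, and what gap is acceptable, depends on the concrete social context the article emphasizes.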
“…Questions also arise as to the extent to which the social world can be represented by big data at all, whether personality traits can be read from digital behavioural data (Kosinski et al., 2013), and what distortions occur in the algorithmic creation of rankings (social scoring; recommendation services) and prognoses (Mau, 2017; Kinder-Kurlanda, 2020). On the other hand, it must not be overlooked that algorithmic big-data applications that support decisions, such as predictive policing, become particularly problematic when a meaningful theoretical foundation for the statistical data analysis is dispensed with or is only rudimentary (Weyer et al., 2018). From the political side, there are now often missionary calls for freedom of information, for data exchange to be maximized by means of open data, and for as much data as possible to be linked together (Harari, 2016).…”
Section: Classification and Special Features
Section: The Convergence of Technology and Religion: Implicit Everyday Re...
“…self-driving cars), of energy supply, or of superintelligent machines also calls for new modes of governance. Approaches to the centralized real-time control of decentralized systems, however, are only at the beginning of their development (Weyer et al., 2018). Hopes and efforts are being invested in the development of collective intelligence for networks that integrate both human and technical actors.…”
Section: Governance in a Complex Digital World