Summary

In this work, we investigate how students in fields adjacent to algorithm development perceive fairness, accountability, transparency, and ethics in algorithmic decision-making. Participants (N = 99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making using scenarios, in addition to defining algorithmic fairness and providing their views on possible causes of unfairness, transparency approaches, and accountability. The findings indicate that "agreeing" with a decision does not mean that the person "deserves the outcome," that perceiving the factors used in the decision-making as "appropriate" does not make the system's decision "fair," and that perceiving a system's decision as "not fair" affects the participants' "trust" in the system. Furthermore, fairness is most likely to be defined as the use of "objective factors," and participants identify the use of "sensitive attributes" as the most likely cause of unfairness.
While professionals are increasingly relying on algorithmic systems for decision-making, on some occasions algorithmic decisions may be perceived as biased or unjust. Prior work has examined the perception of algorithmic decision-making from the user's point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N = 99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that (1) 'agreeing' with a decision does not mean the person 'deserves the outcome', (2) perceiving the factors used in the decision-making as 'appropriate' does not make the system's decision 'fair', and (3) perceiving a system's decision as 'not fair' affects the participants' 'trust' in the system. In addition, participants found proportional distribution of benefits fairer than other approaches. Qualitative analysis provides further insights into the information participants find essential to judge and understand the fairness of an algorithmic decision-making system. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.
Professionals are increasingly relying on algorithmic systems for decision-making; however, algorithmic decisions are occasionally perceived as biased or unjust. Prior work has provided evidence that education can make a difference in young developers' perception of algorithmic fairness. In this paper, we investigate computer science students' perception of FATE (fairness, accountability, transparency, and ethics) in algorithmic decision-making and whether their views on FATE can be changed by attending a seminar on FATE topics. Participants attended a seminar on FATE in algorithmic decision-making and were asked to respond to two online questionnaires measuring their pre- and post-seminar perception of FATE. Results show that a short seminar can make a difference in students' understanding and perception of, as well as their attitude towards, FATE in algorithmic decision support. However, short seminars are not a complete, long-term solution for educating CS students on FATE. CS curricula need to be updated to include FATE topics if we want algorithmic decision support systems to be just to their users.

CCS CONCEPTS: • Human-centered computing → Empirical studies in HCI.