Rideshare and ride-pooling platforms use artificial-intelligence-based matching algorithms to pair riders and drivers. However, these platforms can induce unfairness through an unequal income distribution among drivers or through disparate treatment of riders. We investigate two methods to reduce inequality in ride-pooling platforms: incorporating fairness constraints into the matching objective and redistributing income toward under-compensated drivers. We evaluate both methods on New York City taxi data, measuring performance on both the rider and driver side. For the first method, we find that optimizing for driver fairness outperforms state-of-the-art models in the number of riders serviced, showing that optimizing for fairness can aid profitability in certain circumstances. For the second method, we explore income redistribution as a way to combat income inequality by having each driver keep an $r$ fraction of their income and contribute the rest to a redistribution pool. For certain values of $r$, most drivers earn near their Shapley value while still being incentivized to maximize income, thereby avoiding the free-rider problem and reducing income variability. The first method is useful because it improves both rider- and driver-side fairness; the second is useful because it improves fairness without affecting profitability; and the two methods can be combined to improve fairness on both sides.
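The redistribution mechanism above can be sketched in a few lines. This is a minimal illustration, not the abstract's actual policy: it assumes the pooled $(1-r)$ fraction is split equally among drivers, whereas the abstract leaves the redistribution rule unspecified (e.g., it could instead be weighted by Shapley value).

```python
def redistribute(incomes, r):
    """Each driver keeps an r fraction of their earned income; the
    remaining (1 - r) fraction goes into a common pool that is split
    equally among all drivers (equal split is an assumption here)."""
    pool = sum(inc * (1 - r) for inc in incomes)
    share = pool / len(incomes)
    return [inc * r + share for inc in incomes]

# With r = 0.5, a driver who earned 100 and one who earned 0
# end up with 75 and 25 respectively; total income is conserved.
final = redistribute([100.0, 0.0], 0.5)
```

Note how $r$ interpolates between full pooling ($r = 0$, everyone earns the mean, inviting free-riding) and no redistribution ($r = 1$, full income variability).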
The ubiquity of AI leads to situations where humans and AI work together, creating the need for learning-to-defer algorithms that determine how to partition tasks between AI and humans. We work to improve learning-to-defer algorithms when paired with specific individuals by incorporating two fine-tuning algorithms and testing their efficacy on both synthetic and image datasets. We find that fine-tuning can pick up on simple human skill patterns but struggles with nuance, and we suggest future work that uses robust semi-supervised methods to improve learning. DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.
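To make the task-partitioning idea concrete, here is a common confidence-threshold baseline for deferral. This is a generic sketch under assumed inputs, not the fine-tuning algorithms studied in the abstract, where the deferral rule would be learned for a specific individual.

```python
def human_ai_team(model_probs, human_preds, est_human_acc):
    """Baseline deferral rule: use the AI's prediction when its
    top-class confidence exceeds the estimated accuracy of this
    particular human; otherwise defer the example to the human.
    (A learned, fine-tuned rule would replace this threshold.)"""
    decisions = []
    for probs, h_pred in zip(model_probs, human_preds):
        conf = max(probs)
        ai_pred = probs.index(conf)
        decisions.append(ai_pred if conf >= est_human_acc else h_pred)
    return decisions

# Example: the AI is confident on the first input (keep class 0),
# uncertain on the second (defer to the human, who says class 1).
team = human_ai_team([[0.9, 0.1], [0.5, 0.5]], [1, 1], 0.8)
```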
Placing a human in the loop may abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks arising from human error and uncertainty within such human-AI interactions is an important and understudied issue. In this work, we study human uncertainty in the context of concept-based models, a family of AI systems that enable human feedback via concept interventions, where an expert intervenes on human-interpretable concepts relevant to the task. Prior work in this space often assumes that humans are oracles who are always certain and correct. Yet real-world decision-making by humans is prone to occasional mistakes and uncertainty. We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans. We show that training with uncertain concept labels may help mitigate weaknesses of concept-based systems when handling uncertain interventions. These results allow us to identify several open challenges, which we argue can be tackled through future multidisciplinary research on building interactive uncertainty-aware systems. To facilitate further research, we release a new elicitation platform, UElic, to collect uncertain feedback from humans in collaborative prediction tasks.
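The core operation of a concept intervention can be sketched as follows. This is a simplified illustration under assumed array shapes, not the models or datasets from the abstract: a human overwrites a subset of the model's predicted concept activations, and an uncertain intervention supplies a soft probability in [0, 1] rather than an oracle 0/1 value.

```python
import numpy as np

def intervene(predicted_concepts, human_labels, mask):
    """Replace predicted concept activations with human feedback
    wherever mask is True. human_labels may be soft probabilities
    in [0, 1], modeling uncertain rather than oracle interventions."""
    out = np.asarray(predicted_concepts, dtype=float).copy()
    mask = np.asarray(mask, dtype=bool)
    out[mask] = np.asarray(human_labels, dtype=float)[mask]
    return out

# The expert confidently corrects concept 0, gives an uncertain
# soft label for concept 1, and leaves concept 2 untouched.
c = intervene([0.2, 0.9, 0.4], [1.0, 0.7, 0.0], [True, True, False])
```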
Our research focuses on developing matching policies that pair drivers and riders for ride-pooling services. We aim to develop policies that balance efficiency and various forms of fairness. We did this through two methods: new matching algorithms that include a fairness term in the objective function, and income redistribution methods based on each driver's Shapley value. We tested these methods on New York City taxicab data to evaluate their performance and found that they succeed in reducing certain forms of unfairness.
Let $a_1, \ldots, a_L$ be relatively prime. We think of them as coin denominations. Let $M = \mathrm{lcm}(a_1, \ldots, a_L)$ and let $\mathrm{CH}(n)$ be the number of ways to make change of $n$ cents. We show there is an exact piecewise formula for $\mathrm{CH}(n)$. The pieces are polynomials that depend on $n \bmod M$. We show that many of the pieces agree on all but the constant term. These results are not new; however, our treatment is self-contained, unified, and elementary.
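The quantity $\mathrm{CH}(n)$ can be computed directly with the standard change-counting dynamic program, which is a handy check against any closed-form piecewise formula. This sketch is not from the abstract; it is the textbook recurrence.

```python
def count_change(n, denoms):
    """Number of ways to make n cents from the given coin
    denominations, where order of coins does not matter.
    ways[k] counts representations of k using the coins seen so far."""
    ways = [1] + [0] * n
    for a in denoms:
        for k in range(a, n + 1):
            ways[k] += ways[k - a]
    return ways[n]

# e.g., 10 cents from coins {1, 2, 5}: there are 10 ways.
total = count_change(10, [1, 2, 5])
```

Evaluating `count_change` on a full residue class $n \equiv c \pmod{M}$ and fitting a polynomial to the values is one way to observe the piecewise-polynomial structure described above.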
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.