Legislation and ethical guidelines around the globe call for effective human oversight of the use of AI-based systems in high-risk contexts – that is, oversight that reduces the risks associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., incorrect classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to gain a better understanding of the conditions for effective oversight. In this paper, we build on the literature on the management of imperfect automation to show that the reliable detection of errors or other deviant behavior is a prerequisite for the effective management of these errors and thus for effective oversight. We then propose that Signal Detection Theory (SDT) offers a promising framework for better understanding what affects people’s sensitivity (i.e., how well they are able to detect errors) and response bias (i.e., the tendency to report errors given perceived evidence of an error) in detecting errors. To demonstrate the broad applicability of an SDT perspective to the study of error detection when overseeing AI-based systems, we then explicate the specifics for the case of unfairness detection. Additionally, we propose factors (task-, system-, and person-related factors) that may affect the sensitivity and response bias of humans tasked with detecting unfairness associated with the use of AI-based systems. Finally, we discuss implications and future research directions for an SDT perspective on error detection.
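As a minimal illustration of the two SDT quantities referred to above, sensitivity (commonly denoted d′) and response bias (the criterion c) can be estimated from an overseer's hit rate (the proportion of actual system errors they flag) and false-alarm rate (the proportion of correct outputs they wrongly flag). The sketch below uses the standard equal-variance Gaussian SDT estimates; the function name and the example rates are ours, not from any particular study.

```python
from statistics import NormalDist

# Inverse of the standard normal CDF (the z-transform used in SDT).
z = NormalDist().inv_cdf

def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, c), the standard equal-variance SDT estimates.

    d' = z(H) - z(F)        -- sensitivity: how well errors are discriminated
    c  = -(z(H) + z(F)) / 2 -- response bias: negative = liberal (reports
                               errors readily), positive = conservative
    """
    zh, zf = z(hit_rate), z(false_alarm_rate)
    return zh - zf, -(zh + zf) / 2

# Hypothetical overseer: flags 90% of actual system errors (hits) but also
# flags 20% of correct outputs as erroneous (false alarms).
d_prime, c = sdt_measures(0.9, 0.2)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")  # prints d' = 2.12, c = -0.22
```

The negative criterion in this example indicates a liberal bias – the overseer tends to report an error even on weak evidence – while d′ captures how well they separate erroneous from correct system outputs independently of that tendency.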