This dissertation examines the use of machine learning in management research. Specifically, it introduces a new research approach, algorithm-supported abduction, and illustrates the process using two constructs relevant to the workplace: racism and sexism.

Chapter 1 provides a practice-oriented review of machine learning in management studies. It begins by discussing the limitations of conventional research methods and introduces algorithm-supported abduction, in which machine learning is used to identify empirical patterns in data that are then combined with theory to generate and test formal hypotheses. The chapter then covers the fundamentals shared across most machine learning approaches, including the types of data used in machine learning analyses and the general forms of machine learning models used in management, followed by the basic process of building machine learning models and the metrics used to assess model performance. It then examines the machine learning approaches most popular in management in greater detail: topic classifiers and topic modeling for analyzing text data, and decision tree and neural network models for analyzing numeric data. Finally, it illustrates how these models have been applied by surveying the empirical management research that uses machine learning.

Chapter 2 illustrates the process of algorithm-supported abduction by examining the psychological predictors of racism. Racism in the workplace is one of the most pressing social issues facing businesses today. Most existing research on value-based antecedents of racism focuses on values related to conservatism, which are difficult to change. In this project, I sought
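To make the model-building and evaluation workflow described in Chapter 1 concrete, the sketch below fits a decision tree classifier, reports standard performance metrics, and then inspects the empirical patterns (feature importances) that algorithm-supported abduction would pair with theory to generate hypotheses. The synthetic data, variable counts, and model settings are illustrative assumptions, not the dissertation's actual analysis.

```python
# Minimal sketch of a split-fit-evaluate-inspect workflow with scikit-learn.
# All data here are synthetic placeholders; the workflow is the point.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(seed=0)

# Hypothetical survey-style data: 500 respondents, 5 numeric predictors.
X = rng.normal(size=(500, 5))
# Hypothetical binary outcome driven mostly by the first two predictors.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold out a test set so the metrics reflect generalization,
# not memorization of the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Standard performance metrics of the kind surveyed in Chapter 1.
print("accuracy: ", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))

# In algorithm-supported abduction, the fitted model is not the end product:
# empirical patterns it surfaces (here, feature importances) are examined
# alongside theory to generate formal hypotheses for confirmatory testing.
print("feature importances:", model.feature_importances_)
```

A topic model or neural network could stand in for the decision tree depending on whether the data are text or numeric; the split, fit, evaluate, and inspect loop is the same.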
Laypeople tend to distrust innovative technologies, especially when the product involves complex systems. Although researchers try to increase public trust by highlighting the accuracy of AI technologies, many people still do not trust AI products because they do not understand how the AI model generates its predictions. One solution is to help laypeople understand how AI models work using functional analogies, which explain AI systems by building on consumers' common knowledge and drawing parallels to phenomena they are already familiar with. We present a case study of a black-box AI-based Covid-19 detection product, GeNose C-19, developed by the Indonesian government. We find that explaining how GeNose works using functional analogies increases both Indonesian and American lay consumers' trust in GeNose. More broadly, the findings suggest that promoters of AI products can use analogies to explain how a product works and thereby increase lay consumers' trust in it.

Artificial intelligence (AI)-based products often succeed in the market, but they also often fail. As with traditional technological products, some market failures are due to technological shortcomings, whereas others are due to marketing overpromises that the technology could not deliver.