Strategic organizational decision making in today’s complex world is a dynamic process characterized by uncertainty. Diverse groups of responsible employees must therefore acquire and correctly interpret a large amount and variety of information in order to derive adequate alternatives. The technological potential of artificial intelligence (AI) is expected to offer further support, although research in this area is still developing. However, because the technology is designed to have capabilities beyond those of traditional machines, its effects on the established division of tasks and definition of roles in the human–machine relationship are being discussed with increasing awareness. Based on a systematic literature review combined with content analysis, this article provides an overview of the possibilities that current research identifies for integrating AI into organizational decision making under uncertainty. The findings are summarized in a conceptual model that first explains how humans can use AI for decision making under uncertainty and then identifies the challenges, preconditions, and consequences that must be considered. While research on organizational structures, the choice of AI application, and the possibilities of knowledge management is extensive, a clear recommendation for ethical frameworks, despite being identified as a crucial foundation, is still missing. In addition, AI, unlike traditional machines, can amplify problems inherent in the decision-making process rather than help to reduce them. As a result, human responsibility increases, while the capabilities needed to use the technology differ from those required for other machines, making education necessary. These findings make the study valuable for both researchers and practitioners.