Research in artificial intelligence (AI) began in the twentieth century, but it was not until 2012 that modern artificial neural network models advanced machine learning considerably; over the past ten years, both computer vision and natural language processing have improved markedly as a result. AI development has accelerated rapidly, leaving open questions about the potential benefits and risks of these dynamics and how the risks might be managed. This paper discusses three major risks, all lying in the domain of AI safety engineering: the problem of AI alignment, the problem of AI abuse, and the problem of information control. The discussion traces a short history of AI development, briefly touches on the benefits and risks, and ultimately argues that the risks might be mitigated through strong collaborations and awareness concerning trustworthy AI. Implications for the (digital) humanities are discussed.