There are many definitions of what an artificial intelligence (AI) system is. This chapter emphasises the capacity of AI systems to mimic human behaviour when solving complex tasks in real-world environments. After introducing different types of AI systems, the chapter briefly distinguishes research into the inner structure of AI systems from research into their uses. Since much of the literature is already devoted to the ethical concerns surrounding the use of AI, this chapter addresses the problem of accountability with respect to opaque, human-like AI systems. In addition, the chapter explains how research ethics in AI differs fundamentally from research ethics in other fields: engineers in this field often aim to build powerful autonomous systems that tend to be opaque. The aim, in other words, is to build entities whose inner workings become unknown to their creators as soon as those entities begin the learning process. A split-accountability model is proposed to address this specificity.