Artificial Intelligence (AI) is about making computers that do the sorts of things that minds can do, and as we progress towards this goal, we tend increasingly to delegate human tasks to machines. However, AI systems usually do these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would previously have brought to the activity are utterly absent. It is therefore crucial to ask which features of minds we have replicated, which are missing, and whether that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is utterly missing from current mainstream AI. In this paper we ask what reflective AI might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents and highlight ways forward.

The definition also implies that there are things that human minds currently do that, in the future, computers might do instead. This is not only true now; it has been the case throughout human history, and will likely remain so far into the future. It is this transference of activity that gives rise to the seemingly constant stream of new 'AI technologies', which mostly do things that human minds used to do, or wished to do. This is, of course, also the source of many of the issues and benefits that arise from the creation and use of AI technology: as we figure out how to replicate some of the things that minds can do, we delegate those things to machines.
This typically brings increased automation, scale, and efficiency, which themselves contain the seeds of both enormous potential social and economic benefit, and real potential danger and strife.

Further, we can notice that these AI technologies usually do this with an unusual (im)balance of insight and understanding. New, deeper insight and understanding often arise from the models employed, while many of the 'qualities' that a human mind would previously have brought to the activity are utterly absent.

In designing and analysing embodied AI technologies, the concept of an intelligent agent is central, and it necessitates descriptions that are abstracted from the natural intelligences they are inspired by or modelled on. This abstraction, in turn, means that the notion of an AI agent only partially captures the mental, cognitive, and physical features of natural intelligence. Hence, it is important to ask: are the features we have included sufficient for what is needed? Are we satisfied with leaving out those we have?

Frank and Virginia Dignum have recently reminded us how powerful the concept of an agent is in AI [2]. They have also pointed out some of the paradigmatic failures of agent-based modelling (ABM) and multi-agent systems (MAS). ABM