As technology becomes more sophisticated, the problems of appropriate function allocation, mode errors, and misuse of automation will continue to challenge system safety and efficiency. Addressing these problems will require the field of cognitive ergonomics to consider three important challenges. First, understanding the human implications of self-organizing, multi-agent automation may require recognizing its unique monitoring and control requirements. While current research has studied how people control a small number (2–10) of agents, the future will likely introduce the challenge of supervising hundreds of agents. Multi-agent automation that consists of hundreds of loosely connected intelligent agents may exhibit powerful new adaptive behaviours that may be difficult for people to understand and manage. Secondly, understanding human interaction with increasingly complex automation may require more comprehensive analysis and modelling techniques. Current analysis techniques, such as analysis of variance, tend to rely upon static representations of the human–system interaction when dynamic representations are needed. Thirdly, understanding human interaction with this increasingly complex automation may benefit from new constructs to explain behaviour. The constructs of the information processing approach may not be sufficient to explain reliance on multi-agent automation. Addressing the challenge of this new technology will require a theoretical understanding of human behaviour that goes beyond a task-based description of well-defined scenarios. Cognitive ergonomics must develop an understanding of the basic cognitive demands associated with managing multi-agent automation, tools that consider the dynamics of the interaction, and constructs that address the dynamic decision making that governs reliance.
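The claim that many loosely connected agents can produce aggregate behaviour that is hard to anticipate from any individual agent can be illustrated with a minimal sketch. This simulation is not from the article; the agent count, the binary states, and the majority-of-random-neighbours update rule are all hypothetical choices made only to show how simple local rules yield global dynamics that a supervisor cannot read off from a single agent.

```python
import random

def simulate(n_agents=200, n_steps=50, k_neighbours=3, seed=0):
    """Each agent holds a binary state (e.g. one of two task modes) and
    repeatedly adopts the majority state of k randomly sampled peers.
    Returns the fraction of agents in state 1 after each step."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    history = []
    for _ in range(n_steps):
        new_states = []
        for _ in range(n_agents):
            # Loose coupling: each agent polls a few random peers.
            neighbours = rng.sample(range(n_agents), k_neighbours)
            votes = sum(states[j] for j in neighbours)
            new_states.append(1 if votes * 2 > k_neighbours else 0)
        states = new_states
        history.append(sum(states) / n_agents)
    return history

history = simulate()
```

Although every agent follows the same three-line rule, the population-level trajectory in `history` depends on the random coupling pattern and initial conditions, which is the kind of emergent dynamic a human supervisor of hundreds of agents would have to monitor.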