Assistive robots can increase the autonomy and quality of life of people with disabilities, and Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate their use. In this paper, we argue that to fulfil this potential and accommodate a more diverse user base, AR UIs should proactively identify user affordances, i.e., action options that are possible in the current context. However, current AR UIs for the control of assistive robots do not readily combine atomic actions and can therefore only offer individual actions as options. To overcome this limitation, we propose Affordance-Aware Proactive Planning ((AP)²), an algorithm that proactively identifies feasible sequences of atomic actions by leveraging large datasets of plans expressed in natural language. (AP)² combines natural language processing and planning algorithms to provide the most relevant and feasible plans given the user's context, and offers mechanisms to reduce the time required to generate and present these plans as options to the user. Our main contributions are: 1) we propose a method that allows affordance-aware AR UIs for robot control to combine atomic actions and provide higher-level options to the user, 2) we provide a means of dynamically updating goal states and the number of semantically relevant plans that are analysed, improving interactivity for the user, and 3) we validate the applicability of the proposed architecture with an assistive mobile manipulator deployed in a bedroom environment and controlled through an AR UI.
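To make the described pipeline concrete, the following is a minimal sketch of the retrieve-rank-filter loop outlined above: semantically relevant plans are retrieved from a natural-language plan dataset, infeasible ones are filtered out using the current affordances, and the top candidates are returned as options. All names here (`Plan`, `embed`, `feasible`, `propose_options`) and the toy embedding are illustrative assumptions, not the authors' actual implementation; a real system would use a proper sentence-embedding model and a planner-backed feasibility check.

```python
# Illustrative sketch of an (AP)^2-style retrieve-rank-filter loop.
# All names and data below are hypothetical, invented for this example.
from dataclasses import dataclass
import math


@dataclass
class Plan:
    description: str    # natural-language description from the plan dataset
    actions: list[str]  # sequence of atomic robot actions


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def embed(text: str) -> list[float]:
    """Stand-in for a sentence-embedding model; a trivial
    bag-of-letters vector is used here only for runnability."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def feasible(plan: Plan, affordances: set[str]) -> bool:
    """A plan is offered as an option only if every atomic action
    it needs is currently afforded by the robot and environment."""
    return all(a in affordances for a in plan.actions)


def propose_options(context: str, dataset: list[Plan],
                    affordances: set[str], k: int = 3) -> list[Plan]:
    """Rank plans by semantic relevance to the user's context, keep
    the feasible ones, and return the top-k as higher-level options.
    `k` bounds how many plans are analysed, trading coverage for
    interactivity (response time)."""
    ctx = embed(context)
    ranked = sorted(dataset,
                    key=lambda p: cosine(embed(p.description), ctx),
                    reverse=True)
    return [p for p in ranked if feasible(p, affordances)][:k]


if __name__ == "__main__":
    dataset = [
        Plan("bring a glass of water",
             ["navigate_to_kitchen", "grasp_glass", "fill_glass",
              "navigate_to_user"]),
        Plan("open the bedroom window",
             ["navigate_to_window", "open_window"]),
        Plan("fetch the book from the shelf",
             ["navigate_to_shelf", "grasp_book", "navigate_to_user"]),
    ]
    affordances = {"navigate_to_window", "open_window", "navigate_to_shelf",
                   "grasp_book", "navigate_to_user"}
    for plan in propose_options("user is in bed and the room is warm",
                                dataset, affordances):
        print(plan.description, "->", plan.actions)
```

In this sketch, dynamically updating the goal state corresponds to re-running `propose_options` with a new `context`, and adjusting `k` corresponds to changing the number of semantically relevant plans analysed per query.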