We introduce a new research area in Visual Analytics (VA) that aims to bridge the existing gaps between methods of interactive Machine Learning (ML) and eXplainable Artificial Intelligence (XAI), on one side, and human minds, on the other. The gaps are, first, a conceptual mismatch between ML/XAI outputs and human mental models and ways of reasoning and, second, a mismatch between the quantity and level of detail of the presented information and human capabilities to perceive and understand it. A grand challenge is to adapt ML and XAI to human goals, concepts, values, and ways of thinking. Complementing the current efforts in XAI towards solving this challenge, VA can contribute by exploiting the potential of visualization as an effective means of communicating information to humans and a strong trigger of human abstractive perception and thinking. We propose a cross-disciplinary research framework and formulate research directions for VA.

THE IMPORTANCE of involving humans in the process of creating and training Machine Learning (ML) models is now widely recognized in the ML community [1]. It is argued that the humans involved in this process need to understand what the machine is doing and how it uses their inputs; hence, the machine must be able to explain its behavior to its users. Understanding ML models is also of critical importance for deciding whether they can be adopted for practical use. The explainability of models may even be more important than their performance, especially in high-stakes domains. In response to the need to explain opaque ML models ("black boxes") to users, the research field of eXplainable Artificial Intelligence (XAI) has recently emerged [8]. Work in this field was boosted by the European Parliament's adoption of the General Data Protection Regulation (GDPR), which introduces the right of