Ensuring model consistency is a key concern in model-based development. Model inconsistency detection has therefore received significant attention in recent years. To be useful, inconsistency detection must be sound, efficient, and scalable. Incremental detection is one way to achieve efficiency in the presence of large models. In most existing approaches, incrementality comes at the expense of memory consumption, which grows proportionally to the model size and the number of consistency rules. In this paper, we propose a new incremental inconsistency detection approach that consumes only a small, model-size-independent amount of memory. It will therefore scale better to projects using large models and many consistency rules.

Many approaches to inconsistency management exist in the literature. The first generation of approaches uses inconsistency rules and executes them on models in batch [9][10][11][12][13][14]. The limitation of these first-generation approaches, also called batch checkers, is efficiency: each time a model is modified, all inconsistency rules have to be rechecked over the entire model, which turns out to be highly time-consuming. As reported by [3], checking large models with batch checkers can take hours to complete, which is not sustainable in effective development processes.

A second generation of approaches has emerged to address this efficiency problem. These approaches, also called incremental checkers, limit the set of rules to recheck (rule reduction) and/or the model elements to consider (scope reduction) after each model change [15][16][17][18]; a simplified sketch of both strategies appears at the end of this section. The most efficient approaches rely heavily on cache memory, thus introducing a memory overhead proportional to the size of the model and requiring a mechanism to manage the cache throughout the model's life cycle. An experimental study shows that one of the fastest current incremental checkers has a cache that induces a linear memory overhead on several real-world Unified Modeling Language (UML) models, but whose worst-case memory overhead is quadratic in the number of elements [15]. With the increasing complexity and size of industrial design models, we believe that memory management (scalability) in inconsistency management is as important as response time (efficiency). This is also confirmed by our industrial partners, who conducted a case study that confirms the critical need for better memory management for large and complex models [19].

In this paper, we present an incremental checker that puts more emphasis on scalability than on efficiency. It has no cache and requires only a fixed memory overhead to perform both rule and scope reductions. Even though it cannot perform reduction in some cases (explained in Section 5) and is therefore less efficient in those cases than checkers that use a cache, the intensive benchmark we performed on large real-world open-source models shows that it is fast enough (rechecks are perf...
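To make the batch/incremental contrast concrete, the sketch below juxtaposes the two strategies in Python. It is a minimal illustration under our own assumptions: the Rule class, the scope_index cache, and the toy naming rule are hypothetical stand-ins, not the implementation of this paper or of any cited checker.

# Illustrative sketch only: contrasts first-generation batch rechecking
# with second-generation scope-reduced incremental rechecking.
# Rule, scope_index, and the naming rule are hypothetical assumptions.

class Rule:
    def __init__(self, name, applies_to, holds):
        self.name = name
        self.applies_to = applies_to  # predicate: is the element in this rule's scope?
        self.holds = holds            # predicate: is the element consistent?

def batch_check(elements, rules):
    """Batch checker: after any change, recheck every rule against every
    element, costing O(|rules| * |elements|) per change."""
    return [(r.name, e) for r in rules for e in elements
            if r.applies_to(e) and not r.holds(e)]

def incremental_check(changed, scope_index):
    """Incremental checker: recheck only the rules whose cached scope
    contains the changed element. The scope_index (element id -> rules)
    is the cache whose size grows with the model; avoiding such a cache
    is the goal of the approach presented in this paper."""
    return [(r.name, changed) for r in scope_index.get(id(changed), [])
            if not r.holds(changed)]

# Toy usage: a rule requiring every class to have a non-empty name.
elements = [{"kind": "class", "name": "Order"}, {"kind": "class", "name": ""}]
rules = [Rule("named-class",
              applies_to=lambda e: e["kind"] == "class",
              holds=lambda e: bool(e["name"]))]
scope_index = {id(e): [r for r in rules if r.applies_to(e)] for e in elements}

print(batch_check(elements, rules))               # rechecks the whole model
print(incremental_check(elements[1], scope_index))  # rechecks one element

After a change, the batch checker's cost is proportional to the size of the whole model, whereas the incremental checker touches only the rules indexed for the changed element; the price is the memory held by scope_index, which is exactly the overhead that a cache-free approach such as ours avoids.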