Abstract-This paper reports on the use of generic linear constraint (GLC) node removal as a method to control the computational complexity of long-term simultaneous localization and mapping. We experimentally demonstrate that GLC provides a principled and flexible tool enabling a wide variety of complexity management schemes. Specifically, we consider two main classes: batch multi-session node removal, in which nodes are removed in a batch operation between mapping sessions, and online node removal, in which nodes are removed as the robot operates. Results are shown for 34.9 h of real-world indoor-outdoor data covering 147.4 km collected over 27 mapping sessions spanning a period of 15 months.
I. INTRODUCTION

Graph-based simultaneous localization and mapping (SLAM) [1]-[7] has been used to successfully solve many challenging SLAM problems in robotics. In graph SLAM, the problem of finding the optimal configuration of historic robot poses (and, optionally, landmark locations) is associated with a Markov random field or factor graph. In the factor graph representation, robot poses are represented by nodes and measurements between nodes by factors. Under the assumption of Gaussian measurement noise, the graph represents a least-squares optimization problem (sketched below). The computational complexity of this problem is dictated by the density of connectivity within the graph and by the number of nodes and factors it contains.

Unfortunately, the standard formulation of graph SLAM requires that nodes be continually added to the graph for localization. This is a problem for long-term applications, as the computational complexity of the graph becomes dependent not only on the spatial extent of the environment, but also on the duration of the exploration (Fig. 1(b)).

Early filtering-based works [8], [9], and more recently [10], have focused on controlling computational complexity by enforcing sparse connectivity in the graph. In [11], an information-theoretic approach is used to slow the rate of graph growth by avoiding the addition of uninformative poses. In [12], when the robot revisits a previously explored location, it avoids adding new nodes and instead adds links between existing nodes.
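For concreteness, the least-squares problem referred to above can be written in the standard graph-SLAM form; the notation here is a generic sketch and not necessarily that used in the remainder of this paper. Given the set of node variables $\mathbf{X}$, measurements $\mathbf{z}_i$ corrupted by zero-mean Gaussian noise with covariance $\Sigma_i$, and corresponding measurement models $h_i(\cdot)$, the maximum likelihood estimate is

$$\mathbf{X}^{*} = \operatorname*{argmin}_{\mathbf{X}} \sum_{i} \left\| \mathbf{z}_i - h_i(\mathbf{X}) \right\|^{2}_{\Sigma_i},$$

where $\|\mathbf{e}\|^{2}_{\Sigma} = \mathbf{e}^{\top}\Sigma^{-1}\mathbf{e}$ is the squared Mahalanobis distance. Each factor in the graph contributes one summand, so the cost of solving this problem grows with the number of nodes and factors and with the density of their connectivity, which is what motivates removing nodes from the graph.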