Increasingly sophisticated algorithms running on autonomous agents and robots with massive computational capabilities have enabled the solution of complex tasks in uncertain environments, such as the mapping of disaster areas and search-and-rescue operations. For agents to explore and interact with the environment, it is important that they have a coherent view of this environment and of their position within it (see, for example, [19, 25] and references therein). Such situational awareness is typically achieved through simultaneous localisation and mapping (SLAM) [32] and through localisation based on heterogeneous sensor fusion [36]. While research on situational awareness has traditionally focused on improving localisation accuracy, the focus has now shifted to localisation methods that account for the agents' computational limitations, their energy and communication constraints, as well as the agent's higher-level task [8, 10]; for instance, a higher-level task may be the navigation of the robot in the environment from its current position to its final position. This chapter considers the problem of how position uncertainty in

M. Fröhle • H. Wymeersch