Driven by advances in deep neural network models that fuse multimodal inputs such as RGB and depth representations to accurately understand the semantics of an environment (e.g., objects of different classes, obstacles, etc.), ground robots have improved dramatically at navigating unknown environments. Relying on a single, limited perspective, however, can lead to suboptimal paths that are wasteful and quickly drain their batteries, especially during long-horizon navigation. We consider a special class of ground robots that are air-deployed, and pose the central question: can we leverage aerial perspectives of differing resolutions and fields of view from such air-to-ground robots to achieve superior terrain-aware navigation? We posit that a key enabler of this research direction is collaboration among such robots to collectively update their route plans, leveraging advances in long-range communication and on-board computing. While each robot can capture a sequence of high-resolution images during its descent, intelligent, lightweight on-board pre-processing can dramatically reduce the amount of data that must be shared with its peers over severely bandwidth-limited long-range communication channels (e.g., over sub-gigahertz frequencies). In this paper, we discuss use cases and the key technical challenges that must be resolved to realize our vision of collaborative, multi-resolution terrain awareness for air-to-ground robots.
CCS CONCEPTS
• Computer systems organization → Embedded and cyber-physical systems;