In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing the state of the art in object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems through multi-modal data obtained from diverse sensors, and pushes forward the development of computer vision and autopilot algorithms targeted at visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and Pan-Tilt Unit angles, simulated in various climate conditions and landing scenarios. Since real-world automated landing data are very limited, the proposed dataset provides the necessary foundation for vision-based tasks such as flying vehicle detection, key point localisation, and pose estimation. In addition to providing plentiful and scene-rich data, the developed dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is openly available at https://github.com/micros-uav/micros_air2land.
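To make the structure of a sample concrete, the sketch below shows one plausible way to iterate over synchronised stereo frames together with the vehicle pose and Pan-Tilt Unit angles. The directory layout, file names (a per-sequence sensors.csv, left/ and right/ image folders) and column names are assumptions for illustration only; the repository linked above documents the actual format.

```python
# Minimal loading sketch for Air2Land-style data. All paths, file names and
# CSV column names below are hypothetical; consult the repository at
# https://github.com/micros-uav/micros_air2land for the real layout.

import csv
from dataclasses import dataclass
from pathlib import Path

import cv2
import numpy as np


@dataclass
class Air2LandFrame:
    left: np.ndarray                  # left stereo image
    right: np.ndarray                 # right stereo image
    vehicle_pose: list                # assumed 6-DoF pose: x, y, z, roll, pitch, yaw
    ptu_angles: tuple                 # assumed pan and tilt angles of the PTU


def load_sequence(root: Path):
    """Yield synchronised stereo frames with pose and PTU readings."""
    with open(root / "sensors.csv", newline="") as f:   # assumed metadata file
        for row in csv.DictReader(f):
            left = cv2.imread(str(root / "left" / row["frame"]))
            right = cv2.imread(str(root / "right" / row["frame"]))
            pose = [float(row[k]) for k in ("x", "y", "z", "roll", "pitch", "yaw")]
            ptu = (float(row["pan"]), float(row["tilt"]))
            yield Air2LandFrame(left, right, pose, ptu)


# Example usage: iterate over one (hypothetical) landing sequence.
if __name__ == "__main__":
    for frame in load_sequence(Path("air2land/sequence_01")):
        print(frame.vehicle_pose, frame.ptu_angles)
```

Because the pose and PTU angles are keyed to the same row as each stereo pair, a loader of this shape keeps the vision and control modalities aligned per frame, which is the property the dataset is designed to provide.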