Few-shot aerial image semantic segmentation is a challenging task that requires precisely parsing unseen-category objects in query aerial images with only a limited number of annotated support aerial images. Formally, category prototypes are extracted from support samples to segment query images in a pixel-to-pixel matching manner. However, objects in aerial images are often distributed with arbitrary orientations, and varying orientations can cause dramatic feature changes. This unique property of aerial images renders the conventional matching manner, which takes no account of orientations, unable to activate same-category objects with different orientations. Furthermore, the oscillation of confidence scores in existing rotation-insensitive algorithms, engendered by the striking changes of object orientations, often leads to false recognition of lower-scored rotated semantic objects. To tackle these challenges, inspired by the intrinsic rotation invariance in aerial images, we propose a novel few-shot rotation-invariant aerial semantic segmentation network (FRINet) to efficiently segment aerial semantic objects with diverse orientations. Specifically, by extracting orientation-varying yet category-consistent support information, FRINet provides rotation-adaptive matching for each query feature in a feature-aggregation manner. Meanwhile, to encourage consistent predictions for aerial objects with arbitrary orientations, segmentation predictions from different orientations are supervised by the same label and further fused to obtain the final rotation-invariant prediction in a complementary manner. Moreover, to provide a better solution search space, the backbones are newly pre-trained on the base categories to boost the segmentation performance. Extensive experiments on the few-shot aerial image semantic segmentation benchmark demonstrate that the proposed FRINet achieves new state-of-the-art performance.
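The complementary fusion of orientation-wise predictions described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the predictor interface `predict_fn`, the restriction to 90-degree rotations, and plain averaging of aligned logits are all assumptions made for clarity.

```python
import numpy as np

def rotation_invariant_predict(predict_fn, image, n_rots=4):
    """Fuse per-orientation predictions into one rotation-invariant output.

    predict_fn: hypothetical segmentation model mapping an (H, W, C) image
                to per-pixel class logits of shape (H, W, K).
    The query image is rotated, predicted, rotated back into the original
    frame, and the aligned predictions are averaged (a simple stand-in for
    the complementary fusion described in the abstract).
    """
    fused = None
    for k in range(n_rots):
        rotated = np.rot90(image, k, axes=(0, 1))    # rotate the query image
        logits = predict_fn(rotated)                 # orientation-specific prediction
        aligned = np.rot90(logits, -k, axes=(0, 1))  # undo the rotation
        fused = aligned if fused is None else fused + aligned
    return fused / n_rots
```

A sanity check of the geometry: for a trivially rotation-equivariant "predictor" such as the identity function, every aligned prediction coincides with the input, so the fused output equals the input exactly.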
The code is available at https://github.com/caoql98/FRINet.