Federated Learning (FL) has emerged as a critical technology for training deep learning models on massive decentralized IoT data directly on-device. While FL preserves data privacy, it suffers from challenges such as synchronization latency during model aggregation and single-point failures. To address these issues, Hierarchical Federated Learning (HFL) has been proposed, which deploys edge servers near edge devices to reduce synchronization latency and improve resilience against single-point failures. However, the assumption of labeled edge devices, i.e., labeled data on edge devices, often proves impractical. Recent research on semi-supervised FL enables model training with unlabeled edge devices, yet integrating these methods into HFL raises challenges in balancing model accuracy and training efficiency. This paper introduces FLAGS, a novel semi-supervised HFL system with adaptive global aggregation intervals. Building on the HFL architecture, FLAGS alternates training between labeled cloud data and unlabeled edge devices. Through an adaptive global aggregation interval control algorithm, FLAGS balances model performance against training efficiency. Evaluation on CIFAR-10 demonstrates that FLAGS outperforms baselines within designated time budgets.