The synergy of edge computing and Machine Learning (ML) holds immense potential for revolutionizing Internet of Things (IoT) applications, particularly in scenarios characterized by high-speed, continuous data generation. Offline ML algorithms are unsuitable for data streams, as they require a complete dataset to build a prediction model. Online Machine Learning (OML), a branch of ML that embraces the fact that learning environments change over time, instead trains the model incrementally on each new observation at production time. Most OML algorithms are derived from an offline counterpart and exhibit different bias-variance behaviors. Finding a suitable estimator for a given ML problem therefore remains a challenge. In this context, ensemble learning emerges as a promising approach for balancing the bias-variance tradeoff and improving prediction accuracy by aggregating the outputs of multiple ML models. This paper introduces a novel ensemble method tailored for edge computing environments, designed to operate efficiently on resource-constrained devices while accommodating various online learning scenarios. The primary objective is to enhance predictive accuracy at the edge. We conducted extensive experimental evaluations of the proposed ensemble's predictive performance using synthetic and real datasets. Our ensemble outperformed state-of-the-art data stream algorithms and ensemble regressors across various regression metrics. Furthermore, we evaluated the ensemble's performance in the context of auto-scaling for Virtual Network Function (VNF)-based applications operating at the network's edge.