2021
DOI: 10.1109/tnsm.2021.3052837
Machine Learning-Based Scaling Management for Kubernetes Edge Clusters

Cited by 92 publications (44 citation statements)
References 28 publications
“…This approach improves the automation of HPA and optimizes application performance under load fluctuations. L. Toka et al. [26] proposed HPA+, which provides proactive autoscaling to improve the quality of application services by exploiting multiple machine-learning forecast models. HPA+ applies the best prediction from these forecasting models as the custom metric if its accuracy proves sufficient; otherwise, HPA+ falls back to CPU metrics to make scaling decisions.…”
Section: Related Work (mentioning)
confidence: 99%
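The HPA+ decision described in the statement above can be sketched as follows: several forecasting models predict the next load, the model with the lowest recent error supplies the scaling signal if that error is small enough, and otherwise the scaler falls back to the plain CPU metric. This is a minimal illustration, assuming a simple mean-absolute-error criterion; the function names, data, and error threshold are hypothetical, not taken from the paper.

```python
# Hedged sketch: choose the best of several load forecasts, or fall back
# to the reactive CPU metric when no forecast is accurate enough.

def mean_abs_error(predictions, actuals):
    """Average absolute deviation of a model's recent predictions."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def choose_scaling_metric(model_forecasts, recent_actuals, cpu_metric, max_error=0.15):
    """model_forecasts maps model name -> (recent predictions, next prediction).

    Returns (source_name, metric_value) to feed the autoscaler.
    """
    best_name, best_err, best_next = None, float("inf"), None
    for name, (past_preds, next_pred) in model_forecasts.items():
        err = mean_abs_error(past_preds, recent_actuals)
        if err < best_err:
            best_name, best_err, best_next = name, err, next_pred
    if best_err <= max_error:
        return best_name, best_next   # proactive: scale on the forecast
    return "cpu", cpu_metric          # reactive fallback on CPU utilization

# Illustrative data: two forecasters validated against the last three samples.
models = {
    "arima": ([0.52, 0.61, 0.58], 0.64),
    "lstm":  ([0.50, 0.60, 0.59], 0.63),
}
actuals = [0.51, 0.60, 0.58]
print(choose_scaling_metric(models, actuals, cpu_metric=0.55))
```

Tightening `max_error` makes the fallback branch fire, which mirrors the paper's description of reverting to CPU metrics when the forecasts are not trustworthy.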
“…HPA+ applies the best prediction from these forecasting models as the custom metric if its accuracy proves sufficient; otherwise, HPA+ falls back to CPU metrics to make scaling decisions. However, despite efforts such as mixing vertical and horizontal scaling mechanisms [24] and applying machine learning to KHPA [25], [26], the quality of application services can degrade when the network delay between worker nodes in an edge computing environment is high.…”
Section: Related Work (mentioning)
confidence: 99%
“…The integration of placement methods with the auto-scaling capabilities of widely used platforms, e.g., Kubernetes [195], [196], or with the reliability measures [166] necessary in an edge infrastructure prone to errors and downtimes, might be a challenging research avenue. Indeed, the optimization problem becomes highly complex once the dynamics of the deployed services, and the actions the platform takes in turn, are taken into account.…”
Section: Temporal Placement Policies for Auto-scaling Edge Applications (mentioning)
confidence: 99%
“…First, the MLFO solves the PDL using the constraints, resulting in a mapping between the ML functions and the datacenters, and the connectivity and the deployment plan are computed. A list of iterations is generated that includes the communication of the MLFO with the VIO (e.g., Kubernetes) for the deployment of the ML functions (e.g., encapsulated into containers [5]), and with the SDN controller for managing the connectivity among the ML functions. The iterations include: i) the namespace creation (3); ii) the configuration of an image repository storing the different computing images that are retrieved when a new ML function instance is deployed (4); iii) the configuration of the ML pipeline network, which entails creating the VLAN (5) and pairing it …

[Figure: example JSON messages from the workflow, including an "ovs_network" request for namespace "mla2-v1" (name "ovs-vlan-600", VLAN 600, subnet 192.168.30.72/29, gateway 192.168.30.73), a "yamlfileconfig" request ("servers.yaml", push-url http://192.168.30.74/reg), and a "vlantotunnel" request mapping VLAN 600 to tunnel 200 in datacenter "Metro DC-1".]…”
Section: Proposed Architecture and Workflows (mentioning)
confidence: 99%
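The messages the MLFO exchanges with the VIO (Kubernetes) and the SDN controller in the workflow above can be illustrated by building the request payloads as plain dictionaries. The field names mirror the JSON fragments quoted from the cited figure; the builder functions and the Kubernetes-manifest shape for step (3) are illustrative assumptions, not a published API of the MLFO.

```python
# Hedged sketch of two workflow messages: a namespace-creation manifest for
# the VIO (step 3) and an OVS VLAN network request for the SDN controller
# (step 5). Values are copied from the figure's JSON fragments.

def namespace_request(namespace):
    """Step (3): standard Kubernetes Namespace manifest sent to the VIO."""
    return {"apiVersion": "v1", "kind": "Namespace",
            "metadata": {"name": namespace}}

def ovs_network_request(namespace, name, vlan, subnet, gw):
    """Step (5): pipeline-VLAN creation request for the SDN controller."""
    return {"namespace": namespace, "type": "ovs_network",
            "renderdata": {"name": name, "vlan": vlan,
                           "subnet": subnet, "gw": gw}}

msgs = [
    namespace_request("mla2-v1"),
    ovs_network_request("mla2-v1", "ovs-vlan-600", 600,
                        "192.168.30.72/29", "192.168.30.73"),
]
for m in msgs:
    print(m)
```

Keeping each step as a self-describing payload ("type" plus "renderdata") matches the figure's pattern, where the same envelope carries namespace, network, and tunnel-mapping requests.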