2021
DOI: 10.1109/access.2021.3071417
Empirical Evaluation of a Method for Monitoring Cloud Services Based on Models at Runtime

Abstract: Cloud computing is being adopted by commercial and governmental organizations driven by the need to reduce the operational cost of their information technology resources and search for a scalable and flexible way to provide and release their software services. In this computing model, the Quality of Services (QoS) is agreed between service providers and their customers through Service Level Agreements (SLA). There is thus a need for systematic approaches with which to assess the quality of cloud services and t…

Cited by 10 publications (9 citation statements)
References 32 publications (72 reference statements)
“…They focus on component-based programming rather than on a general programming language. Cedillo et al. [18] described a generic method to monitor the satisfaction of non-functional requirements in cloud environments using models at runtime and SLAs. They proposed a middleware that interacts with services.…”
Section: Discussion
confidence: 99%
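The middleware described in the excerpt above checks whether running services satisfy non-functional requirements agreed in SLAs. A minimal sketch of that idea, assuming hypothetical names (`SLATerm`, `evaluate_sla`) and not the authors' actual implementation:

```python
# Illustrative sketch: a middleware-style check comparing measured QoS
# metrics against SLA thresholds at runtime. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class SLATerm:
    metric: str               # e.g. "response_time_ms"
    threshold: float          # limit agreed in the SLA
    lower_is_better: bool = True

def evaluate_sla(terms, measurements):
    """Return a dict mapping each SLA metric to True (satisfied) or False."""
    results = {}
    for term in terms:
        value = measurements[term.metric]
        if term.lower_is_better:
            results[term.metric] = value <= term.threshold
        else:
            results[term.metric] = value >= term.threshold
    return results

terms = [SLATerm("response_time_ms", 200.0),
         SLATerm("availability_pct", 99.9, lower_is_better=False)]
measured = {"response_time_ms": 150.0, "availability_pct": 99.95}
print(evaluate_sla(terms, measured))
# → {'response_time_ms': True, 'availability_pct': True}
```

In practice such a check would be driven by a runtime model of the service rather than hard-coded terms, which is what makes the approach generic across services.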
“…The necessity of monitoring systems through runtime models is presented in many works [3,5-8]. Recent works, such as [6,17,18], address the problem of reconfiguring and changing the behavior of systems at runtime. However, to date, little research has focused on providing generic tools that are independent of the application domain.…”
Section: Antecedents
confidence: 99%
“…In this way the architecture drivers specify not only the “what” but also the “how well” (i.e., with what degree of quality), providing clear, unambiguous input for the selection of an architecture solution for each driver. Some recent works highlight the need to evaluate scenarios and quality attribute properties at runtime, such as the approach described by Cedillo et al. [30], who suggest the use of models for monitoring and evaluating quality properties of cloud services at runtime. The recent work of Sobhy et al. [29] uses time series forecasting, based on live data collected, to forecast the future performance of a system by using simulated instances of the architecture.…”
Section: State-of-the-art and State-of-the-practice
confidence: 99%
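The excerpt above mentions forecasting future system performance from live data. As a toy illustration of that idea (a deliberately simple least-squares trend rather than the time-series models those works actually use; `linear_forecast` is a hypothetical name):

```python
# Illustrative sketch: extrapolating a performance metric from live samples
# with a least-squares linear trend. A stand-in for real forecasting models.
def linear_forecast(samples, steps_ahead):
    """Fit y = a*t + b to samples indexed t = 0..n-1, then extrapolate."""
    n = len(samples)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, samples))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var                      # slope of the fitted trend
    b = mean_y - a * mean_t            # intercept
    return a * (n - 1 + steps_ahead) + b

latencies = [100, 110, 120, 130]       # toy data with a perfectly linear trend
print(linear_forecast(latencies, 2))   # → 150.0
```

A production monitor would replace this with a proper time-series model (e.g. ARIMA-style forecasting), but the control flow is the same: collect live measurements, fit, and act on the predicted value before the SLA is violated.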
“…For the evaluation of Monitor-IoT, a quasi-experiment was used as an empirical strategy [38]. It included the participation of undergraduate students in the final semesters of the Systems Engineering programs at the University of Azuay and the University of Cuenca.…”
Section: Empirical Evaluation of Monitor-IoT
confidence: 99%