2021
DOI: 10.1007/s11219-021-09557-y

Construction of a quality model for machine learning systems

Abstract: Nowadays, systems containing components based on machine learning (ML) methods are becoming more widespread. In order to ensure the intended behavior of a software system, there are standards that define necessary qualities of the system and its components (such as ISO/IEC 25010). Due to the different nature of ML, we have to re-interpret existing qualities for ML systems or add new ones (such as trustworthiness). We have to be very precise about which quality property is relevant for which entity of interest …

Cited by 40 publications (24 citation statements)
References 40 publications
“…The framework defines these components for ML applications: dataset, algorithm, ML component, and system, and, for each of them, proposes an argumentation approach to assess quality. Finally, Siebert et al [64] proposed a formal modelling definition for quality requirements in ML systems. They start from the process definition in [45] and build a meta-model for the description of quality requirements.…”
Section: Related Work (mentioning)
confidence: 99%
“…We can treat such constituent properties of complex system theories as standards for their evaluation because the absence of these properties will lower the quality of the theory. Alternatively, with big data-related theories as (machine-)learning systems, we must consider other standards for evaluating theories, for example, interpretability (the degree to which the model can be interpreted by humans), robustness (the ability of a model to handle missing data and still make good predictions), or effectiveness (the degree to which the algorithm detects context changes) (Siebert et al 2022). When considering computational models, standard-based evaluation activities must concern, for instance, model verification (internal consistency of the model), input validation (representation of the target in a model), process validation (representation of real-world mechanisms within a model), and output validation (fit of the model or its ability to predict future states of a target) (Gräbner 2018).…”
Section: Big Data and Standards for Evaluating Theories (mentioning)
confidence: 99%
“…According to (Siebert et al, 2021), a machine learning model is a mathematical model or piece of software that an engineer or data scientist makes “intelligent” by training it with input data. As such, the quality of the model depends on the quality of the training data, so much so that, if we provide false information or unprocessed data, the trained model will give wrong answers.…”
Section: System Overview (mentioning)
confidence: 99%
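The "quality in, quality out" point above can be illustrated with a minimal sketch: the same learning procedure (a 1-nearest-neighbour classifier, chosen here purely for illustration; the data and labels are invented, not from the cited paper) gives wrong answers once its training labels are corrupted.

```python
def nn_predict(train_x, train_y, x):
    # 1-nearest-neighbour: return the label of the closest training point
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

train_x = [0.0, 1.0, 2.0, 3.0]
clean_y = [0, 0, 1, 1]       # correct labels
noisy_y = [0, 1, 0, 1]       # two labels flipped: corrupted training data

query = 2.1
print(nn_predict(train_x, clean_y, query))  # 1 (correct)
print(nn_predict(train_x, noisy_y, query))  # 0 (wrong, caused by bad labels)
```

The model code is identical in both runs; only the training-data quality differs, which is exactly the dependency the quoted passage describes.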
“…It does this by computing performance metrics (such as precision, recall, etc. for classification tasks), performing sensitivity analysis, or testing against adversarial examples (Siebert et al, 2021).…”
Section: System Overview (mentioning)
confidence: 99%
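The evaluation step mentioned in the quote above can be sketched in a few lines: computing precision and recall for a binary classification task from ground-truth labels and model predictions. The label lists below are illustrative examples, not data from the cited work.

```python
def precision_recall(y_true, y_pred, positive=1):
    # Count true positives, false positives, and false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many are real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of real positives, how many were found
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

Sensitivity analysis and adversarial testing would build on the same loop: perturb the inputs, re-run the model, and watch how these metrics change.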