2022
DOI: 10.1109/tsc.2022.3195071

Multi-Dimensional Certification of Modern Distributed Systems

Abstract: Machine Learning (ML) is increasingly used to drive the operation of complex distributed systems deployed on the cloud-edge continuum enabled by 5G. Correspondingly, distributed systems' behavior is becoming more non-deterministic in nature. This evolution of distributed systems requires the definition of new assurance approaches for the verification of non-functional properties. Certification, the most popular assurance technique for system and software verification, is not immediately applicable to systems w…

Cited by 9 publications (9 citation statements)
References 52 publications
“…A number of initiatives have started to standardize, audit, and certify algorithmic bias and fairness (Szczekocka et al, 2022), such as the IEEE P7003™ Standard on Algorithmic Bias Considerations, the IEEE Ethics Certification Program for Autonomous and Intelligent Systems, the ISO/IEC TR 24027:2021 - Bias in AI systems and AI aided decision making, and the NIST AI Risk Management Framework. Challenges of certification schemes are discussed in Anisetti et al (2023). Moreover, very few works attempt at investigating the practical applicability of fairness in AI (Madaio et al, 2022; Makhlouf et al, 2021b; Beutel et al, 2019), whilst several external audits of AI-based systems have been conducted (Koshiyama et al, 2021), sometimes with extensive media coverage (Camilleri et al, 2023).…”
Section: Fair-AI Methods and Resources
confidence: 99%
“…They do not apply to data-intensive applications whose interactions are driven by (dynamic) non-functional requirements (e.g., [3]) referred also to data. In our reference example, trust must be negotiated on the basis of the geographical location of applications and data, to comply with data protection regulations (e.g., data of European citizens shall be processed in the EU or countries for which there exists an adequacy decision).…”
Section: Trust Management in Data Economy: Limitations and Motivations
confidence: 99%
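To make the location-based compliance check in the excerpt above concrete, the following is a minimal sketch; the adequacy list, country subsets, profile fields, and function names are illustrative assumptions, not the cited paper's implementation.

```python
# Hypothetical sketch: trust negotiation gated on geographical location,
# mirroring the constraint quoted above (data of EU citizens processed in
# the EU or in a country with an adequacy decision). Names are illustrative.

# Illustrative, incomplete country lists (assumptions for the example).
ADEQUACY_COUNTRIES = {"CH", "JP", "NZ", "CA", "KR", "UK"}
EU_COUNTRIES = {"DE", "FR", "IT", "ES", "NL", "IE", "SE", "PL"}

def location_compliant(data_origin: str, processing_location: str) -> bool:
    """Return True if EU-originating data is processed in the EU or in a
    country covered by an adequacy decision."""
    if data_origin not in EU_COUNTRIES:
        return True  # the constraint only applies to EU citizens' data here
    return processing_location in EU_COUNTRIES | ADEQUACY_COUNTRIES

def negotiate_trust(app_profile: dict, data_profile: dict) -> bool:
    """Toy negotiation step: trust is granted only if the location
    constraint (one of the dynamic non-functional requirements) holds."""
    return location_compliant(data_profile["origin"], app_profile["deployed_in"])

# Example: an application deployed outside the EU processing EU data is rejected.
print(negotiate_trust({"deployed_in": "US"}, {"origin": "FR"}))  # False
print(negotiate_trust({"deployed_in": "IE"}, {"origin": "FR"}))  # True
```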
“…Service profiles should rely on up-to-date information retrieved according to assurance techniques [5]. The integration with such techniques (e.g., certification [3]) must consider how to i) model (historical) service profiles, ii) compose and derive information from service profiles, iii) safely manage service profiles (C2.2, C2.3). The latter ensures that service profiles are stored and adequately protected for future negotiations, for instance, building on selective release [7], and certificate composition [6].…”
Section: Research Roadmap
confidence: 99%
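As a rough illustration of points i)-iii) in the excerpt above, the sketch below models a historical service profile with a derived current view and a selective-release view; all class and field names are assumptions made for illustration, not the cited roadmap's design.

```python
# Illustrative sketch of a (historical) service profile with selective release.
# Field names and the derivation rule are hypothetical, not from the papers.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProfileEntry:
    timestamp: str        # when the evidence was collected
    property_name: str    # e.g., "availability", "confidentiality"
    value: str            # value supported by evidence at that time
    certificate_id: str   # link to the supporting certificate

@dataclass
class ServiceProfile:
    service_id: str
    history: List[ProfileEntry] = field(default_factory=list)

    def add_entry(self, entry: ProfileEntry) -> None:
        """i) keep historical evidence instead of overwriting it."""
        self.history.append(entry)

    def current_view(self) -> Dict[str, str]:
        """ii) derive the latest value per property from the history."""
        view: Dict[str, str] = {}
        for entry in self.history:  # entries are appended in time order
            view[entry.property_name] = entry.value
        return view

    def selective_release(self, allowed: List[str]) -> Dict[str, str]:
        """iii) release only the properties a given negotiation requires."""
        return {k: v for k, v in self.current_view().items() if k in allowed}
```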
“…In order to support QoS and constraint-aware integration, the CSP exposes to the Deployment Agents suitable hooks to relevant resource (e.g., resource manager for deployment) or services (e.g., services offering security features) constituting their capabilities (R5). For instance, the CSP can offer a hook to access non-functional certificates (i.e., using certification scheme in [17], [18]) proving some capabilities or to invoke authentication services to support authentication requirements.…”
Section: B. Deployment Architecture
confidence: 99%
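A minimal sketch of the hook idea described above, assuming a hypothetical CSP interface queried by a Deployment Agent; the method names and the certificate lookup are illustrative only and not the cited architecture's API.

```python
# Hypothetical sketch of a CSP exposing hooks to Deployment Agents:
# one hook returns non-functional certificates proving a capability,
# another invokes an authentication service. Interfaces are illustrative.
from typing import List, Optional, Protocol

class CSPHooks(Protocol):
    def get_certificates(self, capability: str) -> List[dict]:
        """Return non-functional certificates proving the given capability."""
        ...

    def authenticate(self, credentials: dict) -> Optional[str]:
        """Invoke the CSP's authentication service; return a token or None."""
        ...

class DeploymentAgent:
    def __init__(self, csp: CSPHooks) -> None:
        self.csp = csp

    def can_deploy(self, required_capability: str, credentials: dict) -> bool:
        """Toy QoS/constraint-aware check: deploy only if the CSP proves the
        required capability via certificates and authentication succeeds."""
        certs = self.csp.get_certificates(required_capability)
        token = self.csp.authenticate(credentials)
        return bool(certs) and token is not None
```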