2010 IEEE 21st International Symposium on Software Reliability Engineering (ISSRE 2010)
DOI: 10.1109/issre.2010.10
Assessing Asymmetric Fault-Tolerant Software

Abstract: The most popular forms of fault tolerance against design faults use "asymmetric" architectures in which a "primary" part performs the computation and a "secondary" part is in charge of detecting errors and performing some kind of error processing and recovery. In contrast, the most studied forms of software fault tolerance are "symmetric" ones, e.g. N-version programming. The latter…
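To make the contrast concrete, here is a minimal sketch, not taken from the paper, of the kind of asymmetric primary/secondary arrangement the abstract describes: a primary component performs the computation, and a simpler secondary only checks the result and triggers recovery. All names (primary_sqrt, checker, fallback_sqrt) are illustrative assumptions.

```python
import math

def primary_sqrt(x: float) -> float:
    """Primary: full computation (may contain design faults)."""
    return math.exp(0.5 * math.log(x)) if x > 0 else 0.0

def checker(x: float, y: float, tol: float = 1e-9) -> bool:
    """Secondary: cheap acceptance test; it only *detects* errors."""
    return abs(y * y - x) <= tol * max(1.0, x)

def fallback_sqrt(x: float) -> float:
    """Recovery action used when the checker rejects the primary's answer."""
    return x ** 0.5

def safe_sqrt(x: float) -> float:
    y = primary_sqrt(x)
    if checker(x, y):
        return y             # primary result accepted
    return fallback_sqrt(x)  # error processing / recovery path
```

The asymmetry is that the checker and fallback are deliberately much simpler than the primary, so their own failure behaviour can be argued about separately.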

Cited by 8 publications (13 citation statements)
References 19 publications (34 reference statements)
“…Indeed, it is likely to affect their situation awareness (and thus their ability to detect potentially dangerous situations) and/or their trust in the AV (and thus their readiness to believe that their intervention is needed to resolve that situation). Also, the probability of a safety subsystem (like a human driver) taking successful action depends on the probability distribution of the demands on it created by the ML-based functions' failures [36,46,47], which will vary as the ML-based system evolves.…”
Section: Potential Fallacies In Using Disengagement Data And/or Extra (mentioning)
confidence: 99%
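As an illustration of the point about demand distributions (a hedged sketch, not code from the cited paper; demand types and numbers are invented): the safety subsystem's overall failure probability is a mixture over demand types, so it shifts whenever the ML core's failure profile shifts.

```python
def p_safety_failure(demand_mix, per_type_failure_prob):
    """demand_mix: {demand_type: probability}, summing to 1.
    per_type_failure_prob: P(safety subsystem fails | that demand type)."""
    return sum(p * per_type_failure_prob[t] for t, p in demand_mix.items())

# Same safety subsystem, two different demand profiles from the ML core:
probs = {"cut-in": 0.05, "sensor_dropout": 0.5}
print(p_safety_failure({"cut-in": 0.9, "sensor_dropout": 0.1}, probs))  # 0.095
print(p_safety_failure({"cut-in": 0.1, "sensor_dropout": 0.9}, probs))  # 0.455
```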
“…), and use "failure rate" both in its technical meaning as the parameter (dpm) of, say, a Poisson process, and for the probability of failure per mile in a Bernoulli model (pfm, pcm). 5 "A first approximation" because the evolution of the ML-based core changes the set of failures to be tolerated by the safety subsystem (cf [22]).…”
Section: Operational Testing and Failure Processes (mentioning)
confidence: 99%
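A quick numerical check of the two failure models mentioned (assumed values, not from the citing paper): for a small per-mile failure probability, the Bernoulli-per-mile and Poisson-process descriptions give nearly identical survival probabilities.

```python
import math

p = 1e-7          # probability of failure per mile (pfm), assumed value
n = 1_000_000     # miles of operation

bernoulli_survival = (1 - p) ** n        # independent trials, one per mile
poisson_survival   = math.exp(-p * n)    # Poisson process, rate p per mile

print(bernoulli_survival, poisson_survival)  # both approximately 0.9048
```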
“…If a system copes well in the presence of one type of disturbances but less well with another type, changing the relative weights of these two types of disturbances will change the degree of dependability that will be observed. There will not even be a single indicator of "stressfulness" of an environment, so that we can say that if a system exhibited - say - 99% availability under the benchmark stress, it will exhibit at least 99% availability in any "less stressful" environment [30]. Likewise, we won't be able to trust that if system A is more dependable (from the viewpoint of interest: e.g., more reliable) than system B in the benchmark environment, it will still be more dependable in another environment.…”
Section: The Difficulty Of Extrapolation (mentioning)
confidence: 99%
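The extrapolation difficulty can be shown with a small hedged example (invented systems and numbers): observed availability is a weighted average over disturbance types, and changing the weights can reverse the ranking of two systems.

```python
def availability(mix, per_type_availability):
    """mix: {disturbance_type: weight, summing to 1}."""
    return sum(w * per_type_availability[t] for t, w in mix.items())

avail_A = {"power_glitch": 0.999, "network_storm": 0.95}
avail_B = {"power_glitch": 0.98,  "network_storm": 0.99}

benchmark = {"power_glitch": 0.9, "network_storm": 0.1}
field     = {"power_glitch": 0.2, "network_storm": 0.8}

print(availability(benchmark, avail_A), availability(benchmark, avail_B))
# 0.9941 vs 0.981: A looks better under the benchmark mix
print(availability(field, avail_A), availability(field, avail_B))
# 0.9598 vs 0.988: B is better once the disturbance mix changes
```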
“…So, all "coverage" measures have to be defined with respect to some stated type, or mix, of faults or disturbances; and the difficulties of extrapolation that characterised measures of dependability under stress also affect, in principle, measures of coverage. In particular, the desirability but also the limits of "benchmark" scenarios apply when estimating coverage factors just as when measuring a dependability measure [30].…”
Section: Measures Of "Coverage Factors" (mentioning)
confidence: 99%
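In the same spirit, a short sketch (invented detector and fault mixes) of why a coverage factor is only defined relative to an assumed mix of faults.

```python
def coverage(fault_mix, per_fault_coverage):
    """fault_mix: {fault_type: probability}; per_fault_coverage: P(detected | fault_type)."""
    return sum(w * per_fault_coverage[f] for f, w in fault_mix.items())

detector = {"crash": 0.999, "value_error": 0.7, "timing": 0.4}

print(coverage({"crash": 0.8, "value_error": 0.15, "timing": 0.05}, detector))  # ~0.92
print(coverage({"crash": 0.2, "value_error": 0.3, "timing": 0.5}, detector))    # ~0.61
```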