COMPASS '91, Proceedings of the Sixth Annual Conference on Computer Assurance
DOI: 10.1109/cmpass.1991.161051
Design strategy for a formally verified reliable computing platform

Abstract: This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate r…

Cited by 13 publications (4 citation statements)
References 8 publications
“…Since today's technology cannot support the manufacturing of electronic devices with failure rates low enough to meet the reliability requirements, the reliability of an ultra-dependable system must be higher than the reliability of each of its node computers. This can only be achieved by utilizing fault-tolerant strategies that enable the continued operation of the system in the presence of node computer failures (Butler et al, 1991). The integrated architecture builds on top of a time-triggered architecture, which offers a consistent distributed computing base (Poledna et al, 2002) with a consistent distributed state induced by the sparse time.…”
Section: Improved Dependability
confidence: 99%
“…A Fault Containment Region (FCR) is defined as a subsystem that operates correctly regardless of any arbitrary logical or electrical fault outside the region [46]. The justification for building ultra-reliable systems from replicated resources rests on an assumption of failure independence among redundant units [2]. The independence of FCRs can be compromised by shared physical resources (e.g., power supply, timing source), external faults (e.g., Electromagnetic Interference (EMI), spatial proximity) and design.…”
Section: Fault-Containment Regions
confidence: 99%
“…In these ultra-dependable applications, a maximum failure rate of 10⁻⁹ critical failures per hour is demanded [1][p. 10]. This can only be achieved by utilizing fault-tolerant strategies that enable the continued operation of the system in the presence of component failures [2].…”
Section: Introduction
confidence: 99%
“…Since ECU failure rates are in the order of 10⁻⁵ to 10⁻⁶, ultra-dependable applications require the system as a whole to be more reliable than any one of its ECUs. This can only be achieved by utilizing fault-tolerant strategies that enable the continued operation of the system in the presence of ECU failures [3].…”
Section: Dependability
confidence: 99%
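The arithmetic behind these citing statements can be sketched with a classic triple-modular-redundancy (TMR) model. This is an illustrative example only, not the architecture from the cited paper: it assumes independent node failures and a perfect majority voter, and the per-node failure probability of 10⁻⁵ per hour is taken from the ECU figures quoted above.

```python
# Illustrative sketch (not from the source paper): why a replicated
# system can be more reliable than any single node, under the classic
# assumptions of independent failures and a perfect 2-of-3 majority voter.

def tmr_failure_prob(p: float) -> float:
    """Probability that a 2-of-3 majority (TMR) system fails, given an
    independent per-node failure probability p.
    The system fails only if at least two of the three nodes fail:
    exactly two fail (3 * p^2 * (1-p)) or all three fail (p^3)."""
    return 3 * p**2 * (1 - p) + p**3

# Per-node failure probability over one hour, e.g. an ECU at ~1e-5/h
p_node = 1e-5
p_sys = tmr_failure_prob(p_node)
print(f"per-node: {p_node:.0e}/h, TMR system: {p_sys:.1e}/h")
```

With p = 10⁻⁵ the system failure probability is roughly 3·10⁻¹⁰ per hour, i.e. orders of magnitude better than any single node and within reach of the 10⁻⁹ ultra-dependability target — but only while the independence assumption holds, which is exactly why the excerpts above stress fault containment regions and shared-resource faults.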