[1992] Digest of Papers. FTCS-22: The Twenty-Second International Symposium on Fault-Tolerant Computing
DOI: 10.1109/ftcs.1992.243581

Cited by 34 publications (40 citation statements)
References 8 publications
“…Following previous work [6], [7], [9], it is assumed that the computation is represented by a directed acyclic graph (dag), where the nodes represent computational operations and the edges represent data flow. Such a dataflow graph is obtained from a high-level description of the computation.…”
Section: The System Model
confidence: 99%
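The dataflow model quoted above can be made concrete with a small sketch: the computation is a directed acyclic graph whose nodes are operations and whose edges carry values, evaluated in topological order. All names here (`evaluate`, the example graph) are illustrative, not from the cited papers.

```python
# Sketch of a dataflow dag: nodes = operations, edges = data flow.
# Evaluation simply visits nodes in a topological order, so every
# operand is computed before the operation that consumes it.
from graphlib import TopologicalSorter

def evaluate(ops, deps, inputs):
    """ops: node -> function; deps: node -> list of predecessor nodes;
    inputs: values for the primary-input nodes."""
    values = dict(inputs)
    for node in TopologicalSorter(deps).static_order():
        if node in values:                 # primary input, already known
            continue
        args = [values[p] for p in deps[node]]
        values[node] = ops[node](*args)
    return values

# Example dag: c = a + b, then d = c * c
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c", "c"]}
ops = {"c": lambda x, y: x + y, "d": lambda x, y: x * y}
values = evaluate(ops, deps, {"a": 2, "b": 3})  # c = 5, d = 25
```

Such a graph is exactly what a high-level synthesis tool extracts from a behavioral description before scheduling and binding.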
“…The rollback and retry approach was introduced in the context of reliable software in [3]. Several issues that arise in the synthesis of ASIC's that recover from transient faults through this approach are addressed in [1], [6], [7], [9]- [11].…”
Section: Introduction
confidence: 99%
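The rollback-and-retry scheme referenced above can be sketched in a few lines: state is checkpointed before each step, and if a fault check fails, execution rolls back to the checkpoint and re-runs the step. This is a hedged illustration of the general idea only; the helper names (`run_with_retry`, `check`) are hypothetical and not from [1], [3], or [6]-[11].

```python
# Sketch of rollback and retry: checkpoint state before each step;
# on a detected (transient) fault, restore the checkpoint and re-run.
import copy

def run_with_retry(steps, state, check, max_retries=3):
    for step in steps:
        checkpoint = copy.deepcopy(state)      # save state before the step
        for _attempt in range(max_retries + 1):
            state = step(copy.deepcopy(checkpoint))
            if check(state):                   # result looks fault-free
                break                          # commit and move on
            # fault detected: loop re-runs the step from the checkpoint
        else:
            raise RuntimeError("fault persisted across all retries")
    return state

result = run_with_retry(
    steps=[lambda s: {**s, "x": s["x"] + 1}],
    state={"x": 0},
    check=lambda s: s["x"] == 1,
)
```

A transient fault is assumed to disappear on re-execution, which is why retrying from the last checkpoint, rather than restarting the whole computation, suffices.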
“…Recently, efforts have been made to incorporate new design constraints into high-level synthesis such as fault-tolerance [7,8] and testability. Testability constraints have been included into high-level synthesis in [10], wherein synthesis is performed to reduce sequential depth between registers and primary I/O pins.…”
Section: Previous Work
confidence: 99%
“…[15] developed high-level synthesis algorithms targeting self-recovering data paths. Duplication and comparison of results at checkpoints was used in [14], while duplication and comparison of results as soon as they become available was used in [15]. Although these algorithms reduce the comparison overhead, they do not reduce the almost 100% hardware overhead of duplication.…”
Section: Introduction
confidence: 99%
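The two detection styles contrasted in the statement above (comparison deferred to a checkpoint in [14] versus comparison as soon as each result is available in [15]) can be sketched side by side. Function names and the toy operation list are illustrative assumptions, not the algorithms of [14] or [15]; note that either way the operations run twice, which is the near-100% duplication overhead the statement mentions.

```python
# Duplication with two comparison policies. Each operation is executed
# twice; the policies differ only in *when* the two copies are compared.

def compare_at_checkpoint(ops, x):
    """Run both copies to the checkpoint, then compare once."""
    a = b = x
    for op in ops:
        a, b = op(a), op(b)
    if a != b:
        raise RuntimeError("mismatch detected at checkpoint")
    return a

def compare_immediately(ops, x):
    """Compare the duplicated results right after every operation."""
    v = x
    for op in ops:
        r1, r2 = op(v), op(v)
        if r1 != r2:
            raise RuntimeError("mismatch detected after operation")
        v = r1
    return v

ops = [lambda v: v + 1, lambda v: v * 2]
out1 = compare_at_checkpoint(ops, 3)   # (3+1)*2 = 8
out2 = compare_immediately(ops, 3)     # same result, earlier detection
```

Immediate comparison shortens detection latency at the cost of more comparators, while checkpoint comparison amortizes the comparison hardware; neither reduces the duplicated datapath itself.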