IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2012) 2012
DOI: 10.1109/dsn.2012.6263960

Low-cost program-level detectors for reducing silent data corruptions

Cited by 102 publications (58 citation statements); References 22 publications
“…Cost vs. SDC reduction: We note that the objective of estimating SDCs with metrics is to identify an optimal set of SDC-targeted error detectors. We therefore employ a 0/1 knapsack algorithm to find an optimal set of detectors that will provide the largest SDC reduction at a given cost; we assume duplication for detectors and charge one instruction as the cost of duplicating and comparing results for one instruction on average (similar to [11]). Thus, we obtain an SDC reduction vs. cost graph for each application using the known SDC count for each instruction from Relyzer.…”
Section: Metric Evaluation (mentioning)
confidence: 99%
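The excerpt above describes selecting detectors with a 0/1 knapsack over per-instruction SDC counts. A minimal sketch of that selection step follows; it is not the authors' implementation, and the SDC counts, costs, and budget values are illustrative stand-ins for the Relyzer-derived data the excerpt refers to. Each candidate detector protects one instruction, its "value" is that instruction's SDC count, and its "weight" is the duplicate-and-compare cost (roughly one instruction each, per the excerpt).

```python
def select_detectors(sdc_counts, costs, budget):
    """0/1 knapsack: maximize SDC reduction subject to a detector-cost budget.

    Returns (best SDC reduction, list of chosen detector indices).
    """
    n = len(sdc_counts)
    dp = [0] * (budget + 1)                 # dp[c] = best reduction with cost <= c
    took = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        w, v = costs[i], sdc_counts[i]
        for c in range(budget, w - 1, -1):  # downward scan keeps 0/1 semantics
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                took[i][c] = True
    selected, c = [], budget                # backtrack to recover the chosen set
    for i in range(n - 1, -1, -1):
        if took[i][c]:
            selected.append(i)
            c -= costs[i]
    return dp[budget], selected[::-1]

# Sweeping the budget traces the SDC-reduction-vs-cost curve the excerpt mentions.
sdc = [40, 10, 25, 5]    # hypothetical per-instruction SDC counts
cost = [1, 1, 2, 1]      # hypothetical detector costs (instructions)
for budget in range(6):
    print(budget, select_detectors(sdc, cost, budget))
```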
“…In order to pinpoint the sources of inaccuracy between the actual improvement rates that were determined using accurate flip-flop-level error injection vs. those published in the literature, we conducted error injection campaigns at other levels of abstraction (architecture register and program variable), on the same applications studied in [49].…”
Section: Resilience Library (mentioning)
confidence: 99%
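The "error injection campaigns at other levels of abstraction" mentioned above can be pictured with a toy program-variable-level campaign. The sketch below is only an illustration of the methodology, not the authors' infrastructure: it flips a single random bit in one program variable per run of a max-reduction kernel and classifies each outcome against a fault-free golden result.

```python
import random

def kernel(data, inject_at=None, bit=0):
    # Max-reduction; a flip is masked unless the corrupted element
    # ends up determining the final result.
    best = 0
    for i, x in enumerate(data):
        if i == inject_at:
            x ^= 1 << bit          # single-bit fault in a program variable
        if x > best:
            best = x
    return best

random.seed(0)
data = [random.randrange(1 << 16) for _ in range(256)]
golden = kernel(data)              # fault-free reference run

tally = {"masked": 0, "sdc": 0}
for _ in range(2000):              # one injection per run
    out = kernel(data, inject_at=random.randrange(len(data)),
                 bit=random.randrange(16))
    tally["sdc" if out != golden else "masked"] += 1
print(tally)
```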
“…However, we perform the final analysis by incorporating the input data used during evaluation into the training step, in order to give the technique the best possible benefit and to eliminate the occurrence of false positives. Checks for control variables (e.g., loop index, stack pointer, array address) are determined using application profiling and are manually added in the assembly code. In Table X, we break down the contribution to cost, improvement, and false positives resulting from assertions checking data variables [50] vs. those checking control variables [49]. Table XI demonstrates the importance of evaluating resilience techniques using accurate error injection (explained in [26]).…”
(mentioning)
confidence: 99%
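The control-variable checks described above (duplicated loop indices and the like, inserted at the assembly level in the cited work) follow a simple pattern that a high-level sketch can show. This is an assumed illustration of the idea, not the cited implementation: a shadow copy of the loop index is updated independently, and any divergence flags corruption of the control variable.

```python
def checked_sum(data):
    total = 0
    i = 0
    shadow_i = 0                   # redundant copy of the control variable
    while i < len(data):
        total += data[i]
        i += 1
        shadow_i += 1              # updated independently of i
        if i != shadow_i:          # detector: duplicated-index comparison
            raise RuntimeError("control-variable corruption detected")
    return total

print(checked_sum([1, 2, 3, 4]))   # prints 10 on a fault-free run
```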
“…Subsequent work [21] uses program invariants to detect SDCs, but these invariants are susceptible to false positives. The authors of [9], [10] insert program-level detectors at SDC-hot sites to detect SDCs. In this work, to reduce SDCs, we identify the sections of code that are most susceptible to faults and propose architecture-independent code transformations.…”
Section: Introduction (mentioning)
confidence: 99%
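The excerpt's caveat that invariant-based detectors [21] are prone to false positives is easy to see in a sketch. The range invariant below is a generic likely-invariant detector in the spirit of that line of work (the class and its training values are hypothetical): bounds learned during training runs are asserted at runtime, so any input outside the trained range fires the detector whether or not an actual error occurred.

```python
class RangeInvariant:
    """Likely invariant: a variable stays within its trained [lo, hi] range."""
    def __init__(self):
        self.lo, self.hi = float("inf"), float("-inf")

    def train(self, value):        # profiling phase on training inputs
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def check(self, value):        # deployed detector
        if not (self.lo <= value <= self.hi):
            raise RuntimeError("likely invariant violated: possible SDC")

inv = RangeInvariant()
for v in [3, 7, 5, 9]:             # training runs
    inv.train(v)
inv.check(6)                       # in range: passes
try:
    inv.check(42)                  # out of range: real SDC or false positive
except RuntimeError as err:
    print(err)
```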