This paper presents a new software-implemented data error detection technique called Full Duplication and Selective Comparison. Our technique combines ideas from existing techniques to increase the fault detection ratio while decreasing the imposed code size and execution time overhead. As the name suggests, we duplicate the entire code base but place comparison instructions only in critical basic blocks, i.e., blocks with two or more incoming edges. We evaluate our technique by implementing it in several case studies and performing fault injection experiments, and we compare the results against three established techniques: Error Detection by Diverse Data and Duplicated Instructions, Critical Block Duplication, and Software Implemented Fault Tolerance. The results show an average increase of 20.5% in fault detection ratio and average decreases in code size and execution time overhead of 12.6% and 0.5%, respectively.
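The selection step described above can be sketched in a few lines. The following is an illustrative Python fragment, not code from the paper: given a control-flow graph as an adjacency map, it returns the critical basic blocks (those with two or more incoming edges) where the technique would insert comparison instructions. The CFG representation and block names are assumptions made for the example.

```python
def critical_blocks(cfg):
    """Return the set of blocks with in-degree >= 2.

    cfg maps each basic block to the list of its successor blocks.
    Blocks with two or more incoming edges are "critical": they are
    join points where comparison instructions would be placed.
    """
    in_degree = {block: 0 for block in cfg}
    for successors in cfg.values():
        for succ in successors:
            in_degree[succ] = in_degree.get(succ, 0) + 1
    return {block for block, deg in in_degree.items() if deg >= 2}

# Example: a diamond-shaped CFG; only the join block B4 is critical.
example_cfg = {
    "B1": ["B2", "B3"],
    "B2": ["B4"],
    "B3": ["B4"],
    "B4": [],
}
print(critical_blocks(example_cfg))  # {'B4'}
```

In a diamond-shaped graph like the one above, only the join point has multiple predecessors, so comparisons are concentrated where duplicated control paths reconverge.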
In this paper, we present Instruction-Level Duplication and Comparison (ILDC), a new software approach for data error detection. We implemented ILDC in five case studies and, to validate the proposed technique, measured its fault detection ratio and execution time overhead. We then compare it against two existing techniques: overhead reduction (VAR3+) and software-implemented fault tolerance (SWIFT). The results show that ILDC detects more errors than VAR3+ and SWIFT at a lower overhead.
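The core idea of instruction-level duplication and comparison can be illustrated with a minimal sketch, assuming a source-level transformation: each computation is performed twice on independent copies of the operands, and the two results are compared before the value is committed. The function and exception names are hypothetical, not taken from the paper.

```python
class DataErrorDetected(Exception):
    """Raised when duplicated computations disagree."""


def duplicated_add(a, b):
    # Primary and shadow copies of the operands; a transient fault
    # that corrupts one copy will cause the results to differ.
    a1, b1 = a, b
    a2, b2 = a, b
    r1 = a1 + b1  # original instruction
    r2 = a2 + b2  # duplicated instruction
    if r1 != r2:  # comparison inserted by the transformation
        raise DataErrorDetected("mismatch between duplicated results")
    return r1


print(duplicated_add(2, 3))  # 5
```

In a real implementation the duplication is applied mechanically to every instruction (typically by the compiler, using separate registers for the shadow computation), with comparisons placed before instructions whose effects escape the duplicated sphere, such as stores and branches.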