Proceedings of the 51st Annual Design Automation Conference 2014
DOI: 10.1145/2593069.2593164
Multi-Objective Local-Search Optimization using Reliability Importance Measuring

Abstract: In recent years, reliability has become a major issue and objective during the design of embedded systems. Here, different techniques to increase reliability, such as hardware-/software-based redundancy or component hardening, are applied systematically during Design Space Exploration (DSE), aiming at achieving the highest reliability at the lowest possible cost. Existing approaches typically provide only reliability measures, e.g., failure rate or Mean-Time-To-Failure (MTTF), to the optimization engine, poorly guiding th…
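The abstract's core idea is that a bare system-level measure such as MTTF tells the optimizer little about *which* component to harden, whereas a reliability importance measure ranks components by their marginal effect on system reliability. A minimal sketch, assuming a simple series system and the classical Birnbaum importance (the partial derivative of system reliability with respect to a component's reliability); the component values are hypothetical:

```python
def series_reliability(rel):
    """Reliability of a series system: the product of component reliabilities."""
    p = 1.0
    for r in rel:
        p *= r
    return p

def birnbaum_importance(rel, i):
    """Birnbaum importance of component i in a series system:
    d(R_sys)/d(R_i), which here is the product of all *other*
    component reliabilities."""
    p = 1.0
    for j, r in enumerate(rel):
        if j != i:
            p *= r
    return p

# Hypothetical component reliabilities at some mission time t.
rel = [0.99, 0.95, 0.90]

# Rank components by importance: the least reliable component ends up
# most important, so a limited hardening budget should go there first.
ranking = sorted(range(len(rel)),
                 key=lambda i: birnbaum_importance(rel, i),
                 reverse=True)
# → [2, 1, 0]
```

Feeding such a ranking (rather than only the aggregate MTTF) to the optimization engine is what lets a local search decide which component to improve next.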

Cited by 10 publications (4 citation statements); references 14 publications.
“…Later, to improve the reliability of a system with limited budgets, we only need to improve the reliability of highly important components. In [5,28], we show this guides the DSE towards highly reliable, yet affordable implementations.…”
Section: Introduction
confidence: 84%
“…To our knowledge, this is the first work specialized for independent tasks in the design of fault-tolerant multicores, as all the previous works (Das et al. 2014; Gan et al. 2012; Khosravi et al. 2014; Pop et al. 2009; Stralen and Pimentel 2012) consider general task sets (e.g., there are data dependencies between tasks). As mentioned above, task mapping for general task sets is a well-known NP-hard problem.…”
Section: Problem Statement
confidence: 96%
“…Since different fault-tolerant techniques are usually characterized by different time and space overheads, there is an optimization trade-off in task hardening, i.e., determining one of the fault-tolerant techniques (e.g., DMR or TMR) for each task. This trade-off, together with the traditional optimization in task mapping, i.e., mapping each task to one of the cores, makes the design of fault-tolerant multi-cores very challenging (Das et al. 2014; Gan et al. 2012; Khosravi et al. 2014; Pop et al. 2009; Stralen and Pimentel 2012). It is notable that replication-based task hardening will introduce new tasks, i.e., replicas, into the system, and these replicas should be mapped as well.…”
Section: Introduction
confidence: 99%
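The statement above hinges on one mechanical detail: choosing DMR or TMR for a task multiplies it into replicas, and those replicas then enter the mapping problem as ordinary tasks. A minimal sketch of that expansion step, with hypothetical task names and decision labels (`"none"`, `"DMR"`, `"TMR"`):

```python
def harden(tasks, decisions):
    """Expand each task into replicas according to its hardening decision.
    decisions maps task name -> 'none' (1 copy), 'DMR' (2), or 'TMR' (3).
    The resulting replicas are plain tasks the mapper must also place."""
    copies = {"none": 1, "DMR": 2, "TMR": 3}
    expanded = []
    for t in tasks:
        n = copies[decisions[t]]
        expanded.extend(f"{t}/r{k}" for k in range(n))
    return expanded

harden(["t0", "t1"], {"t0": "TMR", "t1": "none"})
# → ['t0/r0', 't0/r1', 't0/r2', 't1/r0']
```

This is why hardening and mapping cannot be optimized independently: every hardening decision changes the task set that the mapping step must solve.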
“…Given all these degrees of freedom, in order to obtain optimal designs of fault-tolerant multiprocessor systems, a variety of DSE methods have been proposed over the last decade, especially the ones based on evolutionary algorithms (EAs) [6]-[9]. As a population-based metaheuristic optimization algorithm, EAs often provide good near-optimal solutions to many types of problems, but their high time complexity [22], [23] is a prohibiting factor in real-world applications. Besides, EAs usually do not involve any high-level problem-specific knowledge beyond that required in fitness evaluation [24], [25].…”
Section: Introduction
confidence: 99%
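To make the statement's trade-off concrete, the kind of EA being criticized can be sketched in a few lines: a (1+1) EA mapping independent tasks to cores to minimize makespan. It uses no problem-specific knowledge beyond the fitness function, which is exactly the limitation the citing authors point out. Execution times and parameters below are illustrative assumptions, not from any of the cited works:

```python
import random

def makespan(mapping, exec_time, n_cores):
    """Fitness: the load of the most heavily loaded core."""
    loads = [0.0] * n_cores
    for task, core in enumerate(mapping):
        loads[core] += exec_time[task]
    return max(loads)

def one_plus_one_ea(exec_time, n_cores, iters=2000, seed=0):
    """Minimal (1+1) EA: mutate one task's core assignment per step,
    keep the child if it is not worse than the parent."""
    rng = random.Random(seed)
    parent = [rng.randrange(n_cores) for _ in exec_time]
    best = makespan(parent, exec_time, n_cores)
    for _ in range(iters):
        child = parent[:]
        child[rng.randrange(len(child))] = rng.randrange(n_cores)
        cost = makespan(child, exec_time, n_cores)
        if cost <= best:
            parent, best = child, cost
    return parent, best

# Illustrative instance: five independent tasks, two cores.
# Total work is 12, so the best possible makespan is 6.
mapping, best = one_plus_one_ea([4, 3, 2, 2, 1], n_cores=2)
```

The cost of such black-box search, multiplied over large populations and many generations, is the "high time complexity" the citing work contrasts with knowledge-guided approaches such as importance-driven local search.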