Many hard real-time systems need enormous computing power, yet most are designed by ad hoc methods. Array processors provide a viable means to achieve such computing power, and they can be designed systematically. This paper presents a systematic design methodology for array-processor-based hard real-time systems.

Introduction

Real-time systems must produce not only logically correct results but also meet timing constraints. Depending on the type of timing constraint, real-time systems are divided into two groups: hard real-time systems and soft real-time systems [1], [2]. A soft real-time system must produce computations as fast as possible so that a statistically described response time is satisfied. In a hard real-time system, computations must be finished before a given deadline.

Analogous to the status of VLSI design in its infancy, there is currently no scientific basis for hard real-time system design [2]. Though most state-of-the-art hard real-time systems have been designed by ad hoc methods, a scientific approach to hard real-time system design is essential, as verification of ad hoc designs is costly and error prone. Due to their huge processing-power requirements, almost all hard real-time systems need a multiprocessing environment. According to [2], a multiprocessor hard real-time system must possess the following features: homogeneity, scalability, survivability, and flexibility.

Array processors consist of a set of modular processing elements (PEs) with spatially local communication, which makes them homogeneous and scalable. Survivability and flexibility can be introduced in the array processor design as well. Furthermore, systematic methods are used in array processor design. These factors make array-processor-based hard real-time systems very attractive. Array processors operating with synchronous (asynchronous) communication are called systolic (wavefront) arrays.
As the array processor contains modular PEs, only design problems associated with regular or partially regular dependence graphs are considered for array processor design.

The rest of this paper is organized as follows. In Section 2, we briefly describe the widely used dependence graph approach and its limitations for real-time array processor design. In Section 3, our design methodology is presented. Finally, conclusions are drawn in Section 4.

Dependence Graph Based Array Processor Design and its Limitations

Structured Dependence Graph Based Array Processor Design

To simplify the ...

... can be handled by these. Therefore, the current practice is to make the DG regular while the algorithm is written in single-assignment form [8]. If the given problem is not associated with a regular DG, dummy operations can be added to obtain a regular DG. The DGs for large and complex problems are not regular in general and are very difficult to make regular by adding dummy operations. Moreover, dummy nodes keep the PEs in the array processor busy unnecessarily, which could prevent hard real-time deadlines from being met.
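The idea of regularizing a dependence graph by adding dummy operations can be illustrated with a minimal sketch. Everything here (node labels, the list-of-rows representation, the `DUMMY` marker) is invented for illustration, not taken from the paper.

```python
# Hypothetical illustration: padding an irregular dependence graph (DG)
# into a regular rectangular grid by inserting dummy (no-op) nodes.
# Such dummy nodes make the DG regular but keep PEs busy unnecessarily,
# which is the drawback the text points out.

def pad_to_regular(dg_rows):
    """dg_rows: list of rows, each a list of operation labels.

    Returns a rectangular grid in which missing positions hold
    'DUMMY' placeholders (no-op operations)."""
    width = max(len(row) for row in dg_rows)
    return [row + ["DUMMY"] * (width - len(row)) for row in dg_rows]

# An irregular DG: rows of unequal length.
irregular = [["op00", "op01", "op02"],
             ["op10"],
             ["op20", "op21"]]

regular = pad_to_regular(irregular)
for row in regular:
    print(row)
```

Even in this toy case, three of the nine grid positions are dummies; for large, complex problems the fraction of wasted PE cycles can grow much larger, which motivates the methodology of Section 3.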
This paper presents a method to optimise the reliability of a circuit in its application using a CAD system that simulates circuit behaviour, including tolerances, and allocates critical parts of the circuit. As an example, a circuit susceptible to electromigration has been optimised towards both reliability and functionability.

Building-in reliability requires a great deal of co-operation between different disciplines [1]. To make a VLSI circuit robust by design, all devices have to be robust against their actual user conditions. This implies that not only detailed knowledge about the devices is required, but also a great deal of information about the circuit behaviour in its application.

At this moment the link between devices and circuit is based on design rules that describe under what conditions devices may be used. This is especially the case for digital circuit design. However, these design rules are normally not detailed enough to perform a proper optimisation of circuit reliability. For example, electromigration design rules may consist of a maximum peak current and a maximum average current. But the actual dynamic waveform is not a parameter, although it influences electromigration. It is therefore necessary to have device failure models at circuit level. Such models must be detailed enough to detect the conditions under which failures occur, but need not describe the whole failure mechanism in detail.

As a circuit usually contains many devices, each having their own failure mechanisms influenced by several stress factors, a systematic approach is needed. This approach must be flexible enough to describe all failure mechanisms properly, but general enough to be usable by a circuit designer, who cannot be an expert in the physical aspects of every failure mechanism. This paper presents a systematic approach to model failure mechanisms at circuit level and to use these models for optimisation of both reliability and functionability.
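The electromigration example above (peak-current and average-current limits that ignore the dynamic waveform) can be sketched as a simple rule check. The limit values, the waveform trace, and the function name are all hypothetical; this only illustrates the kind of rule the text describes, not the paper's actual models.

```python
# Illustrative sketch (assumed values, not from the paper): checking a
# sampled current waveform against two simple electromigration design
# rules -- a maximum peak current and a maximum average current.

def violates_em_rules(current_waveform, i_peak_max, i_avg_max):
    """current_waveform: current samples in mA at uniform time steps.

    Returns True if either the peak or the average current limit
    is exceeded."""
    peak = max(abs(i) for i in current_waveform)
    avg = sum(abs(i) for i in current_waveform) / len(current_waveform)
    return peak > i_peak_max or avg > i_avg_max

# A hypothetical wire current trace: brief spikes on a low baseline.
trace = [0.1, 0.1, 2.5, 0.1, 0.1, 2.5, 0.1, 0.1]
print(violates_em_rules(trace, i_peak_max=3.0, i_avg_max=0.5))
# → True (the average limit is exceeded even though the peak is in spec)
```

Note that two very different waveforms can pass or fail these rules identically, which is exactly the limitation the text raises: the dynamic shape of the waveform is not a parameter, even though it influences electromigration.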
Optimisation is done using a CAD system, which makes it possible to carry out the optimisation at a very early stage of the design process. In this system, the stress factors of the failure mechanisms are calculated using a circuit simulator. The effect of internal and external tolerances is incorporated in the simulation, as reliability problems often occur not in nominal circuits but in extreme circuits and under extreme user conditions. From the simulation results, the sensitivity of the failure behaviour to so-called designable parameters at circuit level is determined. This information is used to optimise the design towards a minimum occurrence of failures. The same methodology is used for functional demands.

Experience from earlier circuits about critical devices or topologies can be stored in a knowledge base. This knowledge base helps the designer to allocate critical parts of the circuit even before circuit simulation has been carried out. It is important to have this possibility of focusing on problem areas, as VLSI circuits are too large to simul...
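The sensitivity-driven optimisation step can be sketched in a few lines. The "simulator" below is a crude stand-in function (current density falling as a wire is widened), the parameter, constants, and step size are assumptions, and the update rule is ordinary finite-difference gradient descent, not the paper's actual algorithm.

```python
# Hedged sketch of the sensitivity step: estimate how a failure-related
# stress metric responds to a designable parameter via finite
# differences, then adjust the parameter to reduce the metric.

def stress_metric(wire_width_um):
    """Hypothetical stand-in for a circuit simulation: an
    electromigration stress proxy (current density) that falls
    as the wire is widened."""
    i_avg_ma = 0.7                       # assumed average current in mA
    return i_avg_ma / wire_width_um      # crude current-density proxy

def sensitivity(f, x, h=1e-4):
    """Central-difference sensitivity df/dx of metric f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

w = 1.0                                  # initial designable parameter (um)
initial_stress = stress_metric(w)
for _ in range(20):                      # simple gradient-descent loop
    w -= 0.1 * sensitivity(stress_metric, w)
print(round(w, 2), round(stress_metric(w), 3))
```

In a real flow the metric would come from circuit simulation over tolerance corners rather than a closed-form function, but the loop structure (simulate, estimate sensitivity, adjust the designable parameter) is the same idea the text describes.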
Traditionally, the position of reliability analysis in the design and production process of electronic circuits is one of reliability verification: a completed design is checked for its reliability aspects and either rejected or accepted for production. This paper describes a method to model physical failure mechanisms within components in such a way that they can be used for reliability optimisation, not after, but during the early phase of the design process. Furthermore, a prototype of a CAD software tool is described, which can highlight components likely to fail and automatically adjust circuit parameters to improve product reliability.