Proceedings 1998 Design and Automation Conference. 35th DAC. (Cat. No.98CH36175)
DOI: 10.1109/dac.1998.724458

User defined coverage - a tool supported methodology for design verification

Cited by 31 publications (32 citation statements); references 0 publications.
“…Specification-based coverage, often referred to as functional coverage [36,21,40], has two possible roles in testing. The first role is to provide information for test case selection when building test sets, to maximise the test adequacy of the set [47].…”
Section: Specification-based Coverage
confidence: 99%
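The first role described above, selecting test cases to maximise the adequacy of a test set, can be sketched as a greedy loop over user-defined coverage tasks. This is an illustrative reconstruction, not the tooling from the cited works; the task predicates, the `run` callback, and the event tuples are all hypothetical.

```python
# Minimal sketch of specification-based (functional) coverage:
# coverage tasks are user-defined predicates over observed events,
# and a candidate test is kept only if it hits a not-yet-covered task.
# All names here are illustrative, not taken from the cited papers.

def build_test_set(candidate_tests, tasks, run):
    """Greedy selection: keep a test iff it covers at least one new task."""
    covered = set()
    selected = []
    for test in candidate_tests:
        events = run(test)  # simulate the test, collect observed events
        hit = {name for name, pred in tasks.items()
               if any(pred(e) for e in events)}
        if hit - covered:        # contributes new coverage
            covered |= hit
            selected.append(test)
    return selected, covered

# Hypothetical coverage tasks over (opcode, operand) event tuples.
tasks = {
    "add_zero": lambda e: e[0] == "add" and e[1] == 0,
    "mul_any":  lambda e: e[0] == "mul",
}
run = lambda test: [test]        # trivially, each "test" is a single event
tests = [("add", 0), ("add", 5), ("mul", 3)]
selected, covered = build_test_set(tests, tasks, run)
```

Here `("add", 5)` is dropped because it covers nothing new, which is exactly the adequacy-driven pruning the excerpt describes.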
“…Unfortunately, design errors sometimes slip through this testing process due to the immense size of the test space. To minimize the probability of undetected errors, designers employ various techniques to improve the quality of verification including co-simulation [4], coverage analysis, random test generation [5], and model-driven test generation [6].…”
Section: Design Faults
confidence: 99%
“…The particular complexity of this area stems from the vast test-space, which includes many corner cases that should each be targeted, and from the intricacies of the implementation of floating-point operations. Verification by simulation involves executing a subset of tests which is assumed to be a representative sample of the entire test-space [1]. In doing so, we would like to be able to define a particular subspace, which we consider as "interesting" in terms of verification, and then generate tests selected at random out of the subspace.…”
Section: Introduction
confidence: 99%
“…FPgen is an automatic floating-point test-generator, which receives as input the description of a floating-point coverage task [1] and outputs a random test that covers this task. A coverage task is defined by specifying a floating-point instruction and a set of constraints on the inputs, on the intermediate result(s), and on the final result.…”
Section: Introduction
confidence: 99%
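The notion of a coverage task as an instruction plus constraints on inputs and result can be illustrated with a toy generator. This is only a hedged sketch using naive rejection sampling; FPgen itself uses far more sophisticated constraint solving, and the instruction, ranges, and predicate below are assumptions for illustration.

```python
# Toy coverage-task generator: given an instruction, ranges for its
# inputs, and a predicate on the result, draw random operands until
# the constraints are satisfied (naive rejection sampling).
import random

def generate(instr, input_ranges, result_pred, tries=100000, seed=0):
    """Return (a, b, result) satisfying the task, or None on give-up."""
    rng = random.Random(seed)
    for _ in range(tries):
        a = rng.uniform(*input_ranges[0])
        b = rng.uniform(*input_ranges[1])
        r = instr(a, b)
        if result_pred(r):       # result constraint of the coverage task
            return a, b, r
    return None

# Hypothetical task: a floating-point add whose result is tiny.
test = generate(lambda a, b: a + b,
                [(-1.0, 1.0), (-1.0, 1.0)],
                lambda r: abs(r) < 1e-3)
```

Each successful draw is a random test selected from the "interesting" subspace that the task's constraints carve out of the full test space.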