Proceedings of the 45th Annual Design Automation Conference 2008
DOI: 10.1145/1391469.1391536

Functional test selection based on unsupervised support vector analysis

Abstract: Extensive software-based simulation continues to be the mainstream methodology for functional verification of designs. To optimize the use of limited simulation resources, coverage metrics are essential to guide the development of effective test suites. Traditional coverage metrics are defined based on either a functional model or a structural model of the design. If our goal is to select a subset of tests from a set of tests, using these coverage metrics requires simulation of the entire set before the effecti…

Cited by 33 publications (22 citation statements)
References 15 publications
“…The early work in [22] proposed to implement the filtering component by solving a novelty detection problem with unsupervised learning. Fig.…”
Section: The Filtering Component (mentioning)
confidence: 99%
“…If a test falls outside, it is novel and selected for simulation. One of the learning algorithms which can be used to build such a novelty model is the Support Vector Machine (SVM) one-class algorithm [13] which was used in [22]. In [22], the tests were assumed to be sequences of binary vectors.…”
Section: The Filtering Component (mentioning)
confidence: 99%
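
A minimal sketch of the novelty-filter idea described in this statement, assuming each test has already been encoded as a fixed-length binary feature vector; the encoding of test sequences, the nu/gamma values, and the array names are illustrative, not taken from [22] or [13]:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Tests that were already simulated, encoded as fixed-length binary vectors.
    simulated_tests = rng.integers(0, 2, size=(200, 64))
    # Candidate tests to be filtered before spending simulation cycles on them.
    candidate_tests = rng.integers(0, 2, size=(50, 64))

    # Fit a one-class model that encloses the region covered by the simulated tests.
    model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
    model.fit(simulated_tests)

    # predict() returns +1 for points inside the learned region and -1 for outliers;
    # the outliers are the "novel" tests worth simulating next.
    novel_mask = model.predict(candidate_tests) == -1
    selected = candidate_tests[novel_mask]
    print(f"selected {int(novel_mask.sum())} of {len(candidate_tests)} candidate tests")
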
“…In the recursive incremental decision tree algorithm described in Figure 4, the parts different from GoldMine (lines 4, 7, 8) are outlined. Figure 5 shows a regular decision tree and an incremental version of it.…”
Section: Counterexample-based Incremental Decision Trees (mentioning)
confidence: 99%
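
A rough sketch of the counterexample-driven refinement loop this statement refers to, not GoldMine's actual incremental algorithm: here the tree is simply regrown from scratch after each counterexample is appended, which is the non-incremental baseline the cited technique improves on; the toy target function and helper names are assumptions:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def regrow(X, y):
        # Fit a fresh tree on the accumulated trace + counterexample data.
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(X, y)
        return tree

    # Initial simulation traces: binary input vectors and an observed target bit.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(100, 8))
    y = X[:, 0] & X[:, 1]            # toy target: out = in0 AND in1

    tree = regrow(X, y)

    # A formal tool falsifies a mined candidate and returns a counterexample;
    # fold it back into the data set and regrow the tree.
    cex_input = np.array([[1, 0, 1, 1, 0, 0, 1, 0]])
    cex_output = np.array([0])
    X = np.vstack([X, cex_input])
    y = np.concatenate([y, cex_output])
    tree = regrow(X, y)
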
“…Recently, some techniques based on DUV's input space analysis have been proposed, aiming to enhance the quality of generated stimuli values; moreover, this may improve testbench application performance by reducing the number of simulation cycles required to fulfill the coverage criteria. In [6], a methodology based on Support Vector Machines is proposed; however, the verification process presents an intrinsic overhead due to the model learning and the testbench feedback. In [7], the authors presented the Parameter Domains formalism and framework for the removal of redundant and/or undesired stimuli values.…”
Section: Introduction (mentioning)
confidence: 99%
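
An illustrative loop, under assumed names and parameters, showing where the overhead this statement attributes to [6] arises: each iteration pays for relearning the novelty model and for feeding the filtered tests back through the simulator (simulate_and_measure_coverage is a placeholder, not a real API):

    import numpy as np
    from sklearn.svm import OneClassSVM

    def simulate_and_measure_coverage(tests):
        # Placeholder for the real testbench run; returns a fake coverage figure.
        return min(1.0, 0.005 * len(tests))

    rng = np.random.default_rng(1)
    simulated = rng.integers(0, 2, size=(50, 32))      # seed tests already run
    coverage = simulate_and_measure_coverage(simulated)

    while coverage < 0.95:
        # Overhead 1: the novelty model is relearned from everything simulated so far.
        model = OneClassSVM(nu=0.1, gamma="scale").fit(simulated)

        # Overhead 2: freshly generated tests are filtered and only the novel ones
        # are fed back through the (slow) simulator to update coverage.
        generated = rng.integers(0, 2, size=(100, 32))
        novel = generated[model.predict(generated) == -1]
        if len(novel) == 0:
            break
        simulated = np.vstack([simulated, novel])
        coverage = simulate_and_measure_coverage(simulated)
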
“…In order to do so, a Python script, which helps automate the creation of this stimuli generator (in SystemC language) using classes from the SystemC Verification Library, SCV [8], was developed. Comparison was made to the random approach in a traditional (conventional) testbench execution procedure as in [6], where the Stimuli Source block was manually generated. After application on large circuit modules, results have shown an improvement of up to 22% in verification time.…”
Section: Introduction (mentioning)
confidence: 99%
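
A hedged sketch of the kind of generation script this statement describes: a few lines of Python that emit a minimal SystemC/SCV stimulus-source skeleton built around scv_smart_ptr. The field names, widths, and output file name are illustrative and are not taken from [8] or the cited work:

    # Describe the DUV input fields to randomize: (name, bit width).
    FIELDS = [("opcode", 4), ("addr", 16), ("data", 32)]

    lines = ["#include <systemc.h>",
             "#include <scv.h>",
             "",
             "// Auto-generated stimulus source: randomizes every field on each call.",
             "struct stimuli_source {"]
    lines += [f"    scv_smart_ptr< sc_uint<{w}> > {name};" for name, w in FIELDS]
    lines += ["", "    void next_stimulus() {"]
    lines += [f"        {name}->next();" for name, _ in FIELDS]
    lines += ["    }", "};", ""]

    # Write the generated SystemC header for inclusion in the testbench.
    with open("stimuli_source.h", "w") as f:
        f.write("\n".join(lines))
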