Novel test detection is an approach that improves simulation efficiency by selecting novel tests before they are applied [1]. Techniques have been proposed for applying the approach in the context of processor verification [2]. This work reports our experience in applying the approach to verifying a commercial processor. Our objectives are threefold: to implement the approach in a practical setting, to assess its effectiveness, and to understand the challenges of applying it in practice. The experiments are conducted in a simulation environment for verifying a commercial dual-thread low-power processor core. Focusing on the complex fixed-point unit, the results show up to a 96% saving in simulation time. The main limitation of the implementation is discussed in the context of the load-store unit, with promising initial results showing how it can be overcome.
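The core idea above can be illustrated with a minimal sketch: only tests whose behavior looks new relative to previously simulated tests are passed on to simulation. The feature extraction below is a placeholder assumption for illustration; the cited work uses learned novelty models rather than exact signature matching.

```python
def extract_features(test):
    # Placeholder: in practice, features would be derived from the test's
    # instruction mix, operand patterns, etc.
    return tuple(sorted(set(test)))

def select_novel_tests(tests, seen=None):
    """Return only tests whose feature signature has not been seen before."""
    seen = set() if seen is None else seen
    selected = []
    for t in tests:
        sig = extract_features(t)
        if sig not in seen:  # novel relative to everything simulated so far
            seen.add(sig)
            selected.append(t)
    return selected

tests = [["add", "sub"], ["sub", "add"], ["mul"], ["add", "sub"]]
novel = select_novel_tests(tests)
print(novel)  # only 2 of the 4 tests pass the filter
```

Simulation time saving comes from the tests that are filtered out: here, half the tests would be skipped because they add no new behavior under the assumed feature view.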
This work studies the potential of capturing customer returns with models constructed through multivariate analysis of parametric wafer sort test measurements. In such an analysis, subsets of tests are selected to build models for making pass/fail decisions. Two approaches are considered. A preemptive approach selects correlated tests and constructs multivariate test models to screen out outliers; this approach does not rely on known customer returns. In contrast, a reactive approach selects tests relevant to a given customer return and builds an outlier model specific to that return; this model is then applied to capture future parts similar to the return. The study is based on test data collected over roughly 16 months of production for a high-quality SoC sold to the automotive market. The data consists of 62 customer returns belonging to 52 lots. The study shows that each approach can capture returns not captured by the other. Together, the two approaches show that multivariate test analysis can significantly reduce customer return rates, especially during the later period of production.
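The preemptive approach can be sketched for a single pair of correlated tests: fit the bivariate trend across the population and flag parts that deviate abnormally from it. This is a minimal illustration with made-up data and an illustrative threshold; the actual study selects many test subsets and uses richer multivariate models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def screen_outliers(xs, ys, k=2.0):
    """Flag part indices whose residual from the bivariate fit exceeds k sigma."""
    a, b = fit_line(xs, ys)
    res = [y - (a * x + b) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in res) / len(res)) ** 0.5
    return [i for i, r in enumerate(res) if abs(r) > k * sigma]

# Two correlated parametric tests: nine parts follow the trend y ~ 2x,
# one part (index 9) breaks the correlation.
xs = list(range(1, 11))
ys = [2 * x for x in xs[:-1]] + [50]
print(screen_outliers(xs, ys))  # part index 9 is flagged
```

The key point is that the flagged part may pass each test individually; it is the broken correlation between tests that marks it as an outlier and a candidate escapee.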
This paper studies the potential of using wafer probe tests to predict the outcome of future tests. The study is carried out using test data from an SoC design for the automotive market. Given a set of known failing parts, there are two possible learning approaches. First, a single binary classification model can be learned to cover all failing parts. We show that this approach can be effective when the failing parts are compatible for learning. Second, an individual outlier model can be learned for each failing part. We show that this approach is suitable for failing parts such as customer returns, where each part may have a unique failing behavior. We also show that with Principal Component Analysis (PCA), a learning model can be visualized in a two- or three-dimensional PC space, which enables an engineer to manually select or adjust the model.
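The PCA-based visualization step can be sketched as follows: mean-center the test measurements, compute their covariance, and project onto the dominant principal component. This illustration extracts only the top component via power iteration on toy data; an actual flow would project onto two or three components to obtain the visualizable PC space described above.

```python
def top_pc(data, iters=100):
    """Top principal component of mean-centered data via power iteration."""
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    X = [[x - m for x, m in zip(row, mean)] for row in data]
    # Covariance matrix of the centered measurements.
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / n for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy measurements for two correlated tests: variance lies along the diagonal.
data = [[1, 1], [2, 2.1], [3, 2.9], [4, 4.2], [5, 5.0]]
pc1 = top_pc(data)
print(pc1)  # roughly [0.707, 0.707]: the data varies along the diagonal
```

Plotting parts by their coordinates along the top two or three such components gives the low-dimensional view in which an engineer can inspect and adjust a model's decision boundary.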
Burn-in is a common test approach for screening out unreliable parts, but its cost can be significant due to long burn-in periods and expensive equipment. This work studies the potential of using parametric test data to reduce burn-in time. The experiment focuses on developing parametric test models, based on test data collected after 10 hours of burn-in, to predict parts likely to fail after 24 and 48 hours of burn-in. Our study shows that 24-hour and 48-hour burn-in failures already behave abnormally in multivariate parametric test spaces after 10 hours of burn-in. Hence, it is possible to develop multivariate test models that identify these likely-to-fail parts early in a burn-in cycle. The study is carried out on 8 lots of test data from a burn-in experiment based on a 3-axis accelerometer design. It shows that after 10 hours of burn-in, a large portion of parts can be identified as not requiring longer burn-in, potentially providing significant cost savings.
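The prediction step can be sketched in its simplest form: treat the 10-hour parametric measurements as a feature space and flag parts that deviate abnormally from the population on any test. The per-test z-score screen and the threshold below are illustrative assumptions on toy data; the study itself builds multivariate models over selected test subsets rather than univariate screens.

```python
def zscore_flags(measurements, k=2.0):
    """Flag part indices whose value on any test deviates more than k sigma."""
    n_tests = len(measurements[0])
    flagged = set()
    for t in range(n_tests):
        col = [m[t] for m in measurements]
        mu = sum(col) / len(col)
        sd = (sum((x - mu) ** 2 for x in col) / len(col)) ** 0.5
        for i, x in enumerate(col):
            if sd and abs(x - mu) > k * sd:
                flagged.add(i)
    return sorted(flagged)

# Toy 10-hour data for two parametric tests: part 9 is abnormal on test 0.
parts = [[1.0, 2.0] for _ in range(9)] + [[5.0, 2.0]]
print(zscore_flags(parts))  # part index 9 is routed to extended burn-in
```

Parts that are not flagged after 10 hours would be released early; only the flagged minority would continue to the 24- or 48-hour burn-in, which is where the cost saving comes from.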