2017
DOI: 10.1115/1.4037179
Evaluating the Use of Artificial Neural Networks and Graph Complexity to Predict Automotive Assembly Quality Defects

Abstract: This paper presents the use of subassembly models instead of the entire assembly model to predict assembly quality defects at an automotive original equipment manufacturer (OEM). Specifically, artificial neural networks (ANNs) were used to predict assembly time and market value from assembly models. These models were converted into bipartite graphs from which 29 graph complexity metrics were extracted to train 18,900 ANN prediction models. The size of the training set, order of the bipartite graph, selection o…
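The abstract describes converting assembly models into bipartite graphs (parts on one side, connections on the other) and extracting complexity metrics as ANN inputs. The paper's actual 29 metrics are not listed in this excerpt; the sketch below computes a few illustrative stand-in metrics from a minimal bipartite representation, purely as an assumption about what such features might look like.

```python
# Hypothetical sketch: simple complexity metrics from a bipartite assembly
# graph, represented as a dict mapping each part to the set of connection
# (joint) nodes it touches. These are illustrative stand-ins, not the
# paper's actual 29 metrics.

def bipartite_metrics(part_to_joints):
    parts = set(part_to_joints)
    joints = set().union(*part_to_joints.values()) if part_to_joints else set()
    edges = sum(len(js) for js in part_to_joints.values())
    n, m = len(parts), len(joints)
    return {
        "num_parts": n,                                  # order (part side)
        "num_joints": m,                                 # order (joint side)
        "num_edges": edges,                              # size of the graph
        "density": edges / (n * m) if n and m else 0.0,  # fraction of possible edges
        "avg_part_degree": edges / n if n else 0.0,      # mean connections per part
    }

# Toy subassembly: two parts sharing one joint
metrics = bipartite_metrics({"bolt": {"j1"}, "bracket": {"j1", "j2"}})
```

A feature vector like this, computed per subassembly, could then serve as the input layer of an ANN regressor.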


Cited by 14 publications (8 citation statements)
References 19 publications
“…In this manner, the prediction error can be greater than 100% for the percent error and the normalized error. It is fully recognized that improvements to the predictive power of the approach can be made based on larger training sets, more similar training samples, and including information beyond the graph connectivity of the models (see Patel et al, 2016, for a comparison on factors influencing the inferencing power). However, improving the predictive capabilities of the approach is not the focus of this paper; rather, comparing the representations using the same prediction framework is.…”
Section: Methods
confidence: 99%
“…To understand the accuracy from each of the four prediction models, it is important to understand the amount of error in each of these models. To calculate the amount of error, three different types of error analysis formulae, listed in this section, were used based on those found in Patel et al (2016). To handle the wide range of values calculated by the ANNs, it is important to have a specialized error calculation formula.…”
Section: Methods
confidence: 99%
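The excerpt above refers to percent error and normalized error measures drawn from Patel et al. (2016), noting both can exceed 100%. The exact formulae are not reproduced in this excerpt, so the definitions below are illustrative assumptions of the conventional forms, included only to show why values above 100% arise.

```python
# Hedged sketch of the two error measures discussed above; the exact
# formulae from Patel et al. (2016) are not given in this excerpt, so
# these definitions are illustrative assumptions.

def percent_error(predicted, actual):
    """Absolute error relative to the true value, as a percentage.
    Exceeds 100% whenever the prediction is off by more than the
    true value itself."""
    return abs(predicted - actual) / abs(actual) * 100.0

def normalized_error(predicted, actual, scale):
    """Absolute error divided by a fixed scale (e.g. the range of the
    training targets) -- a hypothetical normalization; also unbounded
    above 1.0 when the miss exceeds the scale."""
    return abs(predicted - actual) / scale

# A prediction of 250 against a true value of 100 gives 150% error
err = percent_error(250.0, 100.0)
```

Dividing by a fixed scale rather than the per-sample true value is one common way to keep errors comparable across the wide range of target values the excerpt mentions.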
“…The utilization of deep learning, and specifically autoencoders, also led to the creation of a computational framework that models the curiosity of a given user in order to provide surprising examples [21]. Neural networks have also been utilized to automatically predict quality defects in automotive parts [22] and to support design for additive manufacturing [23][24][25]. These examples, while not exhaustive, serve to highlight the potential utility of neural networks for design and the need for a standardized approach to implementing them.…”
Section: Neural Network and Deep Learning
confidence: 99%
“…Several studies have been undertaken to study the authorship, consistency, and interpretability of models (Kurfman et al, 2003; Caldwell, Ramachandran, et al, 2012; Caldwell, Thomas, et al, 2012), as well as elucidating the correctness of model construction (Nagel et al, 2015). Alternatively, some approaches have been proposed that automatically reason on function models from database collections (Lucero et al, 2014; Patel, Andrews, et al, 2016; Sridhar et al, 2016). Finally, some approaches entail the support of first-principles-based physics reasoning (Goel et al, 2009; Sen et al, 2011b, 2013a).…”
Section: Levels of Comparison
confidence: 99%