In airworthiness assurance, while there is a long tradition of measuring inspection reliability for machine-aided Non-Destructive Inspection (NDI), the more common visual inspection has received little attention. Yet inspection reliability measurements are needed if we are to set appropriate inspection intervals for airframe components. Visual inspection of aircraft is characterized as using multiple senses (despite its name) and as having to inspect for multiple fault types, in contrast to NDI, which is used for single, specific fault types. The study here used 12 professional inspectors to perform nine visual inspection tasks on a long-service Boeing 737 aircraft. Each inspector worked over two days. Measures were taken of performance, strategy, and individual differences. Only a fraction of the results are presented here; a major finding is that aircraft visual inspection has approximately the same reliability as industrial inspection. Individual differences were found, as were correlations between certain aspects of performance and individual characteristics such as Field Independence and Peripheral Visual Acuity. However, there was little correlation between an individual inspector's performance on the different tasks, showing the difficulty of designing selection and placement procedures for such a wide-ranging job.
In nondestructive evaluation (NDE), measurement outputs usually involve several sources of variability, such as operator variation, flaw-morphology variation, setup and calibration variation, environment-related variation, and measurement error. If an appropriate experiment is conducted, it is possible to estimate the separate effects of these different sources of variability. These sources of variability imply that the Probability of Detection (POD) is itself random, depending, for example, on the operator assigned to do the inspection. Traditional POD analysis has focused on estimating the mean of the POD distribution (i.e., a POD averaged over the different sources of variability reflected in the data), along with an associated 95% lower confidence bound to reflect statistical uncertainty (i.e., uncertainty due to limited data). Focusing on mean POD obscures the process variability and has the potential to provide an overly optimistic impression of POD when there is considerable variation. An alternative, commonly used in other areas of statistical analysis, such as product reliability, is to make inferences on a lower quantile of the distribution. In this paper, we emphasize the important difference between mean POD and quantile POD and provide guidance about when each should be used.
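The distinction between mean POD and quantile POD can be illustrated with a small simulation. The sketch below assumes a hypothetical logistic POD model with a random operator effect (the parameter values `b0`, `b1`, and `sigma_u` are illustrative, not from the paper): averaging over operators gives the mean POD, while the 0.05 quantile shows what a below-average operator achieves.

```python
import numpy as np

rng = np.random.default_rng(42)

def pod(a, u=0.0, b0=-4.0, b1=3.0):
    """Hypothetical logistic POD model for flaw size a with operator effect u.
    Parameter values are illustrative only."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a) + u)))

# Operator-to-operator variability modeled as a random intercept
sigma_u = 1.0
u = rng.normal(0.0, sigma_u, size=10_000)  # simulated operator effects

a = 4.0  # a fixed flaw size (arbitrary units)
pods = pod(a, u=u)  # POD distribution induced by operator variability

mean_pod = pods.mean()               # "traditional" mean POD
q05_pod = np.quantile(pods, 0.05)    # 0.05-quantile POD

print(f"mean POD          = {mean_pod:.3f}")
print(f"0.05-quantile POD = {q05_pod:.3f}")
# With substantial operator variation, the quantile POD sits well below
# the mean POD, so the mean alone can look overly optimistic.
```

When `sigma_u` is near zero the two summaries nearly coincide; the gap between them is exactly the process variability that a mean-only analysis hides.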