In this paper, we systematically review recent advances in surface inspection using computer vision and image processing techniques, particularly those based on texture analysis. The aim is to review state-of-the-art techniques for visual inspection, together with decision-making schemes that discriminate between features extracted from normal and defective regions. This field is so vast that it is impossible to cover every aspect of visual inspection; this paper focuses on a particular but important subset that generally treats visual surface inspection as a texture analysis problem. Other topics related to visual inspection, such as imaging systems and data acquisition, are outside the scope of this survey. Surface defects are loosely separated into two types. The first is local textural irregularities, which are the main concern in most visual surface inspection applications. The second is global deviation of colour and/or texture, where the local pattern does not exhibit abnormalities; we refer to this type of defect as the shade or tonality problem. This second type has been largely neglected until recently, particularly as colour imaging systems have become widely used in visual inspection and chromatic consistency has come to play an important role in quality control. The emphasis of this survey, however, remains on detecting local abnormalities, given that the majority of reported works deal with the first type of defect. The techniques used to inspect textural abnormalities are discussed in four categories: statistical approaches, structural approaches, filter-based methods, and model-based approaches, with a comprehensive list of references to recent works. Owing to the rising demand for and practice of colour texture analysis in visual inspection, works dealing with colour texture analysis are discussed separately.
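To make the statistical category concrete, the sketch below flags image windows whose first-order statistics (mean and standard deviation) deviate from those of a defect-free reference sample. The window size and tolerance thresholds are illustrative assumptions, not values from the survey.

```python
# Minimal statistical-approach sketch: compare window statistics of a
# test image against global statistics learned from a defect-free sample.
from statistics import mean, pstdev

def reference_stats(defect_free_image):
    """Global mean and std of a defect-free training sample."""
    flat = [p for row in defect_free_image for p in row]
    return mean(flat), pstdev(flat)

def window_stats(image, y, x, size):
    """Mean and std of a size x size window with top-left corner (y, x)."""
    pixels = [image[y + dy][x + dx] for dy in range(size) for dx in range(size)]
    return mean(pixels), pstdev(pixels)

def detect_defects(image, ref_mean, ref_std, size=2, mean_tol=30.0, std_tol=30.0):
    """Return top-left corners of non-overlapping windows whose
    statistics deviate from the defect-free reference."""
    h, w = len(image), len(image[0])
    flagged = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            m, s = window_stats(image, y, x, size)
            if abs(m - ref_mean) > mean_tol or abs(s - ref_std) > std_tol:
                flagged.append((y, x))
    return flagged
```

Real systems replace these first-order statistics with richer features (co-occurrence matrices, filter-bank responses), but the detect-by-deviation structure is the same.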
It is also worth noting that processing vector-valued data has its own challenges, which conventional surface inspection methods have often ignored or do not encounter. We also compare classification approaches with novelty detection approaches at the decision-making stage. Classification approaches often require supervised training and usually provide better performance than novelty detection approaches, in which training is carried out only on defect-free samples. However, novelty detection is relatively easy to adapt and is particularly desirable when training samples are incomplete.
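The novelty detection decision stage described above can be sketched very simply: fit a model to feature values from defect-free samples only, then flag any test sample that falls too far from the learned distribution. A one-dimensional Gaussian model with a k-sigma threshold stands in here for whatever feature model a real system would use; the feature extraction itself is assumed to have been done.

```python
# Novelty detection sketch: train on defect-free features only,
# flag anything more than k standard deviations from the mean.
from statistics import mean, pstdev

def fit_novelty_model(defect_free_features):
    """Learn mean and std from defect-free feature values."""
    return mean(defect_free_features), pstdev(defect_free_features)

def is_novel(feature, model, k=3.0):
    """True if the feature deviates more than k sigma from training."""
    mu, sigma = model
    return abs(feature - mu) > k * sigma
```

Note how this needs no defective examples at all, which is exactly why novelty detection suits applications where defect samples are scarce or unrepresentative.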
Abstract: We propose an active contour model using an external force field based on magnetostatics and hypothesized magnetic interactions between the active contour and object boundaries. The major contribution of the method is that the interaction of its forces greatly improves the active contour's ability to capture complex geometries and to deal with difficult initializations, weak edges, and broken boundaries. The proposed method achieves significant improvements when compared against six well-known and state-of-the-art shape recovery methods, including the geodesic snake, the generalized gradient vector flow (GVF) snake, the combined geodesic and GVF snake, and the charged particle model.
Abstract: An enhanced, region-aided, geometric active contour that is more tolerant of weak edges and noise in images is introduced. The proposed method integrates gradient flow forces with region constraints, composed of image region vector flow forces obtained through the diffusion of the region segmentation map. We refer to this as the Region-aided Geometric Snake, or RAGS. The diffused region forces can be generated from any reliable region segmentation technique, grey-level or color. This extra region force gives the snake a global, complementary view of the boundary information within the image which, along with the local gradient flow, helps it detect fuzzy boundaries and overcome noisy regions. The partial differential equation (PDE) resulting from this integration of image gradient flow and diffused region flow is implemented using a level set approach. We present various examples and also evaluate and compare the performance of RAGS on weak boundaries and noisy images.
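The idea of a "diffused region force" can be illustrated in one dimension: take a binary region segmentation map, diffuse it with a few explicit heat-equation steps, and use the gradient of the diffused map as a force that points toward the region boundary even from far away. This is a hypothetical stand-in for the actual vector diffusion used by RAGS, kept scalar and 1D for clarity.

```python
# Illustrative 1D sketch of diffusing a segmentation map so that its
# gradient yields a smooth region force away from the boundary.
def diffuse(signal, steps=50, rate=0.25):
    """Explicit heat-equation smoothing with Neumann boundaries."""
    s = list(signal)
    for _ in range(steps):
        s = [s[i] + rate * (s[max(i - 1, 0)] - 2 * s[i] + s[min(i + 1, len(s) - 1)])
             for i in range(len(s))]
    return s

def region_force(diffused):
    """Central-difference gradient of the diffused map; its sign
    points toward the region from either side of the boundary."""
    return [0.0] + [(diffused[i + 1] - diffused[i - 1]) / 2.0
                    for i in range(1, len(diffused) - 1)] + [0.0]
```

After diffusion, the force is strongest at the original boundary and decays smoothly with distance, which is what lets a snake far from the boundary still feel a pull.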
We present an approach to detecting and localizing defects in random color textures that requires only a few defect-free samples for unsupervised training. It is assumed that each image is generated by a superposition of image patches of various sizes, with added variation at each pixel position. These image patches and their corresponding variances are referred to here as textural exemplars, or texems. Mixture models are applied to obtain the texems, using multiscale analysis to reduce the computational cost. Novelty detection on color texture surfaces is performed by examining same-source similarity based on the data likelihood across scales, followed by logical processes that combine the defect candidates to localize defects. The proposed method is compared against a Gabor filter bank-based novelty detection method. We also compare different texem generalization schemes for defect detection in terms of accuracy and efficiency.
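An illustrative reduction of the texem idea: learn a per-pixel mean and variance from defect-free patches (a single exemplar here, rather than the mixture the paper uses), then score a test patch by its Gaussian log-likelihood under that model; an unusually low likelihood marks a defect candidate. The patch representation and the use of one exemplar instead of a mixture are simplifying assumptions.

```python
# Single-texem sketch: per-pixel Gaussian model learned from
# defect-free patches, scored by log-likelihood.
import math

def learn_texem(patches):
    """Per-pixel mean and variance over flattened training patches."""
    n, d = len(patches), len(patches[0])
    mu = [sum(p[i] for p in patches) / n for i in range(d)]
    var = [max(sum((p[i] - mu[i]) ** 2 for p in patches) / n, 1e-6)
           for i in range(d)]
    return mu, var

def log_likelihood(patch, texem):
    """Gaussian log-likelihood of a flattened patch under the texem."""
    mu, var = texem
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(patch, mu, var))
```

The full method generalizes this with a mixture of texems and evaluates the likelihood at multiple scales before fusing the defect candidates.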
Psychological studies and behavioural observations show that humans shift their attention from one location to another when viewing an image of a complex scene, owing to the limited capacity of the human visual system to process multiple visual inputs simultaneously. The sequential shifting of attention over objects in non-task-oriented viewing can be seen as a form of saliency ranking. Although methods have been proposed for predicting saliency rank, they do not model this human attention shift well, as they are primarily based on ranking saliency values from binary prediction. Following psychological studies, we propose in this paper to predict the saliency rank by inferring human attention shift. We first construct a large salient object ranking dataset, in which the saliency rank of objects is defined by the order in which an observer attends to these objects based on attention shift; the final saliency rank is an average across the saliency ranks of multiple observers. We then propose a learning-based CNN that leverages both bottom-up and top-down attention mechanisms to predict the saliency rank. Experimental results show that the proposed network achieves state-of-the-art performance on salient object rank prediction.
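The ground-truth construction described above, averaging each object's attention-order position across observers, can be sketched directly. The function names and the simple mean-position aggregation are assumptions for illustration; the paper's exact averaging procedure may differ.

```python
# Sketch of averaging saliency ranks across observers: each observer
# gives an attention order over object ids; objects are ranked by
# their mean position across all observers.
def average_saliency_rank(observer_orders):
    """observer_orders: list of lists of object ids, each in the
    order attended. Returns object ids sorted by mean rank."""
    positions = {}
    for order in observer_orders:
        for pos, obj in enumerate(order):
            positions.setdefault(obj, []).append(pos)
    mean_rank = {obj: sum(ps) / len(ps) for obj, ps in positions.items()}
    return sorted(mean_rank, key=mean_rank.get)
```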
There is a need for solutions that help users understand long time-series data by observing its changes over time, finding repeated patterns, detecting outliers, and effectively labeling data instances. Although these tasks are quite distinct and are usually tackled separately, we present an interactive visual analytics system and approach that address these issues in a single system. It enables users to visualize, understand, and explore univariate or multivariate long time-series data in one image using a connected scatter plot, and it supports interactive analysis and exploration for pattern discovery and outlier detection. Different dimensionality reduction techniques are used and compared in our system. Because of its power in extracting features, deep learning is used for multivariate time series, along with 2D reduction techniques, to allow rapid and easy interpretation of and interaction with large amounts of time-series data. We deploy our system with different time-series datasets and report two real-world case studies used to evaluate the system.
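Of the tasks listed, outlier detection has the simplest baseline: flag time steps whose values deviate strongly from the series statistics. The z-score test below is a minimal, assumption-laden stand-in for the system's actual detection pipeline, which works on learned or reduced representations rather than raw values.

```python
# Baseline time-series outlier detection: flag points more than
# k standard deviations from the series mean.
from statistics import mean, pstdev

def zscore_outliers(series, k=3.0):
    """Indices of points whose |z-score| exceeds k."""
    mu, sigma = mean(series), pstdev(series)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > k]
```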
Image-based noninvasive fractional flow reserve (FFR) is an emerging approach for determining the functional relevance of coronary stenoses. The present work aimed to determine the feasibility of using a method based on coronary computed tomography angiography (CCTA) and reduced-order (0D-1D) models for the evaluation of coronary stenoses. The reduced-order methodology (cFFR-RO) was kept as simple as possible and did not include pressure drop or stenosis models; the geometry definition was incorporated into the physical model used to solve coronary flow and pressure. cFFR-RO was assessed on a virtual cohort of 30 coronary artery stenoses in 25 vessels and compared with a standard approach based on 3D computational fluid dynamics (cFFR-3D). In this proof-of-concept study, we sought to investigate the influence of geometry and boundary conditions on the agreement between the two methods. Performance on a per-vessel level showed a good correlation between the methods (Pearson's product-moment R = 0.885, P < 0.01), with cFFR-3D as the reference standard. The 95% limits of agreement were −0.116 and 0.08, and the mean bias was −0.018 (SD = 0.05). Our results suggest no appreciable difference between cFFR-RO and cFFR-3D with respect to lesion length and/or aspect ratio. At a fixed aspect ratio, however, stenosis severity and shape appeared to be the most critical factors accounting for differences between the methods. Despite the assumptions inherent in the 1D formulation, asymmetry did not seem to affect the agreement. The choice of boundary conditions is critical in obtaining a functionally significant drop in pressure. Our initial data suggest that this approach may form part of a broader risk assessment strategy aimed at increasing the diagnostic yield of cardiac catheterisation for in-hospital evaluation of haemodynamically significant stenoses.
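The agreement statistics quoted above (mean bias and 95% limits of agreement) follow the standard Bland-Altman construction: the bias is the mean of the paired differences, and the limits are bias ± 1.96 times the standard deviation of those differences. The helper below shows the computation; the data passed to it are illustrative, not the study's.

```python
# Bland-Altman agreement statistics for two paired measurement methods.
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired lists a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```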
Keywords: boundary conditions, coronary stenosis severity, shape and asymmetry, non-invasive fractional flow reserve, reduced-order model