Machine interpretation of the shape of a component from CAD databases is an important problem in CAD/CAM, computer vision, and intelligent manufacturing. It can be used in CAD/CAM for evaluation of designs, in computer vision for machine recognition and machine inspection of objects, and in intelligent manufacturing for automating and integrating the link between design and manufacturing. This topic has been an active area of research since the late 1970s, and a significant number of computational methods have been proposed to identify portions of the geometry of a part that have engineering significance (here called "features"). However, each proposed mechanism has been able to solve the problem only for components within a restricted geometric domain (such as polyhedral components), or only for components whose features interact with each other in a restricted manner. The purposes of this article are to review and summarize the development of research on machine recognition of features from CAD data, to discuss the advantages and potential problems of each approach, and to point out some promising directions that future investigations may take. Since most work in this field has focused on machining features, the article primarily covers features associated with the manufacturing domain. To better characterize the state of the art, methods of automated feature recognition are divided into the following categories based on their approach: graph-based, syntactic pattern recognition, rule-based, and volumetric. Within each category we examine issues such as the definition of features, the mechanisms developed for recognizing features, the scope of application, and the assumptions made.
In addition, the problem is addressed from the perspective of information input requirements, and the advantages and disadvantages of boundary representation (B-rep), constructive solid geometry (CSG), and 2D drawings with respect to machine recognition of features are examined. Emphasis is placed on mechanisms for attacking problems associated with interacting features.
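To make the graph-based category above concrete, here is a minimal sketch (not taken from the surveyed papers): faces of a part's boundary representation become graph nodes, shared edges become arcs labeled concave or convex, and a feature such as a slot is matched as a subgraph pattern. The face names, the toy adjacency data, and the slot rule (a floor face joined to exactly two wall faces by concave edges) are illustrative assumptions, not a definitive recognizer.

```python
# Hypothetical attributed adjacency data for a block with one rectangular
# slot: each key is a pair of face ids sharing an edge, and the value is
# the edge's concavity label (0 = concave, 1 = convex).
adjacency = {
    ("slot_wall_1", "slot_floor"): 0,
    ("slot_floor", "slot_wall_2"): 0,
    ("slot_wall_1", "top"): 1,
    ("slot_wall_2", "top"): 1,
}

def neighbors(face, label):
    """Faces joined to `face` by an edge carrying the given concavity label."""
    out = set()
    for (a, b), lab in adjacency.items():
        if lab != label:
            continue
        if a == face:
            out.add(b)
        elif b == face:
            out.add(a)
    return out

def find_slots():
    """Illustrative slot rule: a face with exactly two concave-edge
    neighbors (its walls) and no convex-edge neighbors (its floor)."""
    slots = []
    faces = {f for pair in adjacency for f in pair}
    for f in sorted(faces):
        walls = neighbors(f, 0)
        if len(walls) == 2 and not neighbors(f, 1):
            slots.append((f, tuple(sorted(walls))))
    return slots

print(find_slots())
```

Real recognizers of this family work on the full face-adjacency graph of a B-rep model and use general subgraph matching; the restricted pattern rule here is exactly the kind of assumption that, as the survey notes, limits each method to a particular feature domain.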
Spatial quantization error and displacement error are inherent in automated visual inspection systems. This paper discusses the effect of spatial quantization errors and displacement errors on the precision of dimensional measurements of an edge segment. A probabilistic analysis in terms of the resolution of the image is developed for two-dimensional (2-D) quantization errors: expressions for the mean and variance of these errors are developed, and the probability density function (pdf) of the quantization error is derived. The position and orientation errors of the active head are assumed to be normally distributed, and a probabilistic analysis in terms of these errors is developed for the displacement errors. By integrating the spatial quantization errors and the displacement errors, the total error in the active vision inspection system can be computed. Based on the developed analysis, we investigate whether a given set of sensor setting parameters in an active system is suitable for obtaining a desired accuracy for specific dimensional measurements. In addition, based on this approach, one can determine sensor positions and view directions that meet the required tolerance and accuracy of inspection.
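The abstract's error model can be sketched numerically. The simulation below is not the paper's derivation; it is a minimal one-dimensional sanity check, with arbitrary placeholder values for the pixel size and the displacement-error spread, that illustrates the two classical facts the analysis builds on: uniform quantization error has mean 0 and variance Δ²/12, and independent error sources add in variance.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1      # pixel size (spatial quantization step), arbitrary units
sigma_d = 0.05   # std. dev. of the (assumed Gaussian) displacement error

# True edge positions, uniformly spread so the quantization error is
# uniform on [-delta/2, delta/2].
true_pos = rng.uniform(0.0, 10.0, size=100_000)

# Spatial quantization: the measured position snaps to the pixel grid.
quantized = np.round(true_pos / delta) * delta
q_err = quantized - true_pos

# Displacement error: zero-mean normal, per the paper's assumption.
d_err = rng.normal(0.0, sigma_d, size=true_pos.shape)

total_err = q_err + d_err

# Uniform quantizer: mean 0, variance delta**2 / 12.
print(q_err.mean(), q_err.var(), delta**2 / 12)
# Independent sources: total variance is the sum of the two variances.
print(total_err.var(), delta**2 / 12 + sigma_d**2)
```

The paper's contribution is the 2-D generalization of this picture (the pdf of the quantization error for an edge segment at arbitrary orientation, combined with normally distributed head position and orientation errors); the additive-variance step shown last is what "integrating" the two error sources amounts to when they are independent.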