A strategy for locating and grasping a target object in an unknown position using a robotic manipulator equipped with a CCD camera is described. Low-level trajectory and joint control during the grasping operation is handled by the manipulator's conventional motion controller, using target-pose data provided by an artificial-neural-network-based vision system. The Feature CMAC is a self-organizing neural network that efficiently transforms images of a target into estimates of its location and orientation. The approach emulates biological systems in that it begins with simple image features (e.g., corners) and successively combines them to form more complex features in order to determine object position. Knowledge of camera parameters, camera position, and object models is not required, since that information is incorporated into the network during a training procedure in which the target is viewed in a series of known poses. The manipulator is used to generate the training images autonomously. No training of connection weights is required; instead, training serves only to define the network topology, which requires just one pass through the training images. Experiments validating the effectiveness of the strategy on an industrial robotic workcell are presented.
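The one-pass, weight-free training scheme described above can be illustrated with a toy sketch: image features are coarse-coded into discrete cells (in the spirit of a CMAC), and each cell simply records the known poses seen during training, so a later query returns the average stored pose for its cell. All names, the cell size, and the averaging rule here are illustrative assumptions, not the paper's actual Feature CMAC implementation.

```python
# Toy sketch of one-pass, lookup-table pose estimation (CMAC-flavored).
# Hypothetical illustration only; not the paper's Feature CMAC.
from collections import defaultdict


def quantize(features, cell=0.1):
    """Coarse-code a feature vector into a hashable cell index."""
    return tuple(int(f / cell) for f in features)


class TableLookupPoseEstimator:
    def __init__(self, cell=0.1):
        self.cell = cell
        # Maps a quantized feature cell to the list of poses seen there.
        self.table = defaultdict(list)

    def train(self, features, pose):
        """One pass over training data: store, never adjust weights."""
        self.table[quantize(features, self.cell)].append(pose)

    def estimate(self, features):
        """Average the poses recorded in the matching cell, or None if unseen."""
        poses = self.table.get(quantize(features, self.cell))
        if not poses:
            return None
        n = len(poses)
        return tuple(sum(p[i] for p in poses) / n for i in range(len(poses[0])))
```

A usage pattern mirroring the paper's setup would be to drive the manipulator through known poses, record the extracted image features at each, call `train` once per view, and then call `estimate` on features from a novel image.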