A visual sensing system is used mainly to estimate the misalignment between mating parts, the recognition of which is an integral part of any assembly process. This recognition, however, requires information on the state of the misalignment, including the shapes of the parts in mating motion and the instantaneous relative position and angular orientation between them. Normally, this information is provided in advance by an operator to facilitate the assembly action. Recognizing the assembly state in sequence without operator intervention therefore requires an effective sensing system and an algorithm that work well even without a priori information on part shape and location. In this paper, we propose a novel system that can assemble parts in such uncertain environments. The system, composed of an omnidirectional sensing module and a recognition module, acquires information on the sequential state of the assembly motion, from which the instantaneous relative location and orientation between the mating parts can be determined. Since the system does not rely on a priori knowledge of the shape of the mating parts, it greatly reduces the degree of human intervention, thereby increasing autonomy and flexibility. To evaluate the performance of the proposed system, a series of assembly experiments is performed. The results demonstrate the effectiveness of the proposed system in vision-guided assembly.