A basic theory is proposed for detecting multiorientation fields from images that have multiple orientations at each point in the field. This theory, which represents a multiorientation field by a single set of fundamental constraint equations, differs from conventional methods that use many filters tuned to different orientations, and also from the steerable filters proposed by Freeman and Adelson [12]. It makes it unnecessary to implement either a selection process for orientation-tuned filters with high-magnitude output or a search process for an extreme-value output of steerable filters. Instead, it allows analytical solutions to be derived that explicitly compute multiple orientations. The theory is derived from an operational formalism of the principle of linear superposition. Conventional methods suffer from interference between component signals when multiorientation signal components fall within the tuning range of each filter; as a result, they require filters with a narrow tuning range. The algorithms derived from our theory are not adversely affected by such interference. By using these algorithms in a multiscale image representation, characteristic image structures such as junctions can be extracted from low-order image derivatives. The theory is expected to provide a theoretical foundation for the representation of multiple orientations by the hypercolumn structure in the primary visual cortex of the brain.
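As an illustrative sketch only (not the paper's actual derivation), the superposition idea can be demonstrated for the two-orientation case: if an image is an additive superposition of two patterns, each constant along a unit direction u_i = (cos θ_i, sin θ_i), then applying both directional derivatives in sequence annihilates the image, (u_1·∇)(u_2·∇)f = 0. Expanding this gives a single linear constraint on the second derivatives, a f_xx + b f_xy + c f_yy = 0, whose coefficient triple (a, b, c) factors into the two orientations as the roots of the quadratic a t² + b t + c = 0, with t = −tan θ. The function name, synthetic test image, and least-squares null-vector estimation below are assumptions chosen for the sketch:

```python
import numpy as np

def double_orientation(f):
    """Estimate two orientations (degrees, in [0, 180)) from second
    derivatives of f, assuming f is an additive superposition of two
    patterns, each constant along one direction.  Illustrative sketch of
    the superposition constraint, not the paper's exact algorithm."""
    # First and second finite-difference derivatives (axis 0 = y, axis 1 = x).
    fy, fx = np.gradient(f)
    fxy, fxx = np.gradient(fx)   # d(fx)/dy, d(fx)/dx
    fyy, _ = np.gradient(fy)     # d(fy)/dy
    # Use interior pixels only, away from one-sided boundary differences.
    s = (slice(4, -4), slice(4, -4))
    M = np.stack([fxx[s].ravel(), fxy[s].ravel(), fyy[s].ravel()], axis=1)
    # The constraint a*fxx + b*fxy + c*fyy = 0 holds at every pixel, so
    # (a, b, c) is the null vector of M: the right-singular vector with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    a, b, c = Vt[-1]
    # Factor the quadratic a t^2 + b t + c; its roots are -tan(theta_i).
    roots = np.real(np.roots([a, b, c]))
    return np.sort(np.degrees(np.arctan(-roots)) % 180.0)
```

A usage example: superpose two sinusoidal gratings constant along 30° and 110°, call `double_orientation`, and both angles are recovered from one set of constraints, with no bank of orientation-tuned filters and no interference between the two components.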