Abstract paintings are produced by artists based on their concepts, employing color, texture, and other techniques. Determining the correct orientation of abstract paintings is challenging given their ambiguous nature. Previous studies on image orientation recognition faced three major difficulties. First, they relied heavily on pre-existing convolutional neural network models, such as VGG and AlexNet. Second, they focused largely on a single task: recognizing image orientation. Finally, ground truth data concerning the visual perception regions of images were often obtained through manual annotation. To overcome these issues, we introduce OC-OD, a multitask approach that fuses multiple feature layers for better performance. The orientation classification (OC) subtask is the primary task, whereas the visual perception region detection (OD) subtask is auxiliary. OC and OD share the same feature extraction layers, and OD serves to improve the accuracy of OC. Moreover, the ground truth data used in OD are obtained from gaze fixation density maps recorded by an eye tracker while subjects view the images, rather than through manual annotation. Two datasets were chosen to compare the training impact of various model parameters and architectures. Extensive comparison of the experimental results shows that our proposed approach significantly improves orientation recognition accuracy and outperforms other state-of-the-art methods.
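The shared-backbone, two-head design described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual network: the layer widths, the four-way orientation labels, the flattened density-map size, and the concatenation-based feature fusion are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_stage(x, w):
    # Toy stand-in for one convolutional feature-extraction stage (ReLU activation).
    return np.maximum(x @ w, 0.0)

# Hypothetical dimensions (not from the paper).
d_in, d_f1, d_f2 = 64, 32, 16
n_orient = 4          # assumed orientation classes, e.g. 0/90/180/270 degrees
map_size = 8 * 8      # assumed flattened gaze-density map

w1 = rng.normal(size=(d_in, d_f1)) * 0.1
w2 = rng.normal(size=(d_f1, d_f2)) * 0.1
# Both heads read the fused (concatenated) multi-layer features.
w_oc = rng.normal(size=(d_f1 + d_f2, n_orient)) * 0.1   # OC head weights
w_od = rng.normal(size=(d_f1 + d_f2, map_size)) * 0.1   # OD head weights

def forward(x):
    f1 = feature_stage(x, w1)                   # shallow shared features
    f2 = feature_stage(f1, w2)                  # deeper shared features
    fused = np.concatenate([f1, f2], axis=-1)   # multi-layer feature fusion
    oc_logits = fused @ w_oc                    # primary task: orientation logits
    od_density = fused @ w_od                   # auxiliary task: density-map regression
    return oc_logits, od_density

x = rng.normal(size=(2, d_in))                  # batch of two toy image feature vectors
oc_logits, od_density = forward(x)
print(oc_logits.shape, od_density.shape)        # (2, 4) (2, 64)
```

In training, the auxiliary OD loss (against eye-tracker density maps) would be added to the OC classification loss, nudging the shared features toward perceptually salient regions.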