The development of technology and the increasing prevalence of solitary living have turned non-humanoid robots, such as robotic sweepers and mechanical pets, into potential sources of emotional support. Nevertheless, most existing non-humanoid robots are task-oriented and lack expressive features such as facial expressions and sound. Prior research on robot motion design has emphasized the details of human motion while devoting less attention to universal emotional expression factors and methods rooted in human recognition patterns. As a first step, we propose a theoretical framework and a set of holistic expression factors based on Gestalt theory and stimulus-organism-response (SOR) theory; these factors comprise vertical and horizontal motion direction, stimulation, and vertical repetition. We then conducted animation simulation tests to verify and examine the contribution of each factor to the recognition of emotional expressions. The results indicate that both vertical and horizontal movements can convey emotional valence. When both are present, however, neither direction dominates valence recognition; instead, the recognized valence is shaped by the combined effects of stimulation, vertical repetition, and movement direction. Moreover, non-humanoid robots can display recognizable emotional content when these holistic expression factors are applied. This framework can serve as a universal guide for emotional expression tasks in non-humanoid robots and supports the hypothesis that Gestalt theory is applicable to dynamic emotion recognition tasks. These findings also offer a new holistic perspective for designing emotional expression methods for robots.