Currently, convolutional neural networks (CNNs) and transformers are the dominant paradigms for change detection (CD) thanks to their powerful local and global feature extraction capabilities. However, as image resolution improves, the spatial, spectral, and temporal relationships among objects in remote sensing images become more complicated and cannot be modeled effectively by existing methods. To capture these high-order complex relationships, we propose a multiview hypergraph fusion network (MVHFNet) for CD, in which the high-order relationships along the spatial, spectral, and temporal views are extracted by hypergraph learning. Specifically, the network is composed of three branches: the spectral hypergraph learning (SpeHGL) branch, the spatial hypergraph learning (SpaHGL) branch, and the temporal hypergraph learning (TemHGL) branch. In these branches, multiview features are extracted by different attention modules, and hypergraph learning, consisting of hypergraph construction and hypergraph convolution, is applied to these features to model the high-order relationships. Then, to integrate the multiview features from the different branches, a multiview feature fusion (MVF) module is designed, in which the multiview features are fused and condensed for the subsequent prediction. Finally, the change map is produced by a prediction head. We conduct extensive experiments on three datasets, namely LEVIR-CD, SYSU-CD, and CLCD. The experimental results demonstrate that the proposed MVHFNet achieves better CD performance than several state-of-the-art methods.
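The hypergraph convolution mentioned above is not detailed in this abstract; as a point of reference, a minimal NumPy sketch of the standard spectral hypergraph convolution (the HGNN formulation of Feng et al., X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Θ) is given below. The function name, shapes, and weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hypergraph_conv(X, H, w_e, Theta):
    """One standard hypergraph convolution layer (HGNN form), assumed
    here as an illustration of the operation the abstract references.

    X:     (n_nodes, in_dim)  node (pixel/feature) embeddings
    H:     (n_nodes, n_edges) incidence matrix, H[v, e] = 1 if node v
                              belongs to hyperedge e
    w_e:   (n_edges,)         hyperedge weights
    Theta: (in_dim, out_dim)  learnable projection
    """
    d_v = H @ w_e                      # vertex degrees d(v) = sum_e w(e) H[v,e]
    d_e = H.sum(axis=0)                # hyperedge degrees delta(e) = sum_v H[v,e]
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    De_inv = np.diag(1.0 / d_e)
    W = np.diag(w_e)
    # Propagate node features through hyperedges and project
    return Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta

# Toy example: 4 nodes, 2 hyperedges (one 3-node edge, one 2-node edge)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)
out = hypergraph_conv(X, H, np.ones(2), np.ones((3, 2)))
```

Because a hyperedge can connect many nodes at once, a single such layer mixes information among all members of each hyperedge, which is what lets hypergraph learning model relationships beyond the pairwise ones captured by ordinary graph convolution.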