Deep learning has emerged as a powerful technique for image manipulation localization, as it can automatically recognize the abnormal traces caused by manipulation. However, because manipulation mainly happens in foreground regions, existing methods largely focus on foreground content and neglect the background, which contains complementary signals for fully understanding the image and is meaningful for manipulation localization. We propose a Mutually-Complementary Network (MC-Net), a two-branch network that processes foreground and background features separately. To distill complementary signals from these features, we propose a mutual attentive module composed of self-feature attention and cross-feature attention components, which facilitates communication between the foreground and background branches. Extensive qualitative and quantitative experiments demonstrate that the proposed MC-Net distinctly improves the prediction of both foreground and background, obtains consistent performance gains on four benchmark data sets, and significantly outperforms state-of-the-art methods.
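The mutual attentive module described above can be illustrated with a minimal numerical sketch. This is not the authors' implementation; it only assumes a generic dot-product attention form, in which each branch first attends over its own features (self-feature attention) and then attends over the other branch's features (cross-feature attention), with residual connections. All function names, shapes, and the residual design are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, context):
    # Scaled dot-product attention with a residual connection.
    # query: (N, C) feature vectors; context: (M, C) feature vectors.
    scores = query @ context.T / np.sqrt(query.shape[1])
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return query + weights @ context            # residual update

def mutual_attentive(fg, bg):
    # Hypothetical sketch of the module: self-feature attention within
    # each branch, then cross-feature attention across branches.
    fg_sa = attend(fg, fg)        # foreground self-feature attention
    bg_sa = attend(bg, bg)        # background self-feature attention
    fg_out = attend(fg_sa, bg_sa) # foreground attends to background
    bg_out = attend(bg_sa, fg_sa) # background attends to foreground
    return fg_out, bg_out

# Toy example: 4 foreground and 6 background feature vectors of dim 8
# (in practice these would be flattened CNN feature-map locations).
rng = np.random.default_rng(0)
fg = rng.standard_normal((4, 8))
bg = rng.standard_normal((6, 8))
fg_out, bg_out = mutual_attentive(fg, bg)
```

The cross-feature step is where the two branches exchange complementary signals: each branch's output depends on both its own features and the other branch's, while the spatial resolution (number of feature vectors) of each branch is preserved.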