Recent advances in satellite remote sensing and computing technology have significantly impacted practical applications of remote sensing image segmentation. However, prevalent hybrid segmentation models that combine Convolutional Neural Networks (CNNs) and Transformers often overlook the correlations between local and global features across scales, an exploration that is essential for learning more representative features and strengthening context modeling. Additionally, the decoding layers of these models do not effectively exploit the pixel-level semantic relationships within cross-layer feature maps, limiting their ability to discern small-object features. To address these challenges, this paper introduces a Multi-directional and Multi-constraint Learning Network (MMLN) for semantic segmentation of remote sensing imagery. The network features a Multi-directional Dynamic Complement Decoder (MDCD), which enhances the interaction between local and global features in the feature space and thereby improves feature discrimination within the segmentation network. Moreover, a Multi-constraint Saliency Boundary-adaptive Module (MSBM) reinforces the spatial constraints on saliency in edge regions and ensures semantic consistency along mask boundaries, augmenting the model's capability to detect small objects. Evaluations on four datasets show that MMLN outperforms existing state-of-the-art methods for remote sensing imagery segmentation. The code is available at https://github.com/zhongyas/MMLN.