This paper proposes a multi-objective power inspection method based on multiple attention mechanisms and strong semantic feature extraction. Building on the DeepLabv3+ network model, the CBAM attention mechanism is introduced into the MobileNetV2 backbone to enhance contextual information interaction. An ASPP-Attention fast feature fusion structure is proposed, which achieves fast extraction of multi-dimensional valid information by designing depthwise separable convolutions with different receptive fields and enhances pixel-level feature encoding with the CA attention mechanism. A lightweight inverted convolutional decoder structure is also proposed: it improves the feature extraction capability of the model by designing an inverted bottleneck convolutional structure in the two quadruple-downsampling layers with a low number of parameters, and introduces the CA attention mechanism to avoid the heterogeneity gap. During training, transfer learning is used to accelerate model convergence, and Dice Loss is introduced to reduce the effect of sample imbalance on model generalization. The experimental results show that the proposed power segmentation inspection method based on attention and information decoupling reaches an MIoU of 48.5%, an accuracy of 97.5%, and a detection speed of 40.8 FPS, achieving a better balance of speed and accuracy than the HRNet, PSPNet, and DeepLabv3+ network models.
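As a minimal sketch of the Dice Loss mentioned above, the following assumes the standard soft-Dice formulation, 1 − 2|X∩Y| / (|X|+|Y|); the paper's exact implementation details (smoothing constant, multi-class handling) are not given in the abstract, so the names and epsilon below are illustrative:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask.

    pred and target hold per-pixel values in [0, 1]; eps avoids
    division by zero on empty masks. Illustrative sketch only.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

# Perfect overlap gives a loss near 0; disjoint masks give a loss near 1.
mask = np.array([[1, 1], [0, 0]])
print(dice_loss(mask, mask))      # ~0.0
print(dice_loss(mask, 1 - mask))  # ~1.0
```

Because the Dice coefficient is a ratio of overlap to total mask size, the loss is insensitive to the number of background pixels, which is why it helps when foreground samples are scarce.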