Infrared and visible image fusion (IVIF) aims to synthesize a single image that contains rich information by integrating the complementary advantages of the two modalities. However, current methods struggle to preserve both the thermal radiation information of infrared images and the textural detail of visible images. To overcome this shortcoming, we present a fusion framework for IVIF called AMFusionNet. AMFusionNet comprises three modules: a multi-kernel convolution block (MKCBlock), a parallel spatial and channel attention module (PSCNet), and a decoder. Specifically, multi-kernel convolution not only extracts a variety of feature information from the source images but also expands the network's receptive field. To better extract salient information from infrared images, we introduce a parallel attention mechanism that integrates a channel attention module and a spatial attention module in parallel, using the Gaussian Error Linear Unit (GELU) as its nonlinear activation function. To improve the network's ability to preserve detail, a multi-scale structural similarity (MS-SSIM) loss is incorporated into the overall loss function. Experiments on the TNO and FLIR datasets demonstrate that AMFusionNet outperforms competing methods in both objective and subjective evaluations.

INDEX TERMS Infrared and visible image fusion, parallel attention mechanism, multi-kernel convolution, MS-SSIM.
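The parallel arrangement of channel and spatial attention with a GELU activation can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the branch-combination rule (summation), the squeeze-excite style channel branch, the mean/max spatial gate, and the MLP weight shapes are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def gelu(x):
    # GELU nonlinearity (tanh approximation)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention(feat, w1, w2):
    """Apply channel and spatial attention to feat (C, H, W) in parallel.

    The two attended feature maps are summed; this combination rule is an
    assumption for illustration only.
    """
    # Channel branch: global average pool -> 2-layer MLP with GELU -> sigmoid gate
    pooled = feat.mean(axis=(1, 2))               # (C,)
    ca = sigmoid(w2 @ gelu(w1 @ pooled))          # (C,) channel weights
    out_channel = feat * ca[:, None, None]

    # Spatial branch: channel-wise mean + max -> sigmoid gate over locations
    sa = sigmoid(feat.mean(axis=0) + feat.max(axis=0))   # (H, W) spatial weights
    out_spatial = feat * sa[None, :, :]

    # Parallel fusion of the two branches (assumed: elementwise sum)
    return out_channel + out_spatial

# Toy usage with random features and a reduction ratio of 4 (assumed)
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # squeeze: 8 -> 2 channels
w2 = rng.standard_normal((8, 2))   # excite: 2 -> 8 channels
out = parallel_attention(feat, w1, w2)
```

In contrast to the sequential channel-then-spatial ordering used by attention blocks such as CBAM, a parallel arrangement lets each branch gate the original features directly before the results are merged.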