Owing to its validity in measuring the perceptual visibility of image distortion, the just noticeable distortion (JND) model has been widely used in the quantisation-based watermarking framework. However, existing JND models treat every region of the image with an equal attention level. Visual saliency, which reflects visual attention, is therefore introduced to improve the perceptual JND model. Based on the improved model, a logarithmic spread transform dither modulation (STDM) watermarking scheme is proposed. Simulations show that the proposed watermarking scheme with the improved JND model offers superior robustness compared with existing STDM schemes.

Introduction: Visual saliency (VS) determines which areas of an image attract more attention from the human visual system. In general, distortion occurring in an area that attracts the viewer's attention is more annoying than distortion elsewhere [1]. Therefore, VS can be exploited to weight the commonly used just noticeable distortion (JND) model with different attention levels in the watermarking framework. However, limited progress has been made in this research area, mainly because the saliency map computed on the original image is inconsistent with that computed on the watermarked image; consequently, existing VS computational models cannot deliver their full performance in a practical blind watermarking framework.

In this Letter, a novel VS-based watermarking method for monochrome images is proposed, in which a VS model in the discrete cosine transform (DCT) domain is introduced to modulate the perceptual JND model with a new numerical measure. The resulting VS-based JND model is then used to adjust the quantisation step adaptively in the logarithmic spread transform dither modulation (STDM) watermarking framework [2]. Experiments show that the proposed scheme has enhanced robustness against common attacks.
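As background, the STDM principle referred to above can be sketched as follows: the host coefficient vector is projected onto a spreading direction, and the projection is quantised with a bit-dependent dither; in the proposed scheme the quantisation step would be set adaptively from the VS-weighted JND. The sketch below is a minimal illustration of plain STDM only, under assumed conventions (quarter-step dithers, unit spreading vector); the Letter's logarithmic transform and DCT-domain saliency model are not reproduced, and the function names are our own.

```python
import numpy as np

def stdm_embed(x, bit, u, delta):
    """Embed one bit by spread transform dither modulation (STDM).

    x: host coefficient vector; u: unit-norm spreading vector;
    delta: quantisation step (in the Letter this would be derived
    from the saliency-weighted JND model -- here it is a constant).
    """
    p = x @ u                                       # projection onto u
    d = delta / 4.0 if bit else -delta / 4.0        # bit-dependent dither
    q = delta * np.round((p - d) / delta) + d       # dithered quantisation
    return x + (q - p) * u                          # shift x so its projection is q

def stdm_detect(y, u, delta):
    """Blind detection: pick the dither lattice closest to the projection."""
    p = y @ u
    d0, d1 = -delta / 4.0, delta / 4.0
    e0 = abs(p - (delta * np.round((p - d0) / delta) + d0))
    e1 = abs(p - (delta * np.round((p - d1) / delta) + d1))
    return int(e1 < e0)
```

With a unit-norm `u`, the embedded projection lands exactly on the chosen dither lattice, so in the attack-free case the residue for the correct bit is zero while the wrong lattice is half a step away; robustness then depends on how large `delta` can be made without visible distortion, which is precisely what the JND model controls.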