Recently, we presented the SCA-2023 dataset, developed specifically to evaluate the quality of various image precompensation algorithms for observers with imperfect vision. Such precompensation makes it possible to bring their perception of an image closer to that of an observer with ideal vision. While experimenting with various image quality metrics, we found that evaluating the quality provided by different algorithms was not straightforward: the metrics ''voted'' for different things, and their choices often seemed to contradict human perception. This is the key motivation for our study, in which we set out to select the metric best correlated with human perception of precompensated images. We selected a suitable subset of our SCA-2023 dataset and, based on it, created 90 grayscale images, which were shown to our colleagues in a pairwise comparison setting. More than 2,000 pairwise comparison results were collected from 24 study participants. Then, following our original methodology, these results were compared with the ''opinions'' of several popular quality metrics, which allowed us to rank the metrics by their adequacy for this task. Finally, we showed how these results can be used in optimization procedures aimed at improving the quality of precompensation.