Visualisation of high-dimensional data is typically formulated as a non-linear mapping from a high-dimensional space to a two-dimensional latent space. The goal is for similar data items to be projected to nearby coordinates in the latent space. However, because of the non-linearity, data items that are distant in the high-dimensional space may be projected close to each other in the latent space. Magnification factors must therefore be analysed in order to detect stretches and contractions in the embedded latent space. Such factors, however, may not be straightforward to communicate to practitioners who are unaware of these distortions and care only about an accurate depiction of which data items are similar. The goal of this work is to devise a more convenient visualisation that corrects for magnifications and thus depicts the true distances between data items more faithfully. We present an approach based on a multidimensional scaling technique that corrects the obtained visualisations by distorting them according to local magnification factors. The approach is general, and we demonstrate it on several visualisation algorithms, including GTM extensions, autoencoders, and the GP-LVM, as well as on different types of data.
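The following is a minimal illustrative sketch, not the paper's exact procedure: it assumes we already have 2-D latent coordinates (e.g. from a GTM or GP-LVM) and a local magnification factor evaluated at each projected point, rescales the latent pairwise distances by the mean magnification at each pair's endpoints (a simplifying assumption), and then re-embeds the points with metric MDS so that the corrected layout reflects the magnified geometry. All names (`latent`, `magnification`, `magnification_corrected_layout`) are hypothetical.

```python
# Hedged sketch: distort a 2-D latent embedding according to local
# magnification factors, then re-embed with metric MDS.  The scaling of
# latent distances by the mean endpoint magnification is an assumption
# made for illustration, not necessarily the paper's formulation.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS


def magnification_corrected_layout(latent, magnification, random_state=0):
    """latent: (N, 2) latent coordinates; magnification: (N,) local factors."""
    d_latent = squareform(pdist(latent))  # pairwise latent-space distances
    # Approximate the "true" distance between two points by scaling the
    # latent distance with the mean magnification at its endpoints.
    scale = 0.5 * (magnification[:, None] + magnification[None, :])
    d_corrected = d_latent * scale
    # Metric MDS (SMACOF) finds 2-D coordinates whose pairwise distances
    # approximate the corrected distances.
    mds = MDS(n_components=2, dissimilarity="precomputed",
              random_state=random_state)
    return mds.fit_transform(d_corrected)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 2))            # stand-in for a learned embedding
    magnification = np.exp(rng.normal(size=100))  # stand-in magnification factors
    corrected = magnification_corrected_layout(latent, magnification)
    print(corrected.shape)  # (100, 2)
```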