“…However, in the standard GAN framework, where the Generator is optimized solely by an adversarial loss, the resulting Generator merely mimics the colors and patterns of the target images without learning the underlying correspondence between the input and the target images, resulting in severe hallucinations at the micro-scale [19]. To overcome this hallucination problem, various pixel-wise loss functions, such as mean absolute error (MAE) [18,21,22,32,36,37,51,56,59], mean squared error (MSE) [18,79], SSIM [31,82], Huber loss [31], reversed Huber loss [23], and color-distance metrics [56], are incorporated into the Generator loss (in addition to the Discriminator loss) to regularize GAN training; these additional loss terms are computed between the virtually generated images and their corresponding ground truth (histochemically stained) images. Moreover, image regularization terms such as total variation [83] have also been exploited in some works to eliminate or suppress different types of image artifacts created by the Generator [18,20–22,31].…”
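As an illustrative sketch (not from the cited works; the function names and loss weights `lam_mae` and `lam_tv` are hypothetical), the composite Generator objective described above, i.e., an adversarial term regularized by a pixel-wise MAE term and a total-variation penalty, could be assembled as follows:

```python
import numpy as np

def mae_loss(pred, target):
    # Pixel-wise mean absolute error between the virtually generated
    # image and its histochemically stained ground truth
    return np.mean(np.abs(pred - target))

def total_variation(img):
    # Anisotropic total variation: mean absolute difference between
    # neighboring pixels, which suppresses high-frequency artifacts
    dh = np.abs(np.diff(img, axis=0)).mean()
    dw = np.abs(np.diff(img, axis=1)).mean()
    return dh + dw

def generator_loss(adv_loss, pred, target, lam_mae=100.0, lam_tv=0.1):
    # Hypothetical weighting: the adversarial loss plus a strongly
    # weighted MAE term and a lightly weighted TV regularizer
    return (adv_loss
            + lam_mae * mae_loss(pred, target)
            + lam_tv * total_variation(pred))

# Toy 2x2 "images" to exercise the composite loss
pred = np.array([[0.0, 1.0], [1.0, 0.0]])
target = np.array([[0.0, 1.0], [1.0, 1.0]])
loss = generator_loss(adv_loss=0.5, pred=pred, target=target)
```

In practice the pixel-wise term is usually weighted much more heavily than the TV term, since the former anchors the output to the ground truth while the latter only smooths residual artifacts.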