“…The successful functioning of supervised learning relies on the ability to minimise a loss function. In recent years, various methods of analysis have been proposed to explore the loss functions of neural networks in the space defined by their weights through visualisation and parametrisation techniques [27]. For example, in [22], Li et al. found that adding skip connections can smooth the loss landscape and thereby make it easier to minimise.…”
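The kind of loss-landscape exploration described above can be sketched with a one-dimensional slice through weight space: evaluate the loss along a random direction from a trained minimiser, L(α) = loss(w* + α·d). The toy linear-regression task, the variable names, and the plain normalisation of the direction are illustrative assumptions, not the method of [22], which uses per-filter normalisation on deep networks.

```python
import numpy as np

# Illustrative sketch (not the exact method of Li et al.): visualise the loss
# along a random direction in weight space, L(alpha) = loss(w* + alpha * d).
rng = np.random.default_rng(0)

# Toy supervised task: linear regression with a mean-squared-error loss.
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=100)

def loss(w):
    residual = X @ w - y
    return float(np.mean(residual ** 2))

# "Trained" weights: the least-squares minimiser of this convex problem.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

# Random direction, normalised so slices are comparable across runs.
d = rng.normal(size=5)
d /= np.linalg.norm(d)

# One-dimensional slice of the loss surface around the minimiser.
alphas = np.linspace(-1.0, 1.0, 21)
profile = [loss(w_star + a * d) for a in alphas]
```

For a deep network the same idea applies with two directions (a 2-D contour plot) and per-filter normalisation of each direction, which is what makes the smoothing effect of skip connections visible.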