Deep neural networks have been widely applied over the past decade, yet despite their fruitful applications, the mechanism behind them remains to be elucidated. We study the learning process using a very simple supervised encoding problem. As a result, we find a simple law in the training response that describes the neural tangent kernel: the response consists of a power-law-like decay multiplied by a simple response kernel. From this law we construct a simple mean-field dynamical model that explains how the network learns. During learning, the input space is split into subspaces through competition between the kernels. Through the iterated splits and the aging, the network gains complexity but finally loses its plasticity.
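The claimed form of the training response can be sketched numerically. The snippet below is an illustrative guess, not the paper's model: it assumes the change in network output at input `x`, after `t` training steps on a single training input `x_train`, factorizes into a power-law decay in `t` times a Gaussian response kernel over the input space. The exponent `alpha` and kernel width `sigma` are assumed values.

```python
import numpy as np

def training_response(x, x_train, t, alpha=0.5, sigma=1.0):
    """Hypothetical training response: change in the network output at
    input x after t training steps on the single training input x_train,
    modelled as (power-law decay in t) * (response kernel over inputs)."""
    decay = t ** (-alpha)                                     # power-law-like decay
    kernel = np.exp(-(x - x_train) ** 2 / (2 * sigma ** 2))  # response kernel
    return decay * kernel

# The response is largest at the trained input and fades with training time.
x = np.linspace(-3, 3, 7)
resp = training_response(x, x_train=0.0, t=10.0)
```

In this factorized form, the kernel fixes *where* in input space the response is felt, while the decay fixes *how strongly* it is felt as training proceeds, which is the separation the abstract's "law" describes.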
The trade-off between the number of friendships and the closeness of friendships in humans arises from the limited time and cognitive capacity available for communication. This trade-off distinguishes asynchronous text communication over the internet (lightweight communication) from face-to-face communication and the social grooming of primates (elaborate communication). This study modelled communication as messaging flows driven by edge and node activations to investigate the micro-mechanisms that realise the trade-off law and the differences between the two types of communication. We observed the emergence of five patterns of social structure depending on the strengths of the two types of activation, namely edge and node activations. The two patterns that reproduce statistics known from empirical studies, such as the trade-off and power-law distributions of closeness, emerged around a threshold between elaborate and lightweight communications, where network structures changed qualitatively. Shifting the balance between edge and node activations moves one pattern (elaborate communication) to the other (lightweight communication). Consequently, relation networks that communicate through lightweight communication become less clustered. These results suggest how communication systems construct different social structures, e.g., the impact of popularising the internet.
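A toy version of such a messaging-flow model can be sketched as follows. The mechanics here are illustrative assumptions, not the paper's specification: an "edge activation" re-uses an existing tie (deepening its closeness), while a "node activation" lets a node message a uniformly random partner (possibly creating a new tie). The parameter `p_edge` sets the balance between the two.

```python
import random

def simulate(n_nodes=50, n_steps=5000, p_edge=0.7, seed=0):
    """Toy messaging-flow simulation under assumed activation rules.
    Returns a dict mapping ties (i, j) to message counts (closeness)."""
    rng = random.Random(seed)
    closeness = {}  # (i, j) with i < j -> number of messages along that tie
    for _ in range(n_steps):
        if closeness and rng.random() < p_edge:
            # edge activation: an existing tie fires again
            tie = rng.choice(list(closeness))
        else:
            # node activation: a node contacts a random other node
            i, j = rng.sample(range(n_nodes), 2)
            tie = (min(i, j), max(i, j))
        closeness[tie] = closeness.get(tie, 0) + 1
    return closeness

ties = simulate()
```

With a high `p_edge`, messages concentrate on a few close ties (elaborate communication); with a low `p_edge`, they spread thinly over many weak ties (lightweight communication), which is the qualitative shift the abstract describes.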
Generative adversarial networks (GANs) are popular deep neural networks for generative modeling in the field of artificial intelligence. In generative modeling, the goal is to output a sample given random numbers as input, and an artificial neural network is trained on a training data set for this purpose. GANs are known for astonishingly fruitful demonstrations, but they are notoriously difficult to train because of their complex training dynamics. Here, we introduce an ecological analogy for the training dynamics. With a simple ecological model, the dynamics can be understood, and a controller for the training can be designed based on this understanding. We then demonstrate how the network and the controller work in an ideal case, MNIST.
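One generic way to make the ecological analogy concrete, not necessarily the model used in the paper, is a Lotka-Volterra-style predator-prey system: the discriminator's skill `d` "preys" on the generator's error `g`. All coefficients below are illustrative assumptions.

```python
def step(g, d, dt=0.01, a=1.0, b=0.5, c=0.5, e=1.0):
    """One Euler step of a predator-prey sketch of GAN training:
    g = generator error (prey), d = discriminator skill (predator)."""
    dg = ( a * g - b * g * d) * dt   # error grows, but is culled by the discriminator
    dd = (-e * d + c * g * d) * dt   # skill decays without generator error to exploit
    return g + dg, d + dd

g, d = 1.0, 0.5
traj = []
for _ in range(2000):
    g, d = step(g, d)
    traj.append((g, d))
# The pair (g, d) cycles around an equilibrium instead of converging,
# mirroring the oscillatory instability often seen in GAN training.
```

The appeal of the analogy is that such ecological systems have well-understood cycles and equilibria, so a controller can, in principle, be designed to damp the oscillations rather than fight them blindly.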