In this paper, we investigate the Pell and Perrin sequences and derive some relationships between these sequences and the permanents and determinants of a certain type of Hessenberg matrix.
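As a hedged illustration (the paper's specific Hessenberg family is not reproduced here), the sketch below uses the classical tridiagonal special case: the n x n matrix with 2 on the diagonal, 1 on the superdiagonal, and -1 on the subdiagonal has determinant P_{n+1}, since cofactor expansion along the last row yields the Pell recurrence D_n = 2 D_{n-1} + D_{n-2}.

```python
import numpy as np

def pell(n):
    """Pell numbers: P_0 = 0, P_1 = 1, P_n = 2*P_{n-1} + P_{n-2}."""
    p = [0, 1]
    for _ in range(2, n + 1):
        p.append(2 * p[-1] + p[-2])
    return p[n]

def pell_tridiagonal(n):
    """n x n tridiagonal matrix with 2 on the diagonal, 1 above, -1 below.
    Expanding det along the last row gives D_n = 2*D_{n-1} + D_{n-2},
    the Pell recurrence, so det(H_n) = P_{n+1}."""
    return (2 * np.eye(n)
            + np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

for n in range(1, 8):
    d = round(np.linalg.det(pell_tridiagonal(n)))
    assert d == pell(n + 1)
    print(n, d)
```

The permanent of the variant with +1 on both off-diagonals satisfies the same recurrence (permanent expansion along the last row has no sign change), which is one way the permanent identities of such matrices arise.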
Commercial transport planning, as well as individual intra-city and inter-city traffic in densely populated regions of both Europe and the US, increasingly suffers from congestion, to an extent that substantially affects predictable transport planning (except, so far, for overnight tours). Because congestion forms and dissolves highly dynamically, no static approach such as shortest-path finding, whether applied globally or in individual car navigators, is adequate here; as can frequently be observed, its use even makes matters worse. In this paper we present a completely decentralized multi-agent approach, termed BeeJamA, operating on multiple layers, in which car and truck routing is handled by algorithms adapted from the BeeHive algorithms, which were in turn derived from honey bee behavior. We report on extensive distributed simulation experiments in the BeeJamA project that demonstrate very substantial improvements over traditional congestion handling.
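As a rough, hedged sketch of the general idea (the class, the smoothing update, and the selection rule below are illustrative assumptions, not the BeeJamA algorithm itself): each node maintains delay estimates toward destinations via its neighbors, agents carry back fresh delay reports, and vehicles choose next hops probabilistically so that traffic spreads over alternatives instead of piling onto a single static shortest path.

```python
import random

class Node:
    """One routing node in a decentralized, BeeHive-inspired scheme.
    Everything here is an illustrative assumption, not the BeeJamA algorithm."""

    def __init__(self, name):
        self.name = name
        self.link_delay = {}   # neighbor -> currently observed link delay (s)
        self.estimate = {}     # (neighbor, destination) -> estimated total delay (s)

    def report(self, neighbor, destination, downstream_delay, alpha=0.3):
        """Blend a delay report carried back by an agent into the running
        estimate; exponential smoothing keeps the table adaptive."""
        key = (neighbor, destination)
        fresh = self.link_delay[neighbor] + downstream_delay
        self.estimate[key] = (1 - alpha) * self.estimate.get(key, fresh) + alpha * fresh

    def next_hop(self, destination):
        """Pick a neighbor at random, weighted by inverse estimated delay,
        so load spreads over good alternatives rather than one fixed path."""
        options = [(nb, d) for (nb, dst), d in self.estimate.items() if dst == destination]
        neighbors, delays = zip(*options)
        return random.choices(neighbors, weights=[1.0 / d for d in delays])[0]

# Usage: node A reaches destination Z via B (30 s link) or C (45 s link).
a = Node("A")
a.link_delay = {"B": 30.0, "C": 45.0}
a.report("B", "Z", downstream_delay=120.0)
a.report("C", "Z", downstream_delay=60.0)
print(a.next_hop("Z"))   # "C" is chosen more often: 105 s beats 150 s
```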
In this paper, we first present a connection between the determinants of tridiagonal matrices and the Lucas sequence. We then obtain a complex factorization of the Lucas sequence by showing how it can be connected to Chebyshev polynomials through the determinants of a sequence of matrices.
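To make the statement concrete, the standard complex factorization of the Lucas numbers reads L_n = \prod_{k=1}^{n} \left(1 - 2i\cos\frac{(2k-1)\pi}{2n}\right); the cosines are exactly the points at which Chebyshev polynomials factor, which is how tridiagonal determinants enter. Whether this matches the paper's formula in every detail is an assumption; the sketch below simply verifies the identity numerically.

```python
import math

def lucas(n):
    """Lucas numbers: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas_product(n):
    """Complex factorization L_n = prod_{k=1}^{n} (1 - 2i*cos((2k-1)*pi/(2n))).
    The factors pair into complex conjugates, so the product is real."""
    p = 1
    for k in range(1, n + 1):
        p *= 1 - 2j * math.cos((2 * k - 1) * math.pi / (2 * n))
    return p

for n in range(1, 10):
    assert round(lucas_product(n).real) == lucas(n)
    print(n, lucas(n), lucas_product(n))
```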
Over-parameterized models, in particular deep networks, often exhibit a double descent phenomenon: as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent arises for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different times, and eliminating this by properly scaling the stepsizes can significantly improve early stopping performance. We show this analytically for (i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and (ii) a two-layer neural network, where the first and second layer each govern a bias-variance tradeoff. Inspired by this theory, we study a five-layer convolutional network empirically and show that eliminating epoch-wise double descent by adjusting the stepsizes of different layers significantly improves early stopping performance.
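A minimal simulation of mechanism (i), under assumptions of our own choosing (the feature scales, noise level, and stepsizes are illustrative, not the paper's setup): two feature groups with very different scales are learned on well-separated timescales under plain gradient descent, so the test error falls in stages and early stopping is awkward; rescaling each feature's stepsize by its inverse squared scale aligns the timescales.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 50, 5000
scales = np.concatenate([np.ones(10), 0.05 * np.ones(10)])  # fast vs. slow feature group
d = scales.size

w_true = rng.normal(size=d) / scales             # each feature carries O(1) signal
X_tr = rng.normal(size=(n_train, d)) * scales
X_te = rng.normal(size=(n_test, d)) * scales
y_tr = X_tr @ w_true + rng.normal(size=n_train)  # noisy training labels
y_te = X_te @ w_true                             # noiseless test targets

def run(rescale_steps):
    w = np.zeros(d)
    # Per-feature stepsize 1/scale^2 equalizes the groups' learning timescales.
    step = 0.05 / scales**2 if rescale_steps else 0.05
    for epoch in range(1, 40001):
        w -= step * (X_tr.T @ (X_tr @ w - y_tr)) / n_train
        if epoch % 5000 == 0:
            print(epoch, round(float(np.mean((X_te @ w - y_te) ** 2)), 3))

run(rescale_steps=False)  # staged decrease: fast group first, slow group much later
run(rescale_steps=True)   # aligned timescales: one descent, earlier good stopping point
```

Plotting the test error over epochs makes the staged behavior visible; whether the intermediate plateau turns into a pronounced bump between the two descents depends on the noise level and on how the signal is split between the groups.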