Motivated by problems in the neural network setting, we study moduli spaces of double framed quiver representations and give both a linear algebra description and a representation-theoretic description of these moduli spaces. We define a network category, whose isomorphism classes of objects correspond to orbits of quiver representations, in which neural networks map input data. We then prove that the output of a neural network depends only on the corresponding point in the moduli space. Finally, we present a different perspective on mapping neural networks with the ReLU activation function to a moduli space, using the symplectic reduction approach to quiver moduli.

Contents

7.1. ReLU neural networks
References
Appendix A. Moduli spaces and neural networks
A.1. Machine learning concepts
A.2. Back-propagation
A.3. Combinatorial back-propagation
A.4. Final discussion
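The claim that a network's output depends only on its class in the moduli space can be illustrated numerically in the simplest case: for a two-layer ReLU network, positively rescaling the weights at a hidden neuron (scaling incoming weights by c > 0 and outgoing weights by 1/c) leaves the network function unchanged, since ReLU is positively homogeneous. The sketch below is our own illustration of this well-known invariance, not code from the paper; all variable names are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# A two-layer ReLU network: x -> W2 @ relu(W1 @ x).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input layer -> hidden layer
W2 = rng.normal(size=(2, 4))  # hidden layer -> output layer
x = rng.normal(size=3)

# Rescale hidden neuron i: multiply row i of W1 by c[i] > 0 and
# column i of W2 by 1/c[i]. Because relu(c*t) = c*relu(t) for c > 0,
# the composite function is unchanged.
c = np.array([0.5, 2.0, 3.0, 0.1])
W1s = c[:, None] * W1
W2s = W2 / c[None, :]

y = W2 @ relu(W1 @ x)
ys = W2s @ relu(W1s @ x)
assert np.allclose(y, ys)  # same network function, different weights
```

Networks related by such rescalings lie in the same orbit of the group action and hence correspond to the same point of the moduli space.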