A new network with super-approximation power is introduced. This network is built with Floor ($\lfloor x \rfloor$) or ReLU ($\max\{0,x\}$) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters $N \in \mathbb{N}^+$ and $L \in \mathbb{N}^+$, we show that Floor-ReLU networks with width $\max\{d,\, 5N+13\}$ and depth $64dL+3$ can uniformly approximate a Hölder function $f$ on $[0,1]^d$ with an approximation error $3\lambda d^{\alpha/2} N^{-\alpha\sqrt{L}}$, where $\alpha \in (0,1]$ and $\lambda$ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function $f$ on $[0,1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is $\omega_f\big(\sqrt{d}\, N^{-\sqrt{L}}\big) + 2\,\omega_f(\sqrt{d})\, N^{-\sqrt{L}}$. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of $\omega_f(r)$ as $r \to 0$ is moderate (e.g., $\omega_f(r) \lesssim r^\alpha$ for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially $\sqrt{d}$ times a function of $N$ and $L$ independent of $d$ within the modulus of continuity.
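For readers who want the specialization spelled out, the Hölder bound follows from the general rate by a one-line substitution. This is a worked check, not part of the original abstract, assuming $\omega_f(r) = \lambda r^\alpha$ with $\alpha \in (0,1]$ as in the Hölder case above:
\[
\omega_f\big(\sqrt{d}\, N^{-\sqrt{L}}\big) + 2\,\omega_f(\sqrt{d})\, N^{-\sqrt{L}}
= \lambda d^{\alpha/2} N^{-\alpha\sqrt{L}} + 2\lambda d^{\alpha/2} N^{-\sqrt{L}}
\le 3\lambda d^{\alpha/2} N^{-\alpha\sqrt{L}},
\]
where the last inequality uses $N^{-\sqrt{L}} \le N^{-\alpha\sqrt{L}}$ for $N \ge 1$ and $\alpha \le 1$. This also makes the curse-of-dimensionality claim concrete: the dimension enters only through the factor $d^{\alpha/2}$, i.e., through $\sqrt{d}$ inside the modulus of continuity, while the decay $N^{-\alpha\sqrt{L}}$ depends only on the width and depth hyperparameters.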