This paper introduces a novel regularization approach that improves generalization by perturbing deterministic logical expressions. We incorporate logical inference into deep neural networks using logic gates, and propose stochastic sampling to select an appropriate logic gate from a predetermined set at each node, which amounts to sampling from a categorical distribution. While the Gumbel-softmax relaxation makes such sampling learnable, the Gumbel perturbation is independent of the maximum-index operation (arg max), which makes it difficult to keep sampling consistent and to preserve the order of the original categorical probabilities. To address this issue, we introduce scaled noise into the Gumbel process, followed by normalization of the unnormalized probabilities. By leveraging randomness and introducing stochastic learning into deterministic logical transformations, we demonstrate improved classification accuracy. Extensive evaluations on publicly available datasets, including UCI (Adult and Breast Cancer), MNIST, and CIFAR-10, show that our method outperforms softmax-based logic gate networks. Our contributions advance the training of logic gate-based networks and motivate further developments in deep logic gate network training.
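The core mechanism can be sketched as follows. This is a minimal, illustrative implementation of Gumbel-softmax sampling with a noise-scaling factor, not the paper's exact method; the function name and the `noise_scale` and `tau` parameters are assumptions for illustration.

```python
import numpy as np

def scaled_gumbel_softmax(logits, tau=1.0, noise_scale=0.5, rng=None):
    """Relaxed categorical sample via the Gumbel-softmax trick.

    A noise_scale below 1 shrinks the Gumbel perturbation, making the
    sample more likely to preserve the order of the original logits
    (illustrative of the paper's scaled-noise idea, not its exact scheme).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))            # standard Gumbel(0, 1) noise
    y = (np.asarray(logits) + noise_scale * gumbel) / tau
    y = y - y.max()                         # shift for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()                # softmax over perturbed logits

# Example: relaxed selection over a set of candidate logic gates.
gate_logits = np.array([2.0, 0.5, -1.0, 0.0])
probs = scaled_gumbel_softmax(gate_logits, tau=0.5, noise_scale=0.3,
                              rng=np.random.default_rng(0))
```

At low temperature `tau`, the output approaches a one-hot selection of a single gate while remaining differentiable, which is what enables gradient-based learning of the gate choice.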