In deep neural network recommendation models, a common training practice is to supply both positive and negative examples when constructing the loss function, so that the sharper contrast between positive and negative examples helps the model learn the useful information in the data. Because they focus on hard negatives, traditional negative sampling methods tend to select false negative samples, which leads to overfitting, reduces the generalization ability of the model, and incurs considerable extra computational overhead. In this study, we design a reliable negative-sample fusion generation method named Single-mix. The method revisits the message propagation mechanism of node information in graph neural networks (GNNs) and the embedding and fusion of negative samples during node information aggregation. Single-mix links information compression and flattening to generalization, reducing redundant information and computational overhead while constructing effective negative samples. We also propose choosing the number of mixed layers for single-layer enhancement by exploring the structural characteristics of the dataset. Although Single-mix uses simpler information compression than other GNN-based negative sampling methods, it achieves superior results on several widely used benchmark datasets.
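The abstract does not specify the exact fusion rule, so the following is only a minimal sketch of the general idea of synthesizing a negative by mixing candidate negative embeddings with the positive embedding and keeping the hardest synthetic result. The function name `mix_negatives`, the uniform mixing coefficients, and the inner-product hardness score are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_negatives(pos_emb, neg_embs, alpha=0.5):
    """Illustrative mixup-style negative fusion (assumed, not the
    paper's exact rule): convexly interpolate each candidate negative
    embedding with the positive embedding, then keep the synthetic
    negative with the largest inner product with the positive."""
    # One random mixing weight in [0, alpha) per candidate negative.
    weights = rng.uniform(0.0, alpha, size=(neg_embs.shape[0], 1))
    mixed = weights * pos_emb + (1.0 - weights) * neg_embs
    # Hardness score: similarity of each mixed negative to the positive.
    scores = mixed @ pos_emb
    return mixed[np.argmax(scores)]

d = 8                                  # embedding dimension (toy value)
pos = rng.standard_normal(d)           # positive item embedding
negs = rng.standard_normal((5, d))     # 5 sampled candidate negatives
hard_neg = mix_negatives(pos, negs)
print(hard_neg.shape)
```

In this sketch the mixing weight is kept below `alpha` so the synthetic negative stays closer to real negatives than to the positive, which is one plausible way to make negatives harder without producing false negatives.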