“…By optimizing the model parameters with stochastic gradient descent, the BPLR algorithm gradually reduces the loss function and improves the model's fit on the training set, yielding better personalized ranking results. In practice, model performance and generalization can be further improved by choosing a different optimization algorithm or by adding regularization terms [7]. In addition, reasonable settings for hyperparameters such as the learning rate and batch size are key to optimizing the model.…”