“…Additionally, as the prediction horizon lengthens, the accumulated state-observation error is amplified, potentially yielding suboptimal or even detrimental control performance. To simplify the construction and solution of the state functions, quadratic programming (QP) [36,37], commonly used in MPC solvers, requires linearizing the model via a Taylor expansion [38], which further degrades state-observation accuracy. To fundamentally enhance the state-observation capability of the built-in predictive model, hybrid approaches have been proposed that combine offline training of a state-observation model with online data prediction, among which data-driven MPCs [39,40] are one of the most preferred solutions.…”
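The Taylor-expansion linearization required by QP-based MPC can be sketched as follows. This is a minimal illustration, not the method of the cited works: the dynamics function `f` (a pendulum-like model) and the operating point are hypothetical, and the Jacobians `A` and `B` are estimated by central finite differences to form the linear prediction model that a QP solver consumes.

```python
import numpy as np

# Hypothetical nonlinear discrete dynamics x_{k+1} = f(x_k, u_k)
# (an assumption for illustration; the plant model of the cited
# works is not given in this excerpt).
def f(x, u):
    dt = 0.05
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-9.81 * np.sin(theta) + u[0])])

def linearize(f, x0, u0, eps=1e-6):
    """First-order Taylor expansion of f around (x0, u0):
        f(x, u) ≈ f(x0, u0) + A (x - x0) + B (u - u0)
    A and B are Jacobians estimated by central finite differences;
    a QP-based MPC uses them as its linear prediction model."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Linearize around a small angle; far from this point the linear
# model drifts from f, which is the accuracy loss noted above.
A, B = linearize(f, np.array([0.1, 0.0]), np.array([0.0]))
```

Because the expansion is only first-order, the matrices `A` and `B` are accurate near the chosen operating point; as the predicted trajectory moves away from it over a long horizon, the linearization error compounds, which is one source of the accumulated prediction error the passage describes.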