In obstacle avoidance trajectory planning, the environmental information collected by onboard and roadside sensors must be transmitted to the intelligent vehicle controller over network links such as CAN and DSRC. However, inherent communication constraints such as delay and packet loss lead to obstacle avoidance errors. To this end, a game deep Q-learning (GDQN) obstacle avoidance strategy is proposed that combines deep Q-learning with a game-theoretic reward strategy. The deep Q-learning network models and describes the uncertainty introduced by the communication constraints, while the obstacle avoidance reward strategy integrates traffic-environment rules with vehicle dynamics. In addition, a scene preprocessing algorithm based on the artificial potential field method is proposed, which transforms the search for the optimal obstacle avoidance trajectory over the global scene into a search within a banded region, greatly reducing the required computing power. Experimental results show that, compared with existing research, the proposed method effectively solves the obstacle avoidance trajectory planning problem under network communication constraints and effectively balances traffic safety and vehicle stability during obstacle avoidance.
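The banded-region preprocessing described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: the potential function (quadratic attraction to the goal, inverse-distance repulsion from obstacles within an influence radius `rho0`), the gains `k_att` and `k_rep`, and the `half_width` band parameter are all hypothetical choices used only to show how a global grid search can be narrowed to a band around the low-potential corridor.

```python
import numpy as np

def potential_field(grid_shape, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=3.0):
    """Illustrative artificial potential field on a grid: attractive pull
    toward the goal plus repulsion near obstacles (hypothetical gains)."""
    ys, xs = np.indices(grid_shape)
    pts = np.stack([ys, xs], axis=-1).astype(float)
    # Attractive potential: quadratic in the distance to the goal cell.
    u = 0.5 * k_att * np.sum((pts - np.asarray(goal, float)) ** 2, axis=-1)
    for obs in obstacles:
        d = np.linalg.norm(pts - np.asarray(obs, float), axis=-1)
        mask = d < rho0                      # repulsion only inside rho0
        d_safe = np.maximum(d, 1e-6)         # avoid division by zero
        u[mask] += 0.5 * k_rep * (1.0 / d_safe[mask] - 1.0 / rho0) ** 2
    return u

def banded_region(u, half_width=1):
    """Keep only a band of cells around the lowest-potential row in each
    column, shrinking the trajectory search space from the full grid."""
    band = np.zeros_like(u, dtype=bool)
    rows = np.argmin(u, axis=0)              # low-potential row per column
    for col, row in enumerate(rows):
        lo = max(row - half_width, 0)
        hi = min(row + half_width + 1, u.shape[0])
        band[lo:hi, col] = True
    return band

u = potential_field((10, 20), goal=(5, 19), obstacles=[(5, 10)])
band = banded_region(u, half_width=1)
print(int(band.sum()), u.size)  # band cells vs. full-grid cells
```

With a 10×20 grid the band retains at most three cells per column, so any trajectory search (here it would be the GDQN agent) evaluates only a small fraction of the scene, which is the computational saving the preprocessing step targets.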