Reinforcement learning is one of the most important methodologies in artificial intelligence and has achieved unprecedented success in fundamental decision-making tasks across various fields, including financial decision making, navigation, and robotic control. However, current reinforcement learning faces two well-recognized challenges, namely, entrapment in local optima and weak generalization, which limit its ability to tackle complex and dynamic decision-making tasks. Here, inspired by human intelligence, we present a new framework, called \emph{cognitive escape reinforcement learning} (CERL), which comprises a cognitive escape module and a semantic cognition module, for avoiding local optima and achieving strong generalization. We instantiate CERL in a classical and complex decision-making task, namely, visual navigation. Extensive experiments demonstrate the efficacy of the proposed framework, whose navigation performance is remarkably superior to that of state-of-the-art methods: its navigation success rate exceeds that of the best existing agent by more than 6\%, the required path length is shortened by more than 29\%, and it delivers real-time performance. Our reinforcement learning-based framework opens up a new direction for tackling future complex decision-making tasks rapidly and with high performance.