Many optimal control problems are formulated as two-point boundary value problems (TPBVPs), with the conditions of optimality derived from the Hamilton-Jacobi-Bellman (HJB) equations. In most cases, solving the HJB equations is challenging because the adjoint variables are difficult to guess. This paper proposes two learning-based approaches that find initial guesses of the adjoint variables in real time and can be applied to solve general TPBVPs. When a database of TPBVP solutions and the corresponding adjoint variables under varying boundary conditions is available, a supervised learning method is applied to learn the HJB solutions offline. The trained neural network then provides proper initial adjoint variables for given boundary conditions in real time. When validated TPBVP solutions are not available, a reinforcement learning method is applied to solve the HJB equations by constructing a neural network, defining a reward function, and setting appropriate hyperparameters. The reinforcement-learning-based HJB method learns to find accurate adjoint variables through an updating neural network. Finally, both learning approaches are applied to classical optimal control problems to verify the effectiveness of the learning-based HJB methods.
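The supervised-learning branch described above amounts to regressing from boundary conditions to initial adjoint (costate) variables over a database of solved instances. The following is a minimal sketch of that idea, assuming a synthetic placeholder database and a one-hidden-layer network trained with plain gradient descent; all variable names and the data-generating process are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer network mapping TPBVP boundary
# conditions to an initial guess of the adjoint variables, trained offline.
# In a real setting, (X, Y) would come from converged TPBVP solutions;
# here they are synthetic placeholders.

rng = np.random.default_rng(0)

# Placeholder database: boundary conditions X -> adjoint variables Y.
X = rng.uniform(-1.0, 1.0, size=(512, 4))   # e.g. stacked initial/final states
true_W = rng.normal(size=(4, 2))
Y = np.tanh(X @ true_W)                     # stand-in adjoint targets

# One-hidden-layer regression network, full-batch gradient descent on MSE.
W1 = rng.normal(scale=0.5, size=(4, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 2))
b2 = np.zeros(2)
lr = 0.05

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    P = H @ W2 + b2                         # predicted adjoint variables
    G = 2.0 * (P - Y) / len(X)              # gradient of MSE w.r.t. P
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H**2)          # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"training MSE: {mse:.4f}")
```

At deployment time, evaluating the trained network on a new boundary condition is a single forward pass, which is what makes the real-time initial guess feasible; the predicted adjoint would then seed a conventional TPBVP shooting or collocation solver.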