Focusing on the motion-control problem of a two-link manipulator, this paper proposes a manipulator control approach based on the deep deterministic policy gradient (DDPG) with parameter noise. First, a simulation environment for the manipulator is built. Then three deep reinforcement learning models, namely the deep deterministic policy gradient (DDPG), asynchronous advantage actor-critic (A3C), and distributed proximal policy optimization (DPPO), are established and trained according to the goal setting, state variables, and reward-and-punishment mechanism of the environment model. Finally, motion control of the two-link manipulator is realized. After the three models are compared and analyzed, the parameter-noise DDPG approach is selected for further study to improve its applicability, so as to cut down the debugging time of the manipulator model and reach the goal smoothly. The experimental results indicate that the DDPG approach with parameter noise controls the motion of the two-link manipulator effectively: the convergence speed of the control model increases significantly, and its stability after convergence improves. Compared with the traditional control approach, the parameter-noise DDPG approach offers higher efficiency and stronger applicability.
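As a minimal sketch of the parameter-noise idea referred to above (not the paper's actual implementation), the snippet below perturbs a toy actor's weights directly, rather than adding noise to its output actions, and adapts the noise scale so the perturbed policy stays a fixed distance from the unperturbed one. The `Actor` class, state/action dimensions, and the adaptation constants are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Hypothetical tiny actor: one linear layer mapping a 4-D manipulator
    state (two joint angles and two joint velocities) to 2 joint torques."""
    def __init__(self, state_dim=4, action_dim=2):
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))

    def act(self, state):
        return np.tanh(self.W @ state)  # bounded torque commands

def perturbed_copy(actor, sigma):
    """Parameter noise: add Gaussian noise to the actor's weights,
    instead of adding noise to the output action."""
    noisy = Actor()
    noisy.W = actor.W + rng.normal(scale=sigma, size=actor.W.shape)
    return noisy

def adapt_sigma(sigma, actor, noisy, states, target_dist=0.2):
    """Grow or shrink sigma so the perturbed policy stays roughly
    target_dist away (mean action distance) from the unperturbed one."""
    d = np.mean([np.linalg.norm(actor.act(s) - noisy.act(s)) for s in states])
    return sigma * 1.01 if d < target_dist else sigma / 1.01

actor = Actor()
sigma = 0.1
states = [rng.normal(size=4) for _ in range(32)]
for episode in range(5):
    noisy = perturbed_copy(actor, sigma)  # explore with the noisy weights
    sigma = adapt_sigma(sigma, actor, noisy, states)
```

In a full DDPG loop, the `noisy` actor would collect transitions for the replay buffer while the unperturbed `actor` and the critic are updated by gradient descent; only the exploration mechanism differs from standard action-space noise.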