High-precision control of soft robots is challenging due to their stochastic behavior and material-dependent nature. While reinforcement learning (RL) has been applied in soft robotics, high precision in task execution remains elusive. RL typically requires substantial data for convergence, usually collected in a training environment. Yet, despite exhibiting high accuracy in the training environment, RL policies often fall short in reality due to the training-to-reality gap, a shortfall exacerbated by the stochastic nature of soft robots. This study paves the way for RL-based soft robot control that achieves high precision in task execution. Two sample-efficient adaptive control strategies that leverage the RL policy are proposed. These schemes overcome stochasticity, bridge the training-to-reality gap, and attain the desired accuracy even in challenging tasks such as obstacle avoidance. Additionally, deliberate and reversible damage is induced in the pneumatic actuation chamber, altering the soft robot's behavior, to test the adaptability of the proposed solutions. Despite the damage, the desired accuracy was achieved in most scenarios without retraining the RL policy.