We present a one-shot learning system in which a robot learns to manipulate an object from only the object's name, a single image, and one visual example of a person picking it up. Once a new object has been learned, a spoken command is sufficient to trigger the grasp. Our approach relies heavily on synthetic data generation to train the detection and regression models. We also introduce a novel combined regression model, Cross-Validation Regression with Z-Score (CVR-ZS), which improves the robot's grasp accuracy, as well as a classifier built on a state-of-the-art text encoder that allows flexible user prompts for object retrieval. The complete pipeline comprises the text encoder and classifier, an object detector, and the CVR-ZS regressor, and has been validated on a Niryo Ned robot.
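The abstract does not detail how CVR-ZS combines its regressors. Purely as an illustrative assumption based on the name, the sketch below trains one regressor per cross-validation split and fuses their predictions after rejecting z-score outliers among them; the function name, the least-squares base model, and the threshold are all hypothetical, not the authors' method.

```python
import numpy as np

def cvr_zs_predict(X_train, y_train, X_query, k=5, z_thresh=1.5, seed=0):
    """Hypothetical CVR-ZS sketch: fit one least-squares regressor per
    cross-validation split, then average the per-model predictions after
    discarding z-score outliers among them."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_train))
    folds = np.array_split(idx, k)
    preds = []
    for fold in folds:
        train_idx = np.setdiff1d(idx, fold)          # train on all but this fold
        A = np.c_[X_train[train_idx], np.ones(len(train_idx))]   # bias column
        w, *_ = np.linalg.lstsq(A, y_train[train_idx], rcond=None)
        preds.append(np.c_[X_query, np.ones(len(X_query))] @ w)
    preds = np.stack(preds)                          # shape (k, n_query)
    z = np.abs(preds - preds.mean(0)) / (preds.std(0) + 1e-9)
    mask = z <= z_thresh                             # drop disagreeing predictions
    return (preds * mask).sum(0) / mask.sum(0)
```

With k = 5 and a 1.5-sigma threshold, at least one fold prediction always survives the mask (all five cannot simultaneously exceed 1.5 standard deviations from their own mean), so the averaged output is always defined.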