Objective
Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interaction.

Background
In human-robot teamwork, robots may behave fairly, with dedication (altruistic unfair behavior), or selfishly (self-interested unfair behavior), yet few studies have examined how these behaviors affect teamwork.

Method
This study adopted a 3 (robot's fairness, the independent variable: self-interested unfair, fair, and altruistic unfair behavior) × 3 (robot's social status, the moderator variable: superior, peer, and subordinate) experimental design. Each participant completed the experimental task together with a robot through a computer.

Results
Across the robots' social statuses, the more altruistic a robot's behavior, the more reward behaviors, the fewer punishment behaviors, and the higher the human-robot trust that humans showed. A robot's higher social status weakened the influence of its fairness on humans' punishment behaviors. Human-robot trust increased humans' reward behaviors and decreased their punishment behaviors, and humans' reward-punishment behaviors in turn increased repaired human-robot trust.

Conclusion
Robots' fairness has a significant impact on humans' reward-punishment behaviors and trust. Robots' social status moderates the effect of their fairness on humans' punishment behaviors, and humans' reward-punishment behaviors and trust interact with each other.

Application
These findings help explain the interaction mechanisms of human-robot teams and can inform the management of and cooperation within such teams through appropriate adjustment of robots' fairness and social status.