Trust is vital for effective human-robot teams. Trust is not static, however; it changes over time, declining when robots make mistakes. In such cases, strategies identified in the human-human literature, including apologies, denials, explanations, and promises, can be deployed to repair trust. Whether these strategies work in the human-robot domain, however, remains largely unknown, primarily because the current literature on trust repair in human-robot interaction (HRI) is fragmented and dispersed. This paper therefore brings together studies on trust repair in HRI and presents a more cohesive view of when apologies, denials, explanations, and promises have been shown to repair trust. In doing so, it also highlights gaps in the literature and proposes directions for future work. The paper contributes to the literature in several ways, but primarily it provides a starting point and recommendations for future studies seeking to determine how trust can be repaired in HRI.