This comprehensive review investigates the escalating concern of adversarial attacks on deep learning models, offering an extensive analysis of state-of-the-art detection techniques. Spanning traditional machine learning methods and contemporary deep learning approaches, the review categorizes and evaluates various detection mechanisms while addressing open challenges such as the lack of benchmark datasets and the need for interpretability. Emphasizing the crucial role of explainability and trustworthiness, the paper also explores emerging trends, including the integration of technologies such as explainable artificial intelligence (XAI) and reinforcement learning. By synthesizing existing knowledge and outlining future research directions, this review serves as a valuable resource for researchers, practitioners, and stakeholders seeking a nuanced understanding of adversarial attack detection in deep learning.