Road signs are essential for keeping our roadways safe and effective, so the strengths and weaknesses of automatic road-sign detection solutions must be evaluated carefully. Existing road-sign databases are limited in the variety and severity of the challenging conditions they cover, and because multiple conditions vary simultaneously with no metadata describing them, the impact of any single factor cannot be isolated. In this study, we examine the traffic sign detection and recognition (TSDR) problem under challenging conditions and concentrate on the performance degradation such conditions cause. To this end, we propose a prior-enhancement-focused TSDR framework built on the YOLO principle. Our modular approach consists of a YOLO-based challenge classifier, an encoder-decoder YOLO structure for image enhancement called Enhance-Net, and two separate YOLO architectures for sign detection and classification. We also propose a novel training workflow for Enhance-Net that emphasizes improving the quality of the road-sign regions (rather than the entire image) in challenging images, where accurate recognition depends on those regions. To address the shortcomings of previous datasets, we introduce the CURE-TSD-Real dataset, built on simulated challenging conditions that correspond to adversaries arising in real-world environments and systems, along with a voice assistant.
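The modular flow described above (challenge classification, conditional enhancement, then detection and classification) can be sketched as follows. This is a minimal illustration only: every function name and interface here is a hypothetical stand-in, since the actual modules in the framework are YOLO-based deep networks.

```python
# Hypothetical sketch of the modular TSDR pipeline; images are represented
# as plain dicts instead of tensors for illustration.

def classify_challenge(image):
    # Stand-in for the YOLO-based challenge classifier: returns a
    # challenge label such as "rain", "haze", or "none".
    return "rain" if image.get("degraded") else "none"

def enhance_net(image, challenge):
    # Stand-in for Enhance-Net, trained to restore the sign regions
    # (not the whole frame) of a degraded image.
    restored = dict(image, degraded=False)
    restored["enhanced_for"] = challenge
    return restored

def detect_signs(image):
    # Stand-in for the YOLO detection network: returns candidate boxes.
    return image.get("signs", [])

def classify_sign(image, box):
    # Stand-in for the YOLO classification network: labels one box.
    return box.get("label", "unknown")

def tsdr_pipeline(image):
    challenge = classify_challenge(image)
    if challenge != "none":
        # Enhancement is applied only when a challenge is detected.
        image = enhance_net(image, challenge)
    boxes = detect_signs(image)
    return [classify_sign(image, box) for box in boxes]

frame = {"degraded": True, "signs": [{"label": "stop"}, {"label": "yield"}]}
print(tsdr_pipeline(frame))
```

The design point the sketch captures is modularity: because enhancement is gated by the challenge classifier, clean frames bypass Enhance-Net entirely, and each module can be trained or swapped independently.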