Traffic-sign recognition (TSR) is an essential component of driver-assistance systems: it helps drivers avoid a wide range of potential hazards and improves the driving experience. However, TSR is a challenging real-world task subject to many constraints, such as varying visual environments, physical damage to signs, and partial occlusions. To cope with these constraints, convolutional neural networks (CNNs) are employed to extract visual features of traffic signs and classify them into their corresponding classes. In this project, we first created a benchmark (NZ-Traffic-Sign 3K) for traffic-sign recognition in New Zealand. To determine which deep learning models are most suitable for TSR, we selected two models for our experiments: Faster R-CNN and YOLOv5. Based on their scores across various metrics, we summarize the pros and cons of the selected models for the TSR task.