This paper presents a fully automated system for detecting road signs in the United States and assessing their daytime visibility from the driver's perspective, using images captured by an in-vehicle camera. The system deploys YOLOv8 to build a multi-label detection model and then calculates several readability and detectability factors, including the simplicity of the surroundings, potential obstructions, and the angle at which the road sign is positioned, to determine the overall visibility of the sign. The proposed system can be integrated into Driver Assistance Systems (DAS) to manage the information delivered to drivers, since an excess of information could distract them. Road signs are categorized by visibility level, allowing Driver Assistance Systems to caution drivers about signs that have lower visibility but are of significant importance. The system comprises four main stages: 1) detecting road signs using YOLOv8; 2) segmenting the surrounding areas; 3) measuring visibility parameters; and 4) determining visibility levels through a fuzzy logic inference system. Experimental results demonstrate the system's effectiveness: the visibility levels it produces were compared subjectively with judgments made by human experts, revealing substantial agreement between the two approaches.
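As a rough illustration of how stages 1 and 4 might be wired together, the sketch below pairs the `ultralytics` YOLOv8 API with a scikit-fuzzy inference system. The fuzzy variables (`occlusion`, `angle`), membership functions, rules, and file names are illustrative assumptions for this sketch, not the parameters or model reported in the paper.

```python
# Minimal sketch of the detection -> fuzzy-inference pipeline described above.
# Variable names, membership functions, and rules are illustrative assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl
from ultralytics import YOLO

# Stage 1: detect road signs with a pretrained YOLOv8 model (weights assumed).
model = YOLO("yolov8n.pt")
results = model("dashcam_frame.jpg")  # placeholder in-vehicle camera frame
boxes = results[0].boxes              # bounding boxes, confidences, class ids

# Stage 4: fuzzy inference over two assumed visibility parameters.
occlusion = ctrl.Antecedent(np.arange(0.0, 1.01, 0.01), "occlusion")  # fraction occluded
angle = ctrl.Antecedent(np.arange(0.0, 91.0, 1.0), "angle")           # degrees off frontal
visibility = ctrl.Consequent(np.arange(0.0, 1.01, 0.01), "visibility")

occlusion["low"] = fuzz.trimf(occlusion.universe, [0.0, 0.0, 0.4])
occlusion["high"] = fuzz.trimf(occlusion.universe, [0.3, 1.0, 1.0])
angle["small"] = fuzz.trimf(angle.universe, [0.0, 0.0, 40.0])
angle["large"] = fuzz.trimf(angle.universe, [30.0, 90.0, 90.0])
visibility["poor"] = fuzz.trimf(visibility.universe, [0.0, 0.0, 0.5])
visibility["good"] = fuzz.trimf(visibility.universe, [0.5, 1.0, 1.0])

rules = [
    ctrl.Rule(occlusion["low"] & angle["small"], visibility["good"]),
    ctrl.Rule(occlusion["high"] | angle["large"], visibility["poor"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))

# Score each detected sign; the per-sign measurements that stages 2 and 3
# would provide are stubbed out with placeholder values here.
for box in boxes:
    sim.input["occlusion"] = 0.2  # placeholder measurement
    sim.input["angle"] = 15.0     # placeholder measurement
    sim.compute()
    print(f"class={int(box.cls)} visibility={sim.output['visibility']:.2f}")
```

In a full implementation, stages 2 and 3 (segmentation of the surroundings and visibility-parameter measurement) would supply measured occlusion fractions and viewing angles for each detection in place of the placeholder inputs, and the crisp output could be binned into the visibility levels used to alert the driver.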