In the present article, we introduce a foveation-based optimized embedded image coder and its visually optimized variant, hereafter called VOEFIC and MOEFIC, together with an associated foveated wavelet visible difference predictor (FWVDP) quality metric. The coder applies a psycho-visually derived foveal weighting mask that reshapes the wavelet spectrum of the image before encoding with the SPIHT encoder. The goal is to reach a target compression rate with a significant quality improvement for a given bit budget, viewing distance, and foveal fixation point that locates the object in the region of interest (ROI). The coder combines two masking stages built on human psycho-visual quality criteria: it applies the foveal model to weight the source wavelet coefficients, reshapes the spectral content, discards or attenuates perceptually redundant information, and thereby enhances visual quality. The foveal weighting mask is computed within the wavelet subbands as follows. First, a foveal wavelet-domain filter, centered on the fixation point, removes or at least attenuates the frequencies that are imperceptible around the region of interest. Next, the image contrast is adjusted according to wavelet just-noticeable-difference (JND) thresholds, controlling luminance and raising the contrast just above the visibility threshold of distortion. Once refined, the weighted wavelet spectrum is embedded-coded with the standard SPIHT algorithm to meet the desired bit budget. The manuscript also proposes a foveation-based objective quality evaluator built on a psycho-visual criterion modeled on the visual cortex. This evaluator produces a foveal score (FPS) capable of detecting probable errors and objectively measuring compression quality. Note that the foveal coder VOEFIC and its visually enhanced variant MOEFIC have a complexity similar to that of the reference SPIHT coder.
Despite this similar complexity, the reported results highlight the improvement in visual coding and the quality gain achieved over the reference.
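The pipeline described above (foveal weighting of wavelet subbands around a fixation point before embedded coding) can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a one-level Haar transform for the full wavelet decomposition and a simple Gaussian eccentricity fall-off for the psycho-visually derived mask, and it omits the SPIHT encoding stage entirely.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns the LL approximation
    and the (LH, HL, HH) detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def foveal_mask(shape, fixation, sigma):
    """Hypothetical sensitivity mask: Gaussian fall-off with
    eccentricity from the fixation point (a stand-in for the
    psycho-visual model used in the paper)."""
    rows, cols = np.indices(shape)
    ecc = np.hypot(rows - fixation[0], cols - fixation[1])
    return np.exp(-(ecc ** 2) / (2.0 * sigma ** 2))

# Toy image; in the real coder this would be the source picture.
img = np.random.default_rng(0).random((64, 64))

ll, (lh, hl, hh) = haar2d(img)

# Weight the detail subbands: coefficients far from the fixation
# point are attenuated, mimicking foveal acuity fall-off.
mask = foveal_mask(lh.shape, fixation=(16, 16), sigma=8.0)
lh_w, hl_w, hh_w = lh * mask, hl * mask, hh * mask
```

The weighted subbands would then be handed to an embedded coder such as SPIHT; since the attenuated coefficients carry less energy, they are coded later (or not at all) within the available bit budget.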
This paper presents a study on the design of an Artificial-Intelligence-based system for the intelligent recognition of road traffic signs, and also demonstrates the performance of Transfer Learning for object classification in general. When systems are trained on aspects of the human visual system (HVS), which supports or reproduces the same decisions, the result is robust and efficient systems. This helps to mitigate many environmental risks, including weather conditions such as cloudy or rainy weather that obscure the visibility of signs; the main objective, however, is to avoid dangerous road hazards and achieve road safety, such as preventing accidents caused by non-compliance with traffic rules, for both vehicles and passengers. However, simply collecting road-sign images from different places does not solve the problem: an intelligent road-sign classification system is needed to improve people's safety in their environment. This study proposes a traffic road-sign classification system that extracts visual features with a Convolutional Neural Network (CNN) classification model. The model assigns a class to the road-sign image through the classifier with the most efficient optimizer. Its effectiveness is then evaluated according to several criteria, using the confusion matrix and the classification report, with an in-depth analysis of the results obtained on images taken from urban scenes. The results obtained by the system are encouraging in comparison with systems developed in the scientific literature, for example the Advanced Driver Assistance Systems (ADAS) of the automotive sector.
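The abstract mentions evaluating the classifier with a confusion matrix and a classification report. A minimal numpy sketch of those two evaluation tools might look as follows; the toy labels here are hypothetical stand-ins, since the real study would obtain the predictions from the trained CNN on its road-sign test set.

```python
import numpy as np

# Hypothetical toy labels (3 sign classes, 6 test images), standing in
# for the CNN's predictions on the actual road-sign dataset.
y_true = np.array([0, 0, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 2, 2, 0])
n_classes = 3

# Confusion matrix: rows index the true class, columns the predicted class.
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

# Per-class precision, recall, and F1 score, as reported in a
# classification report (guards avoid division by zero).
tp = np.diag(cm).astype(float)
precision = tp / np.maximum(cm.sum(axis=0), 1)
recall = tp / np.maximum(cm.sum(axis=1), 1)
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
```

Off-diagonal entries of `cm` show which sign classes the model confuses with one another, which is often more informative for a safety-critical ADAS setting than overall accuracy alone.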