Traffic sign detection has achieved promising results in recent years. Nevertheless, two problems remain to be overcome. The first is the detection of small traffic signs, which usually occupy less than 2% of the image area. The second is fine-grained classification, which is difficult because many traffic signs have similar appearances; for example, different speed-limit signs differ only in the speed numbers. In this paper, we propose a Feature Aggregation MultiPath Network (FAMN) to tackle both problems simultaneously. First, we propose a Feature Aggregation (FA) structure that aggregates regional features from different feature maps using an element-wise Max, after which convolution layers extract rich semantic features. Objects of different scales can therefore select the best features, improving the detection of small objects. Second, we propose a Multipath Network (MN) structure to obtain fine-grained features. The MN structure consists of three paths that extract instance-level, part-level, and context-level features, respectively; the three types of features are then concatenated to form the fine-grained features of the proposals. Experimental results demonstrate the effectiveness of the proposed FAMN. Specifically, FAMN obtains an average F1-measure of 93.1% on the TT100K dataset, 2.9% higher than the state of the art.
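The element-wise Max aggregation at the core of the FA structure can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation; it assumes the feature maps have already been resized to a common shape, and all array names and values are hypothetical:

```python
import numpy as np

def aggregate_features(feature_maps):
    """Element-wise Max over a list of equally shaped feature maps.

    Sketch of the FA aggregation step: in the paper, the result would
    then be passed through convolution layers to extract rich
    semantic features.
    """
    stacked = np.stack(feature_maps, axis=0)   # (n_maps, C, H, W)
    return stacked.max(axis=0)                 # (C, H, W)

# Hypothetical example: two single-channel 2x2 feature maps
low_level  = np.array([[[0.1, 0.9], [0.4, 0.2]]])
high_level = np.array([[[0.5, 0.3], [0.6, 0.1]]])
agg = aggregate_features([low_level, high_level])
# Each spatial position keeps the strongest response across maps.
```

Because the Max is taken per position, each region effectively "chooses" whichever source map responds most strongly there, which is how objects of different scales can pick the best features.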
Background: The evaluation of refraction is indispensable in ophthalmic clinics and generally requires a refractor or retinoscopy under cycloplegia. Retinal fundus photographs (RFPs) supply a wealth of information about the human eye and might provide a promising approach that is more convenient and objective. Here, we aimed to develop and validate a fusion model-based deep learning system (FMDLS) to identify ocular refraction from RFPs and to compare it with cycloplegic refraction.

Methods: In this population-based comparative study, we retrospectively collected 11,973 RFPs from May 1, 2020 to November 20, 2021. The performance of the regression models for sphere and cylinder was evaluated using the mean absolute error (MAE). The accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, and F1-score were used to evaluate the classification model of the cylinder axis.

Results: Overall, 7873 RFPs were retained for analysis. For sphere and cylinder, the MAE values between the FMDLS and cycloplegic refraction were 0.50 D and 0.31 D, representing an increase of 29.41% and 26.67%, respectively, compared with the single models. The correlation coefficients (r) were 0.949 and 0.807, respectively. For the axis analysis, the accuracy, specificity, sensitivity, and area under the curve of the classification model were 0.89, 0.941, 0.882, and 0.814, respectively, and the F1-score was 0.88.

Conclusions: The FMDLS successfully identified ocular refraction in sphere, cylinder, and axis, and showed good agreement with cycloplegic refraction. RFPs can provide not only comprehensive fundus information but also the refractive state of the eye, highlighting their potential clinical value.
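The MAE reported for the sphere and cylinder regressions is simply the mean of the absolute dioptre differences between the model's predictions and the cycloplegic reference. A minimal sketch, using made-up dioptre values rather than the study's data:

```python
def mean_absolute_error(predicted, reference):
    """Mean absolute error between paired measurements (in dioptres)."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical sphere predictions vs. cycloplegic refraction (D)
pred = [-1.25, -0.50, 2.00, -3.75]
ref  = [-1.00, -0.75, 2.25, -4.25]
mae = mean_absolute_error(pred, ref)   # 0.3125 D
```

A lower MAE means the model's refraction estimates sit closer to the cycloplegic measurements on average, which is why it is the natural summary statistic for the sphere and cylinder regressions.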