The CIECAM02 color‐appearance model has enjoyed popularity in scientific research and industrial applications since the CIE recommended it in 2002. However, computational failures have been found to occur in certain cases, such as during image processing in cross‐media color reproduction applications. Several proposals have been developed to repair the CIECAM02 model; however, all of them retain the structure of the original model and solve the problems concerned at the expense of accuracy in predicting visual data relative to the original model. In this article, the structure of the CIECAM02 model is changed so that the color and luminance adaptations to the illuminant are completed in the same space, rather than in two different spaces as in the original CIECAM02 model. The new model (named CAM16) not only overcomes the previous problems but also predicts the visual results as well as, if not better than, the original CIECAM02 model, and it is simpler than the original model. In addition, when only chromatic adaptation is considered, a new transformation, CAT16, is proposed to replace the previous CAT02 transformation. Finally, the new CAM16‐UCS uniform color space is proposed to replace the previous CAM02‐UCS space. Together these offer a complete new solution for color‐appearance prediction and color‐difference evaluation.
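The key structural idea above — performing the von Kries-style adaptation in a single cone-like space — can be illustrated with a minimal sketch of a CAT16-style transform. This is an illustrative reconstruction, not the authors' code: the matrix values are those published for CAT16, but the function name and interface are assumptions.

```python
import numpy as np

# CAT16 cone-response matrix, as published for the CAT16 transform.
M16 = np.array([
    [ 0.401288, 0.650173, -0.051461],
    [-0.250268, 1.204414,  0.045854],
    [-0.002079, 0.048952,  0.953127],
])

def cat16_adapt(xyz, xyz_w, xyz_wr, D=1.0):
    """Map a tristimulus value seen under source white xyz_w to the
    corresponding value under reference white xyz_wr.

    A single linear space handles the adaptation, in contrast to the
    two-space structure of CAT02 inside CIECAM02.
    """
    rgb, rgb_w, rgb_wr = (M16 @ np.asarray(v, dtype=float)
                          for v in (xyz, xyz_w, xyz_wr))
    # von Kries-style per-channel gain, blended by degree of adaptation D.
    gain = D * rgb_wr / rgb_w + (1.0 - D)
    return np.linalg.inv(M16) @ (gain * rgb)
```

With full adaptation (D = 1), the source white maps exactly onto the reference white; with D = 0, the input is returned unchanged.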
Objectives/Hypothesis: To develop a deep-learning-based computer-aided diagnosis system for distinguishing laryngeal neoplasms (benign, precancerous lesions, and cancer) and improve the clinician-based accuracy of diagnostic assessments of laryngoscopy findings. Study Design: Retrospective study. Methods: A total of 24,667 laryngoscopy images (normal, vocal nodule, polyps, leukoplakia, and malignancy) were collected to develop and test a convolutional neural network (CNN)-based classifier. A comparison between the proposed CNN-based classifier and the clinical visual assessments (CVAs) by 12 otolaryngologists was conducted. Results: In the independent testing dataset, an overall accuracy of 96.24% was achieved; for leukoplakia, benign, malignancy, normal, and vocal nodule, the sensitivity and specificity were 92.8% vs. 98.9%, 97% vs. 99.7%, 89% vs. 99.3%, 99.0% vs. 99.4%, and 97.2% vs. 99.1%, respectively. Furthermore, when compared with CVAs on the randomly selected test dataset, the CNN-based classifier outperformed physicians for most laryngeal conditions, with striking improvements in the ability to distinguish nodules (98% vs. 45%, P < .001), polyps (91% vs. 86%, P < .001), leukoplakia (91% vs. 65%, P < .001), and malignancy (90% vs. 54%, P < .001). Conclusions: The CNN-based classifier can provide a valuable reference for the diagnosis of laryngeal neoplasms during laryngoscopy, especially for distinguishing benign, precancerous, and cancer lesions.
Clustering methods have recently attracted ever-increasing attention in learning and vision. Deep clustering combines embedding and clustering to obtain an optimal embedding subspace for clustering, which can be more effective than conventional clustering methods. In this paper, we propose a joint learning framework for discriminative embedding and spectral clustering. We first devise a dual autoencoder network, which enforces the reconstruction constraint for the latent representations and their noisy versions, to embed the inputs into a latent space for clustering. As such, the learned latent representations can be more robust to noise. Then, mutual information estimation is utilized to provide more discriminative information from the inputs. Furthermore, a deep spectral clustering method is applied to embed the latent representations into the eigenspace and subsequently cluster them, which can fully exploit the relationships between inputs to achieve optimal clustering results. Experimental results on benchmark datasets show that our method significantly outperforms state-of-the-art clustering approaches.
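The final stage described above — embedding latent representations into the eigenspace of a graph Laplacian and clustering there — can be sketched as classical spectral clustering. This is a minimal non-deep stand-in for illustration, not the authors' network: the affinity kernel, `sigma`, and the function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cluster(Z, k, sigma=1.0):
    """Cluster latent embeddings Z (n x d) via spectral embedding + k-means."""
    # Gaussian affinity between latent representations.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    deg = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(Z)) - (W / np.sqrt(deg)[:, None]) / np.sqrt(deg)[None, :]
    # Eigenspace spanned by the k eigenvectors with smallest eigenvalues.
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
```

In the paper's framework, `Z` would be the output of the dual autoencoder rather than raw features; the eigenspace step is what lets the method exploit pairwise relationships between inputs.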
Background: To develop a deep neural network able to differentiate glaucoma from non-glaucoma visual fields based on visual field (VF) test results, we collected VF tests from 3 different ophthalmic centers in mainland China. Methods: Visual fields obtained by both Humphrey 30–2 and 24–2 tests were collected. Reliability criteria were established as fixation losses of less than 2/13 and false positive and false negative rates of less than 15%. Results: We split a total of 4012 PD images from 1352 patients into two sets: 3712 for training and another 300 for validation. There was no significant difference in the left-to-right ratio (P = 0.6211), while age (P = 0.0022), VFI (P = 0.0001), MD (P = 0.0039), and PSD (P = 0.0001) exhibited obvious statistical differences. On the validation set of 300 VFs, the CNN achieved an accuracy of 0.876, with a specificity of 0.826 and a sensitivity of 0.932. For ophthalmologists, the average accuracies were 0.607, 0.585, and 0.626 for resident ophthalmologists, attending ophthalmologists, and glaucoma experts, respectively. AGIS and GSS2 achieved accuracies of 0.459 and 0.523, respectively. Three traditional machine learning algorithms, namely support vector machine (SVM), random forest (RF), and k-nearest neighbor (k-NN), were also implemented and evaluated in the experiments, achieving accuracies of 0.670, 0.644, and 0.591, respectively. Conclusions: Our CNN-based algorithm achieved higher accuracy than human ophthalmologists and traditional rules (AGIS and GSS2) in differentiating glaucoma from non-glaucoma VFs. Electronic supplementary material: The online version of this article (10.1186/s12880-018-0273-5) contains supplementary material, which is available to authorized users.
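The three traditional baselines named above (SVM, RF, k-NN) can be sketched with scikit-learn. This is an illustrative setup on synthetic data, not the paper's experiment: the feature dimension (52 test points, standing in for flattened pattern-deviation maps), the simulated sensitivity loss, and all variable names are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for flattened PD maps: 52 test-point values per field.
X_normal = rng.normal(0.0, 1.0, size=(200, 52))
X_glauc = rng.normal(-0.8, 1.0, size=(200, 52))  # hypothetical diffuse loss
X = np.vstack([X_normal, X_glauc])
y = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
}
acc = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
       for name, m in models.items()}
```

The paper's comparison follows the same pattern, with the CNN trained on the 2-D PD images directly rather than on flattened vectors.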
New elastographic techniques such as poroelastography and viscoelasticity imaging aim at imaging the temporal mechanical behavior of tissues. These techniques usually involve curve-fitting methods applied to noisy data to estimate new elastographic parameters. To date, however, elastographic implementations of poroelastography and viscoelasticity imaging methods are in general too slow and not optimized for clinical applications. Furthermore, the image quality performance of these new elastographic techniques is still largely unknown, due to a paucity of data and the lack of systematic studies analyzing their performance limitations. In this paper, we propose a new elastographic time constant (TC) estimator based on the least-squares error (LSE) curve-fitting method and the Levenberg-Marquardt (LM) optimization rule, applied to noisy elastographic data obtained from a material in a creep-type experiment. The algorithm is executed on a massively parallel general-purpose graphics processing unit (GPGPU) to achieve real-time performance. The estimator's performance is analyzed using simulations, and experimental results obtained from poroelastic phantoms are presented as a proof of principle of the new estimator's applicability to real experimental data. The results of this study demonstrate that the newly proposed elastographic estimator can produce highly accurate and sensitive TC estimates in real time and at high signal-to-noise ratios.
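The core estimation step — LSE curve fitting with the Levenberg-Marquardt rule on noisy creep data — can be sketched for a single pixel using SciPy, which exposes LM through `curve_fit(method="lm")`. This is a serial CPU illustration of the fitting principle, not the authors' GPGPU implementation; the mono-exponential creep model and its parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, a, tau, c):
    """Mono-exponential creep model: strain rising toward a plateau.
    tau is the time constant (TC) to be estimated."""
    return a * (1.0 - np.exp(-t / tau)) + c

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)           # seconds after load application
true_params = (1.0, 2.0, 0.1)             # amplitude, TC, offset
strain = creep(t, *true_params) + rng.normal(scale=0.02, size=t.size)

# method="lm" selects the Levenberg-Marquardt least-squares rule.
popt, _ = curve_fit(creep, t, strain, p0=(0.5, 1.0, 0.0), method="lm")
a_hat, tau_hat, c_hat = popt
```

In the paper, this per-pixel fit is launched across the whole elastogram in parallel on the GPU, which is what makes real-time TC imaging feasible.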