Objective. To evaluate the diagnostic accuracy for keratoconus of deep learning applied to colour-coded maps measured with swept-source anterior segment optical coherence tomography (AS-OCT). Design. A diagnostic accuracy study. Setting. A single-centre study. Participants. A total of 304 keratoconic eyes (grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes)) according to the Amsler-Krumeich classification, and 239 age-matched healthy eyes. Main outcome measures. The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map). Results. Deep learning of the arithmetical mean output data of these six maps showed an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single-map analysis, the posterior elevation map (0.993) showed the highest accuracy in this discrimination, followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976). The deep learning model also showed an accuracy of 0.874 in classifying the stage of the disease. Here the posterior curvature map (0.869) showed the highest accuracy, followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820). Conclusions. Deep learning using the colour-coded maps obtained with the AS-OCT effectively discriminates keratoconus from normal corneas and, furthermore, classifies the grade of the disease. This approach may become an aid for improving the diagnostic accuracy of keratoconus in daily practice. Clinical trial registration number. 000034587.
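The combination step the abstract mentions — taking the arithmetical mean of the output data of the six single-map classifiers — can be sketched as simple probability averaging. This is a hedged illustration, not the authors' implementation: the per-map probabilities below are made-up numbers, and the two-class layout (normal vs keratoconus) is an assumption for the discrimination task.

```python
import numpy as np

# Illustrative per-map classifier outputs: each row is one colour-coded map's
# model, columns are [P(normal), P(keratoconus)]. Values are invented.
outputs = np.array([
    [0.10, 0.90],  # anterior elevation
    [0.20, 0.80],  # anterior curvature
    [0.05, 0.95],  # posterior elevation
    [0.08, 0.92],  # posterior curvature
    [0.15, 0.85],  # total refractive power
    [0.12, 0.88],  # pachymetry
])

mean_probs = outputs.mean(axis=0)      # arithmetic mean across the six maps
prediction = int(mean_probs.argmax())  # 0 = normal, 1 = keratoconus
print(prediction)  # 1 (keratoconus) for these example probabilities
```

Averaging the six outputs lets maps that disagree on a borderline eye vote, which is one common way an ensemble over complementary inputs can outperform most single-input models.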
Abstract. This paper introduces "SyncTap", a user interface technique for making a network connection between digital devices. When a user wants to connect two devices, he or she synchronously presses and releases the "connection" buttons on both devices. Then, multicast packets that contain the press and release timings are sent to the network. By comparing this timing with the locally recorded one, both devices correctly identify each other. This scheme is simple but scalable, because it can detect and handle simultaneous overlapping connection requests. It can also be used for making secure connections by exchanging public keys. This paper describes the principle, the protocol, and applications of SyncTap.
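The matching rule at the core of the protocol — compare the timing in a received multicast packet with the locally recorded timing — can be sketched as below. This is a minimal sketch under stated assumptions: the function name, the tuple representation, and the tolerance value are illustrative, not taken from the paper.

```python
TOLERANCE_S = 0.05  # assumed allowable timing skew between the two devices

def is_match(local, remote, tol=TOLERANCE_S):
    """local/remote: (press_time, release_time) tuples in seconds.

    A device accepts a pairing request only when BOTH the press and the
    release times agree within the tolerance; this is what lets the scheme
    reject simultaneous overlapping requests with different timing.
    """
    lp, lr = local
    rp, rr = remote
    return abs(lp - rp) <= tol and abs(lr - rr) <= tol

# A press/release recorded locally at (10.00, 10.42) matches a multicast
# packet reporting (10.02, 10.44), but rejects an overlapping request
# whose timing differs, e.g. (10.30, 10.80).
print(is_match((10.00, 10.42), (10.02, 10.44)))  # True
print(is_match((10.00, 10.42), (10.30, 10.80)))  # False
```

Requiring agreement on two independent events (press and release) rather than one makes an accidental match between unrelated requests much less likely.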
This paper describes a system that allows users who obtain a "wearable ID key" to dynamically personalize ubiquitous computers by simply touching them. We call this concept of providing personalized service by touch "Active Personalization". For active personalization, users only have to wear a digital key and do not have to carry around other computers. When users touch ubiquitous computers with their wearable key, the keyholes of the ubiquitous computers recognize their IDs and can personalize the computers. We developed a new network technology between keys and keyholes that enables digital information to be carried through a person's body, based on a near-field technology we call TouchNet.
Purpose. Although optical coherence tomography (OCT) is essential for ophthalmologists, interpreting its findings requires expertise. The purpose of this study was to test deep learning with image augmentation for automated detection of chorioretinal diseases. Methods. A retina specialist diagnosed 1,200 OCT images. The diagnoses involved normal eyes (n=570) and those with wet age-related macular degeneration (AMD) (n=136), diabetic retinopathy (DR) (n=104), epiretinal membranes (ERMs) (n=90), and another 19 diseases. Among them, 1,100 images were used for deep learning training, augmented to 59,400 by horizontal flipping, rotation, and translation. The remaining 100 images were used to evaluate the trained convolutional neural network (CNN) model. Results. Automated disease detection showed that the first candidate disease corresponded to the doctor's decision in 83 (83%) images and the second candidate disease in seven (7%) images. The precision and recall of the CNN model were 0.85 and 0.97 for normal eyes, 1.00 and 0.77 for wet AMD, 0.78 and 1.00 for DR, and 0.75 and 0.75 for ERMs, respectively. Some rare diseases, such as Vogt–Koyanagi–Harada disease, were correctly detected owing to image augmentation in the CNN training. Conclusion. Automated detection of macular diseases from OCT images might be feasible using the CNN model. Image augmentation might be effective in compensating for a small number of training images.
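The augmentation pipeline the abstract describes (horizontal flipping, rotation, and translation, expanding 1,100 images to 59,400, i.e. 54 variants per image) can be sketched as below. This is an illustrative assumption, not the authors' code: only flipping and translation are shown, rotation would typically use a library routine such as `scipy.ndimage.rotate`, `np.roll` wraps pixels around rather than padding, and the shift values and resulting 18 variants are arbitrary.

```python
import numpy as np

def augment(img, shifts=(-4, 0, 4)):
    """Yield flipped and translated variants of a 2-D OCT image array.

    Illustrative only: 2 flip states x 3 x-shifts x 3 y-shifts = 18 variants
    per image; the study's scheme (including rotation) produced 54.
    """
    variants = []
    for base in (img, np.fliplr(img)):           # original + horizontal flip
        for dy in shifts:
            for dx in shifts:
                # translation via circular shift (real pipelines would pad)
                variants.append(np.roll(base, (dy, dx), axis=(0, 1)))
    return variants

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for an OCT scan
batch = augment(img)
print(len(batch))  # 18
```

Because flips and small shifts leave the diagnostic label unchanged, each labelled image yields many training samples, which is why such augmentation can help a CNN learn rarer diseases from few examples.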