We propose a new type of micro/nanofluidic mixer based on non-equilibrium electrokinetics and demonstrate its mixing performance. We fabricate the device with two-step reactive ion etching: one step for the nanochannels and one for the microchannels. Mixing is achieved by strong vortex structures formed near the micro/nanochannel interface. We expect the proposed device to be beneficial in the development of micro total analysis systems, since its design is simple and entails minimal fabrication complications.
A Generative Adversarial Network (GAN) with a generator G trained to model the prior of images has been shown to outperform sparsity-based regularizers in ill-posed inverse problems. Here, we propose a new method of deploying a GAN-based prior to solve linear inverse problems using projected gradient descent (PGD). Our method learns a network-based projector for use in the PGD algorithm, eliminating the expensive computation of the Jacobian of G. Experiments show that our approach provides a speed-up of 60-80× over earlier GAN-based recovery methods along with better accuracy. Our main theoretical result is that if the measurement matrix is moderately conditioned on the manifold range(G) and the projector is δ-approximate, then the algorithm is guaranteed to reach O(δ) reconstruction error in O(log(1/δ)) steps in the low-noise regime. Additionally, we propose a fast method to design such measurement matrices for a given G. Extensive experiments demonstrate the efficacy of this method: it requires 5-10× fewer measurements than random Gaussian measurement matrices for comparable recovery performance. Because the learning of the GAN and the projector is decoupled from the measurement operator, our GAN-based projector and recovery algorithm are applicable without retraining to all linear inverse problems, as confirmed by experiments on compressed sensing, super-resolution, and inpainting.
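The PGD recovery loop described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact projector onto a 1-D subspace below merely stands in for the learned network-based projector onto range(G), and the step size is tuned for this toy case.

```python
import numpy as np

def pgd_recover(y, A, projector, eta, n_steps=50):
    """Projected gradient descent for the linear inverse problem y ≈ A x.

    `projector` stands in for the learned network-based projector onto
    range(G); using it avoids computing the Jacobian of G at each step.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad = A.T @ (A @ x - y)       # gradient of 0.5 * ||A x - y||^2
        x = projector(x - eta * grad)  # project the update back onto the manifold
    return x

# Toy demo (assumption: the true signal lies on a known 1-D subspace,
# whose exact projector plays the role of the learned GAN projector).
rng = np.random.default_rng(0)
d = rng.standard_normal(64)
d /= np.linalg.norm(d)
projector = lambda v: d * (d @ v)       # exact projector onto span(d)
x_true = 3.0 * d
A = rng.standard_normal((32, 64)) / np.sqrt(32)
y = A @ x_true                          # noiseless measurements
eta = 1.0 / np.linalg.norm(A @ d) ** 2  # step size tuned to the toy subspace
x_hat = pgd_recover(y, A, projector, eta, n_steps=50)
print(np.linalg.norm(x_hat - x_true))   # ~0: exact recovery in the noiseless case
```

In the noiseless, exact-projector setting the iterate lands on the true signal; with a δ-approximate projector the abstract's guarantee of O(δ) reconstruction error in O(log(1/δ)) steps applies instead.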
Various ocular diseases, such as cataract, diabetic retinopathy, and glaucoma, affect a large proportion of the population worldwide. In ophthalmology, fundus photography is used for the diagnosis of such retinal disorders. Nowadays, fundus image acquisition has shifted from fixed setups to portable devices, making acquisition more vulnerable to distortions. A trustworthy diagnosis, however, relies critically on the quality of the fundus image. In recent years, fundus image quality assessment (IQA) has drawn much attention from researchers. This paper presents a detailed survey of fundus IQA research. The survey comprehensively discusses the factors affecting fundus image quality and the distortions that arise during real-time acquisition. The fundus IQA algorithms are analyzed on the basis of the methodologies used and divided into three classes: (i) similarity-based, (ii) segmentation-based, and (iii) machine learning-based. In addition, limitations of the state of the art in this research field are presented along with possible solutions. The objective of this paper is to provide detailed information about fundus IQA research: its significance, present status, limitations, and future scope. To the best of our knowledge, this is the first survey paper on fundus IQA research.
Objectively assessing the perceptual quality of an ocular fundus image is essential for the reliable diagnosis of various ocular diseases. A fair amount of work has been done in this field to date. However, the generalizability of the current work is limited, as existing quality models were developed and evaluated on datasets built with limited subjective input. This paper addresses this limitation with the following two contributions. First, a new fundus image quality assessment (FIQuA) dataset is presented, containing 1500 fundus images in three quality classes: Good, Fair, and Poor. For each image, subjective scores in the range [0, 10] were collected for six quality parameters covering structural and generic properties of the fundus images. Second, a new multivariate regression-based convolutional neural network (CNN) model is proposed to predict fundus image quality. The proposed model consists of two individually trained blocks. The first block comprises four pre-trained models, trained against the subjective scores for the six quality parameters, and aims at deriving optimized features for classification. The optimized features from the four models are then ensembled and passed to the second block for final classification. The proposed model achieves a strong correlation with the subjective scores, obtaining 0.941, 0.954, 0.853, and 0.401 for SROCC, LCC, KCC, and RMSE, respectively. Its classification accuracy is 95.66% on the FIQuA dataset, and 98.96% and 88.43% on the two publicly available datasets DRIMDB and EyeQ, respectively.
Index Terms: Fundus image quality assessment, diabetic retinopathy, multivariate regression, convolutional neural network.
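The two-block architecture described in this abstract can be sketched as below. This is a hedged illustration only: the tiny convolutional backbones, feature dimension, and layer sizes are placeholders standing in for the paper's four pre-trained models, chosen so the sketch is self-contained and runnable.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Stand-in for one pre-trained CNN. Its regression head is trained
    against the six subjective quality scores; its penultimate features
    are later reused for classification."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU())
        self.regress = nn.Linear(feat_dim, 6)  # six quality parameters

    def forward(self, x):
        f = self.features(x)
        return f, self.regress(f)

class FIQuANet(nn.Module):
    """Block 1: four regression backbones. Block 2: their features are
    ensembled (concatenated) and classified into Good / Fair / Poor."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.backbones = nn.ModuleList(Backbone(feat_dim) for _ in range(4))
        self.classifier = nn.Linear(4 * feat_dim, 3)

    def forward(self, x):
        feats, scores = zip(*(b(x) for b in self.backbones))
        return self.classifier(torch.cat(feats, dim=1)), scores

x = torch.randn(2, 3, 64, 64)          # a batch of two dummy fundus images
logits, scores = FIQuANet()(x)
print(logits.shape)                    # torch.Size([2, 3])
```

In the two-stage training the abstract describes, the backbones would first be fitted to the subjective scores, then frozen while the classifier is trained on the concatenated features.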