The early detection of Diabetic Retinopathy (DR) is critical for diabetic patients to lower the risk of blindness. Many studies show that approaches based on Deep Convolutional Neural Networks (CNNs) can enable automatic DR detection by classifying patients' retinal images. Such approaches usually depend on a very large dataset of retinal images with predefined classification labels to support CNN training. However, in some cases it is not easy to obtain enough well-labelled images to serve as training samples. At the same time, as a CNN becomes deeper, its training not only takes much longer but is also more likely to lead to overfitting, especially on a large training dataset. Therefore, it is worthwhile to explore a simpler CNN-based approach that remains effective on small datasets for classifying retinal images. In this paper, an approach to retinal image classification is proposed based on the integration of multi-scale shallow CNNs. Experiments on public datasets show that, on small datasets, the proposed approach improves classification accuracy by 3% compared with current representative integrated CNN learning approaches. On the larger dataset, the proposed approach improves classification accuracy by 3% to 9% compared with other representative approaches such as the traditional CNN, LCNN and VGG16noFC. The evaluation also shows that, although the classification accuracy of the proposed approach declines by 6% on the smallest dataset containing only 10% of the samples of the original dataset, its time cost drops to about 30% of that on the original dataset. INDEX TERMS convolutional neural network, diabetic retinopathy, image classification, integrated learning, performance integration.
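The abstract does not specify how the multi-scale shallow CNNs are integrated; a common choice is soft voting, i.e. averaging the class probabilities produced by each scale-specific model. The following is a minimal sketch of that fusion step only, with made-up probability values and hypothetical scale names (128/256/512 pixels); it is not the paper's actual method.

```python
import numpy as np

# Hypothetical softmax outputs of three shallow CNNs, each trained on a
# different input scale of the same retinal image (values are illustrative).
# Class order assumed: [no-DR, mild DR, severe DR].
probs_scale_128 = np.array([0.70, 0.20, 0.10])
probs_scale_256 = np.array([0.55, 0.35, 0.10])
probs_scale_512 = np.array([0.60, 0.25, 0.15])

# Integrate the scales by averaging class probabilities (soft voting),
# then predict the class with the highest fused probability.
fused = np.mean([probs_scale_128, probs_scale_256, probs_scale_512], axis=0)
predicted_class = int(np.argmax(fused))  # 0, i.e. "no-DR" in this toy example
```

Soft voting keeps each shallow model cheap to train while letting the scales compensate for each other's errors; weighted averaging or majority voting over argmax labels are equally plausible fusion rules.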
With the rapid development and wide application of information technology in the medical field, the sharing of medical data has become one of the topics that has received much attention from researchers in recent years. At present, blockchain-based data-sharing schemes have become increasingly mature; their decentralized, secure and tamper-resistant characteristics address the problem of data security in the sharing process, thereby improving the quality of citizens' medical services, reducing medical costs and cutting down medical risks. Evidently, information technology is not the main factor hindering data sharing between medical institutions. Rather, the low enthusiasm of medical institutions stems from the lack of a comprehensive incentive mechanism for medical data sharing. Focusing on the medical field, this paper proposes an incentive mechanism for data sharing based on information entropy, which can effectively encourage more medical institutions to participate in data sharing and enhance their enthusiasm for sharing medical data. Analyses show that the approach is efficient and effective.
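The abstract does not give the entropy formula used to value a contribution; one plausible reading is that an institution's shared dataset is scored by the Shannon entropy of its label distribution, so that more informative (less redundant) contributions earn a larger incentive. The sketch below assumes exactly that; the function name and the toy diagnosis labels are invented for illustration.

```python
from collections import Counter
from math import log2

def shannon_entropy(records):
    """Shannon entropy (in bits) of the label distribution of a contributed
    dataset; higher entropy = more informative contribution."""
    counts = Counter(records)
    total = len(records)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical contributions from two institutions: a balanced label mix
# carries more information than a heavily skewed one.
uniform = shannon_entropy(["DR", "no-DR", "DR", "no-DR"])  # 1.0 bit
skewed = shannon_entropy(["DR", "DR", "DR", "no-DR"])      # ~0.811 bits
```

Under such a scheme, rewards proportional to entropy would discourage institutions from padding the shared pool with near-duplicate records.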
In the traditional medical system, individual medical data is managed by hospitals rather than by the patients themselves. Fragmented storage makes such data difficult to exchange effectively, and large amounts of data cannot realize their potential value. With the rapid development of medical informatization, the centralized storage of fragmented medical data can no longer meet the needs of the medical industry. To address the difficulty of sharing and the complexity of confirming rights in the medical system, this paper proposes a medical data sharing model based on blockchain. The model provides reliable storage with the IPFS file system, uses proxy re-encryption to realize data sharing while ensuring data ownership rights, and uses a token economic system to measure contributions during the sharing process, which stimulates the enthusiasm for sharing. Finally, based on the remaining problems of medical data sharing, the paper discusses potential solutions.
Both improving execution efficiency and reducing execution cost are essential for scientific workflows in cloud environments. As many scientific workflow tasks become more data-intensive and computation-intensive, storing their outputs in the cloud for reuse is a feasible way to achieve these objectives. Because storing data in the cloud increases the storage cost, even though it may reduce computation time through data reuse, it is important to determine what proportion of the output data sets of workflow tasks should be stored in the cloud. This paper explores caching provenance in the cloud to enhance the smart re-run of Kepler workflows based on a near-optimum data caching policy. Ant Colony System optimization is introduced to determine the near-optimum data sets to cache in the cloud, so as to improve the execution efficiency of future workflow re-runs without increasing the total cost of workflow execution in the cloud. Simulation and analysis show that the proposed approach is efficient.
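Choosing which task outputs to cache under a storage budget is essentially a knapsack-style selection, which an Ant Colony System can approximate. The sketch below is a simplified illustration under invented assumptions (each data set has a "benefit" equal to the re-computation time it saves and a storage "cost"; pheromone is kept per data set and reinforced by the best selection found); the paper's actual formulation, parameters and cost model are not given in the abstract.

```python
import random

def acs_cache_selection(benefit, cost, budget, n_ants=20, n_iter=50,
                        alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Ant Colony System sketch: pick data sets to cache so that total
    storage cost stays within `budget` while re-computation savings are
    (near-)maximised. All parameter names/values are illustrative."""
    rng = random.Random(seed)
    n = len(benefit)
    tau = [1.0] * n                                 # pheromone per data set
    eta = [b / c for b, c in zip(benefit, cost)]    # heuristic: benefit density
    best_sel, best_val = [], 0.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            remaining, sel = budget, []
            order = list(range(n))
            rng.shuffle(order)                      # randomise visit order
            for i in order:
                if cost[i] > remaining:
                    continue
                p = (tau[i] ** alpha) * (eta[i] ** beta)
                if rng.random() < p / (1.0 + p):    # probabilistic inclusion
                    sel.append(i)
                    remaining -= cost[i]
            val = sum(benefit[i] for i in sel)
            if val > best_val:
                best_val, best_sel = val, sel
        # global pheromone update: evaporate, then reinforce the best tour
        for i in range(n):
            tau[i] *= (1.0 - rho)
        for i in best_sel:
            tau[i] += rho * best_val
    return best_sel, best_val
```

For example, with savings `[10, 5, 8, 3]`, storage costs `[4, 3, 5, 2]` and budget `7`, the search converges on caching the first two data sets (total saving 15 at cost 7).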