COVID-19 is a rapidly spreading viral disease that has affected over 100 countries worldwide. Casualties and infections have escalated, particularly in countries with weakened healthcare systems. Reverse transcription polymerase chain reaction (RT-PCR) is currently the test of choice for diagnosing COVID-19. However, current evidence suggests that most COVID-19 patients develop a lung infection after contracting the virus. Therefore, chest X-ray (i.e., radiography) and chest CT can serve as surrogates in countries where RT-PCR is not readily available. This has motivated the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging to improve diagnostic accuracy. However, performance remains limited by the lack of representative X-ray images in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism that performs data augmentation in the feature space rather than in the data space using reconstruction independent component analysis (RICA). Specifically, we propose a unified architecture comprising a deep CNN, a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, from which the augmentation mechanism selects the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM classifies the processed sequential information. Experiments on three publicly available databases show that the proposed approach achieves state-of-the-art results with accuracies of 97%, 84%, and 98%.
Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
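The abstract gives no implementation details for the feature-space augmentation. As a rough, illustrative sketch of a RICA-style augmentation step (the function name, hyperparameters, and optimizer here are assumptions, and the CNN feature extractor and BiLSTM classifier stages are omitted since they require a deep learning framework), one could write:

```python
import numpy as np

def rica_augment(X, k=32, lam=0.1, lr=0.01, n_iter=200, seed=0):
    """Illustrative RICA-style feature augmentation.

    X : (n, d) matrix of high-level CNN pooling features.
    Learns an undercomplete basis W (k x d) by gradient descent on
        ||x W^T W - x||^2 + lam * sum(sqrt((x W^T)^2 + eps)),
    i.e. a reconstruction term plus a smooth L1 sparsity penalty,
    and returns the k-dimensional augmented features X @ W.T.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((k, d)) / np.sqrt(d)
    eps = 1e-8
    for _ in range(n_iter):
        Z = X @ W.T            # (n, k) latent codes
        E = Z @ W - X          # (n, d) reconstruction error
        # gradient of the reconstruction term w.r.t. W (up to a constant factor)
        grad = (Z.T @ E + W @ E.T @ X) / n
        # gradient of the smooth sparsity penalty w.r.t. W
        grad += lam * (Z / np.sqrt(Z**2 + eps)).T @ X / n
        W -= lr * grad
    return X @ W.T             # (n, k) low-dimensional augmented features
```

In the full pipeline described above, these augmented features would then be arranged as a sequence and passed to the BiLSTM for classification.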
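The PCA projection used in the explainability analysis can be sketched in plain NumPy (t-SNE is usually delegated to a library such as scikit-learn's `sklearn.manifold.TSNE`); `pca_project` is an illustrative name, not the authors' code:

```python
import numpy as np

def pca_project(F, n_components=2):
    """Project feature vectors onto their top principal components.

    F : (n, d) matrix of learned features; returns (n, n_components)
    scores suitable for a 2-D scatter plot.
    """
    Fc = F - F.mean(axis=0)                    # centre the features
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:n_components].T            # scores for plotting
```

The first column of the result captures the direction of greatest variance, the second the next greatest, and so on.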
The bag-of-words (BoW) model has been widely used for scene classification in recent state-of-the-art methods. However, inter-class similarity among scene categories and the very high spatial resolution of the imagery limit its performance in the remote-sensing domain. Therefore, this research presents a new KAZE-based image descriptor that uses the BoW approach to substantially increase classification performance. First, a novel multi-neighbourhood KAZE descriptor is proposed for small image patches. Second, spatial pyramid matching and the BoW representation are adopted to aggregate the extracted features into a novel BoW KAZE (BoWK) descriptor. Third, two bags of multi-neighbourhood KAZE features are computed, each of which is treated as a separate feature descriptor. Finally, canonical correlation analysis (CCA) is introduced as a feature fusion strategy to further refine the BoWK features, yielding a more effective and robust fusion than traditional feature fusion strategies. Experiments on three challenging remote-sensing datasets show that the proposed BoWK descriptor not only surpasses the conventional KAZE descriptor but also yields significantly higher classification performance than current state-of-the-art methods. Moreover, the proposed BoWK approach produces rich, informative features that describe the scene images at low computational cost and in a much lower dimension.
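The abstract does not specify how the CCA fusion is implemented; a minimal NumPy sketch of CCA-based fusion of two feature bags (the function name, the `reg` ridge term, and the toy dimensions are assumptions, not the paper's code) might look like:

```python
import numpy as np

def cca_fuse(X, Y, k=2, reg=1e-6):
    """Fuse two feature descriptors with canonical correlation analysis.

    X : (n, dx) and Y : (n, dy) hold the two bags of features for the
    same n images.  Returns the concatenated k-dimensional canonical
    projections of both views, shape (n, 2k).
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # regularised within- and cross-view covariances
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # whiten each view via Cholesky factors, then SVD the cross-covariance
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, S, Vt = np.linalg.svd(K)
    A = np.linalg.solve(Lx.T, U[:, :k])   # x-view canonical directions
    B = np.linalg.solve(Ly.T, Vt[:k].T)   # y-view canonical directions
    return np.hstack([Xc @ A, Yc @ B])    # fused low-dimensional features
```

Each pair of fused columns is maximally correlated across the two views, which is why CCA tends to retain the shared, discriminative structure while discarding view-specific noise.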