Three-dimensional (3D) constructive interference in steady state (CISS) is a gradient-echo MRI sequence used to investigate a wide range of pathologies when routine MRI sequences do not provide the desired anatomic information. The increased sensitivity of the 3D CISS sequence results from the accentuation of the difference in T2 values between cerebrospinal fluid (CSF) and pathological structures. Apart from its well-recognized applications in the evaluation of the cranial nerves, CSF rhinorrhea, and aqueduct stenosis, we have found the CISS sequence valuable for imaging the cisternal spaces, cavernous sinuses, and the ventricular system, where it detects subtle CSF-intensity lesions that may be missed on routine spin-echo sequences. This information aids the management of these conditions. After a brief overview of the physics behind this sequence, we illustrate its clinical applications with representative cases and discuss its potential role in imaging protocols.
The rapid spread of coronavirus disease has made it one of the most disruptive disasters of the century around the globe. To fight the spread of this virus, analysis of chest computed tomography (CT) images can play an important role in accurate diagnosis. In the present work, a bi-modular hybrid model is proposed to detect COVID-19 from chest CT images. In the first module, we use a Convolutional Neural Network (CNN) architecture to extract features from the chest CT images. In the second module, we use a bi-stage feature selection (FS) approach to find the features most relevant to distinguishing COVID from non-COVID cases. In the first stage of FS, we apply a guided FS methodology employing two filter methods, Mutual Information (MI) and Relief-F, for the initial screening of the features obtained from the CNN model. In the second stage, the Dragonfly Algorithm (DA) is used to further select the most relevant features. The final feature set is used to classify COVID-19 and non-COVID chest CT images with a Support Vector Machine (SVM) classifier. The proposed model has been tested on two open-access datasets, the SARS-CoV-2 CT images and COVID-CT datasets, and achieves substantial prediction rates of 98.39% and 90.0% on these datasets, respectively. The proposed model has also been compared with several past works on the prediction of COVID-19 cases. The supporting code is available on GitHub: https://github.com/Soumyajit-Saha/A-Bi-Stage-Feature-Selection-on-Covid-19-Dataset
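The bi-stage pipeline described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: synthetic features stand in for CNN activations, an absolute-correlation score stands in for the MI/Relief-F filters, a random subset search stands in for the Dragonfly Algorithm, and a nearest-centroid rule stands in for the SVM fitness evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for CNN features: 100 samples x 32 features, binary labels.
X = rng.normal(size=(100, 32))
y = rng.integers(0, 2, size=100)
X[:, :4] += y[:, None] * 2.0  # make the first 4 features informative

# Stage 1 (filter): rank features by |correlation with the label|,
# a simple stand-in for the MI / Relief-F scores used in the paper.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(scores)[::-1][:10]  # keep the 10 best-scoring features

# Stage 2 (wrapper): random subset search as a stand-in for the Dragonfly
# Algorithm; fitness = nearest-centroid accuracy (stand-in for the SVM).
def fitness(cols):
    Z = X[:, cols]
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred == y).mean())

best_cols, best_fit = top, fitness(top)
for _ in range(50):
    mask = rng.random(len(top)) < 0.5  # random candidate subset
    if not mask.any():
        continue
    cand = top[mask]
    f = fitness(cand)
    if f > best_fit:
        best_cols, best_fit = cand, f

print(len(best_cols), best_fit)
```

The structure mirrors the paper's design choice: a cheap filter prunes the feature pool first, so the expensive wrapper search only explores a small candidate set.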
The novel SARS-CoV-2 virus, responsible for the dangerous pneumonia-type disease COVID-19, has undoubtedly changed the world, killing at least 3,900,000 people as of June 2021 and compromising the health of millions across the globe. Although vaccination has begun, in developing countries such as India the rollout is far from complete. Early diagnosis of COVID-19 can therefore restrict its spread and flatten the epidemic curve. As the fastest diagnostic option, an automated detection framework should be deployed to limit the further spread of COVID-19. Meanwhile, Computed Tomography (CT) imaging reveals that the image attributes of COVID-19-infected patients differ from those of healthy patients, with or without other respiratory diseases such as pneumonia. This study aims to establish an effective COVID-19 prediction model from chest CT images using efficient transfer learning (TL) models. Initially, we used three standard deep learning (DL) models, namely VGG-16, ResNet50, and Xception, for the prediction of COVID-19. We then proposed a mechanism to combine these pre-trained models to improve the overall prediction capability of the system. The proposed model achieves 98.79% classification accuracy and a high F1-score of 0.99 on the publicly available SARS-CoV-2 CT dataset. The model proposed in this study is effective for the accurate screening of COVID-19 CT scans and can therefore be a promising supplementary diagnostic tool for frontline clinical specialists.
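One common way to combine several pre-trained classifiers, sketched below, is soft voting: average the class probabilities each backbone emits and take the argmax. The abstract does not specify the paper's exact fusion rule, so this is an assumed illustration; the probability values are invented, not taken from the paper.

```python
import numpy as np

# Hypothetical softmax outputs from three backbones (e.g. VGG-16, ResNet50,
# Xception) for 4 CT scans over 2 classes (non-COVID, COVID).
p_vgg  = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p_res  = np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9], [0.6, 0.4]])
p_xcep = np.array([[0.7, 0.3], [0.3, 0.7], [0.3, 0.7], [0.8, 0.2]])

# Soft voting: average the per-class probabilities, then pick the argmax.
p_ens = (p_vgg + p_res + p_xcep) / 3.0
pred = p_ens.argmax(axis=1)
print(pred)  # → [0 1 1 0]
```

Averaging probabilities rather than hard labels lets a confident model outvote two uncertain ones, which is often why probability-level fusion improves over any single backbone.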
Data Envelopment Analysis (DEA) is used to measure tax efficiency in 15 Indian states from 1980/81 to 1992/93. Tax efficiency is shown to be conditional on state gross domestic product (SDP), agriculture's share in SDP, and a poverty index. The considerable remaining efficiency differences are attributable to the small size of some tax jurisdictions rather than to technical inefficiency. Multilateral Malmquist tax indices show that six of the states were consistently efficient, while three were consistently inefficient. Tax efficiency grew at an average annual rate of 3.9% until 1986/87, but growth ceased after that date for all but two states.
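The intuition behind the DEA efficiency scores can be shown in the simplest special case. With a single input, a single output, and constant returns to scale, a unit's DEA efficiency reduces to its output/input ratio divided by the best ratio in the sample; the general multi-input, multi-output case requires linear programming. The figures below are illustrative, not the paper's data.

```python
# Single-input, single-output, constant-returns DEA: efficiency is each
# state's revenue/base ratio normalized by the frontier (best) ratio.
tax_revenue = [50.0, 80.0, 30.0]    # output: tax collected (illustrative)
tax_base    = [100.0, 100.0, 60.0]  # input: taxable-capacity proxy

ratios = [r / b for r, b in zip(tax_revenue, tax_base)]
best = max(ratios)
efficiency = [round(x / best, 3) for x in ratios]
print(efficiency)  # a score of 1.0 marks the frontier state
```

States scoring below 1.0 are judged inefficient relative to the frontier; in the paper's multilateral setting, Malmquist indices then track how these scores move over time.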
The analysis of human facial expressions from thermal images captured by Infrared Thermal Imaging (IRTI) cameras has recently gained importance over images captured by standard cameras using visible-spectrum light. This is because infrared cameras work well in low-light conditions, and the infrared spectrum captures the thermal distribution of the face, which is useful for applications such as robot interaction systems, quantifying cognitive responses from facial expressions, and disease control. In this paper, a deep learning model called IRFacExNet (InfraRed Facial Expression Network) is proposed for facial expression recognition (FER) from infrared images. It uses two building blocks, a Residual unit and a Transformation unit, which extract expression-specific dominant features from the input images. The extracted features help to accurately detect the emotion of the subject under consideration. The snapshot ensemble technique is adopted with a cosine-annealing learning-rate scheduler to improve overall performance. The performance of the proposed model has been evaluated on a publicly available dataset, namely the IRDatabase developed by RWTH Aachen University. The facial expressions present in the dataset are Fear, Anger, Contempt, Disgust, Happy, Neutral, Sad, and Surprise. The proposed model achieves 88.43% recognition accuracy, better than several state-of-the-art methods considered here for comparison. Our model provides a robust framework for accurate expression detection in the absence of visible light.
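The snapshot-ensemble schedule mentioned above can be sketched as follows. In snapshot ensembling, the learning rate follows a cosine curve that restarts at its maximum at the start of each cycle, and a model snapshot is saved at each cycle's end; averaging the snapshots' predictions forms the ensemble. This is a minimal sketch of that standard schedule; the cycle length and `lr_max` here are assumptions, not the paper's hyperparameters.

```python
import math

def snapshot_cosine_lr(step, steps_per_cycle, lr_max=0.1):
    """Cosine-annealing learning rate with warm restarts: decays from
    lr_max to ~0 within each cycle, then restarts at lr_max."""
    t = step % steps_per_cycle
    return 0.5 * lr_max * (math.cos(math.pi * t / steps_per_cycle) + 1.0)

# Example: cycles of 100 steps each.
print(snapshot_cosine_lr(0, 100))    # ≈ 0.1  (cycle start)
print(snapshot_cosine_lr(50, 100))   # ≈ 0.05 (mid-cycle)
print(snapshot_cosine_lr(100, 100))  # ≈ 0.1  (warm restart)
```

The restarts matter because each cycle converges the network into a different local minimum, so the saved snapshots are diverse enough to ensemble at no extra training cost.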