This paper presents a comprehensive study of convolutional neural networks (CNNs) and transfer learning in the context of medical imaging. Medical imaging plays a critical role in the diagnosis and treatment of diseases, and CNN-based models have demonstrated significant improvements in image analysis and classification tasks. Transfer learning, which involves reusing pre-trained CNN models, has also shown promise in addressing challenges related to small datasets and limited computational resources. This paper reviews the advantages of CNNs and transfer learning in medical imaging, including improved accuracy, reduced time and resource requirements, and the ability to address class imbalance. It also discusses challenges, such as the need for large and diverse datasets and the limited interpretability of deep learning models. What factors contribute to the success of these networks? How are their architectures designed, and what motivates the structural choices behind them? Finally, the paper presents current and future research directions and opportunities, including the development of specialized architectures and the exploration of new modalities and applications for medical imaging using CNNs and transfer learning. Overall, the paper highlights the significant potential of CNNs and transfer learning in medical imaging, while acknowledging the need for continued research and development to overcome existing challenges and limitations.
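As a rough illustration of the transfer-learning setup this abstract describes (not the specific models the paper reviews), the following sketch loads an ImageNet-pretrained ResNet-18 in PyTorch, freezes the backbone, and replaces the classification head for a hypothetical small medical-imaging task; the class count, learning rate, and omitted data loader are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 backbone (weights argument available in torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for the target task,
# e.g. a hypothetical 3-class chest X-ray problem.
num_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Fine-tuning loop (data loader construction omitted): one step per (images, labels) batch.
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

Freezing the backbone and training only the head is the cheapest variant; unfreezing the last residual block for a lower learning rate is a common next step when more labeled images are available.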
Dengue fever is an arboviral disease caused by the dengue viruses (DENVs), which are transmitted by Aedes mosquitoes. In 2019, the World Health Organization estimated an annual incidence of 100 million to 400 million infections, the highest number of dengue cases ever reported worldwide, prompting the WHO to name the virus one of the world's top ten public health threats. Dengue hemorrhagic fever can progress into dengue shock syndrome, which can be fatal. To provide accessible and timely supportive care and therapy, practical tools are needed that accurately differentiate dengue and its subcategories in the early stages of illness. Predicting dengue fever in advance can save lives by prompting patients to seek proper diagnosis and treatment. Forecasting infectious diseases such as dengue is difficult, however, and most forecasting systems are still in their early stages. Data from microarrays and RNA-Seq have been used extensively in developing dengue predictive models. Statistical methods such as Bayesian inference and support vector machines can also mine opinions and analyze sentiment from text, but these methods are not semantically strong and work well only when the input is at the page or paragraph level; they are poor miners of sentiment at the sentence or phrase level. In this research, we propose a machine learning method to forecast dengue fever.
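As a hedged, minimal sketch of the kind of classifier this abstract gestures toward (not the paper's actual model, features, or data), the snippet below fits a support vector machine on synthetic, placeholder gene-expression features with scikit-learn; every feature, label, and hyperparameter here is illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Placeholder data: rows are patients, columns are expression levels of
# hypothetically selected genes from a microarray/RNA-Seq panel.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # 200 samples, 50 gene-expression features
y = rng.integers(0, 2, size=200)     # 0 = non-dengue febrile illness, 1 = dengue

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```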
Soft sensors are data-driven devices that estimate quantities that are either impossible or prohibitively expensive to measure directly. Deep learning (DL) is a relatively new feature representation method for data with complex structure and holds considerable promise for soft sensing of industrial processes. Feature representation is one of the most important aspects of building accurate soft sensors. This research proposes a novel technique for manufacturing automation in which dynamic soft sensors are used for feature representation and classification of the data. The input is data collected from virtual sensors together with their automation-based historical data. These data are pre-processed to handle missing values and common problems such as hardware failures, communication errors, incorrect readings, and changing process operating conditions. Feature representation is then performed using a fuzzy logic-based stacked data-driven auto-encoder (FL_SDDAE), in which fuzzy rules relate the features of the input data to common automation problems. The represented features are then classified using a least-squares-error backpropagation neural network (LSEBPNN), whose loss function minimizes the mean square error during classification. Experiments on various manufacturing-automation datasets show that the proposed technique achieves a computational time of 34%, QoS of 64%, RMSE of 41%, MAE of 35%, prediction performance of 94%, and measurement accuracy of 85%.
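The FL_SDDAE and LSEBPNN components are specific to the paper; as an assumption-laden stand-in, the sketch below shows a plain stacked auto-encoder trained with a mean-square-error reconstruction loss plus a small classifier head in PyTorch, without the fuzzy-rule layer described above. Layer sizes and the synthetic batch are placeholders.

```python
import torch
import torch.nn as nn

# Encoder: two stacked layers compress raw soft-sensor readings into a
# low-dimensional code; the decoder mirrors them for reconstruction.
class StackedAutoencoder(nn.Module):
    def __init__(self, n_inputs=32, n_hidden=16, n_code=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_code), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_code, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_inputs),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

sae = StackedAutoencoder()
recon_loss = nn.MSELoss()                    # mean-square-error reconstruction objective
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

x = torch.randn(64, 32)                      # a batch of synthetic sensor vectors
recon, code = sae(x)
loss = recon_loss(recon, x)
loss.backward()
opt.step()

# A small classifier head trained on the learned codes stands in for the
# paper's LSEBPNN classification stage.
classifier = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
logits = classifier(code.detach())
```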
Locomotion prediction for human welfare has gained tremendous interest in the past few years. Multimodal locomotion prediction, built from small activities of daily living, is an efficient approach to supporting healthcare, but the complexity of motion signals and of video processing makes it challenging to achieve a good accuracy rate. Multimodal internet of things (IoT)-based locomotion classification has helped address these challenges. In this paper, we propose a novel multimodal IoT-based locomotion classification technique evaluated on three benchmark datasets. These datasets contain at least three types of data: physical motion, ambient, and vision-based sensor data. The raw data are filtered with techniques suited to each sensor type. The ambient and physical motion sensor data are then windowed, and a skeleton model is retrieved from the vision-based data. The features are subsequently extracted and optimized using state-of-the-art methodologies. Experiments verify that the proposed locomotion classification system is superior to conventional approaches, particularly on multimodal data. The system achieves accuracy rates of 87.67% and 86.71% on the HWU-USP and Opportunity++ datasets, respectively; the mean accuracy rate of 87.0% is higher than that of traditional methods proposed in the literature.
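To illustrate the windowing and feature-extraction step for the physical motion and ambient sensor streams, here is a minimal NumPy sketch; the window length, overlap, sampling rate, and feature set are assumptions, not the values used in the paper.

```python
import numpy as np

def sliding_windows(signal, window_size=128, step=64):
    """Split a 1-D sensor stream into fixed-length, half-overlapping windows."""
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

def window_features(windows):
    """Simple per-window statistical features: mean, std, min, max, energy."""
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        windows.min(axis=1),
        windows.max(axis=1),
        (windows ** 2).mean(axis=1),
    ])

# Synthetic accelerometer axis, hypothetically sampled at 50 Hz.
acc_x = np.sin(np.linspace(0, 40 * np.pi, 5000)) + 0.1 * np.random.randn(5000)
feats = window_features(sliding_windows(acc_x))
print(feats.shape)   # (n_windows, 5) feature matrix ready for a classifier
```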
The advancement of computer vision technology has led to the development of sophisticated algorithms capable of accurately recognizing human actions from red-green-blue (RGB) videos recorded by drone cameras. Despite this exceptional potential, human action recognition still faces many challenges, including the tendency of humans to perform the same action in different ways, limited camera angles, and restricted fields of view. In this research article, a system is proposed to tackle the aforementioned challenges using drone-recorded RGB videos as input. First, each video is split into its constituent frames, and gamma correction is applied to each frame to obtain an optimized version of the image. Felzenszwalb's algorithm then segments the human from the input image and generates a human silhouette. From the silhouette, a skeleton is extracted to locate thirteen body key points. The key points are used for elliptical modeling to estimate the boundaries of individual body parts, with the modeling governed by the Gaussian mixture model-expectation maximization algorithm. The elliptical models of the body parts are used to locate fiducial points that, when tracked, provide useful information about the performed action. Other extracted features include a 3D point cloud feature vector, the relative distances and velocities of the key points, and their mutual angles. The features are optimized with quadratic discriminant analysis, and finally a convolutional neural network is trained to perform the action classification. Three benchmark datasets, the Drone-Action dataset, the UAV-Human dataset, and the Okutama-Action dataset, were used for comprehensive experimentation. The system outperformed state-of-the-art approaches, achieving accuracies of 80.03%, 48.60%, and 78.01% on the Drone-Action, UAV-Human, and Okutama-Action datasets, respectively.
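A minimal sketch of the first two stages of this pipeline, gamma correction followed by Felzenszwalb segmentation, using scikit-image; the frame path and parameter values are placeholders, and the later stages (silhouette extraction, 13-point skeleton, elliptical body-part modeling, CNN classification) are not reproduced here.

```python
from skimage import exposure, segmentation, io

# Load one extracted frame (the path is a placeholder).
frame = io.imread("frame_0001.png")

# Gamma correction to normalize illumination before segmentation.
corrected = exposure.adjust_gamma(frame, gamma=0.8)

# Felzenszwalb's graph-based segmentation; scale, sigma, and min_size control
# how coarse the resulting regions are and would need tuning per dataset.
labels = segmentation.felzenszwalb(corrected, scale=100, sigma=0.8, min_size=50)

print("number of segments:", labels.max() + 1)
# Downstream steps would select the person-labelled regions to build the
# silhouette and skeleton described in the abstract.
```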