The use of the Internet of Things (IoT) in industrial and scientific domains is steadily increasing. IoT devices are currently utilized in numerous applications across different domains, such as communication technology, environmental monitoring, agriculture, medical services, and manufacturing. However, from a security perspective, IoT systems are vulnerable to various intrusions and attacks. It is therefore essential to create an intrusion detection model that detects and secures the network against the attacks and anomalies that continually occur in it. In this paper, an anomaly detection model for an IoT network using a deep neural network (DNN) with the chicken swarm optimization (CSO) algorithm is proposed. The DNN has already demonstrated its efficiency in many fields to which it has been applied. Deep learning is a class of machine learning algorithms that uses many layers to progressively extract higher-level features from the raw input. The UNSW-NB15 dataset was utilized to evaluate the anomaly detection model. The proposed model obtained 94.85% accuracy and a 96.53% detection rate, which is better than compared techniques such as GA-NB, GSO, and PSO. The DNN-CSO model performed well in detecting most of the attacks and is appropriate for detecting anomalies in the IoT network.
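As an illustration of how a swarm optimizer of this kind might tune DNN parameters, the following is a minimal sketch of a simplified chicken-swarm-style loop over a toy objective standing in for validation error. The swarm size, role split, update rules, and objective are illustrative assumptions, not the paper's implementation.

```python
import random

def sphere(x):
    """Toy objective standing in for DNN validation error (assumption)."""
    return sum(v * v for v in x)

def simplified_cso(objective, dim=3, swarm=12, iters=200, seed=42):
    """Simplified chicken swarm optimization: the best third of the swarm
    act as roosters (small local random search), the rest as hens that
    move toward a randomly chosen rooster; moves are kept only if they
    improve the objective."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    for _ in range(iters):
        pop.sort(key=objective)
        roosters = pop[: swarm // 3]
        new_pop = []
        for i, x in enumerate(pop):
            if i < swarm // 3:  # rooster: local Gaussian perturbation
                cand = [v + rng.gauss(0, 0.1) for v in x]
            else:               # hen: move toward a random rooster
                r = rng.choice(roosters)
                cand = [v + rng.random() * (rv - v) for v, rv in zip(x, r)]
            new_pop.append(min(x, cand, key=objective))
        pop = new_pop
    return min(pop, key=objective)

best = simplified_cso(sphere)
```

In the paper's setting, the objective would instead score a candidate DNN configuration on held-out data; the greedy accept step keeps the per-individual fitness monotonically improving.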
Glaucoma is the second most common cause of blindness worldwide and the third most common in Europe and the USA. Around 78 million people were living with glaucoma as of 2020, and 111.8 million people are expected to have glaucoma by the year 2040. In developing nations, 90% of glaucoma cases go undetected, so it is essential to develop a glaucoma detection system for early diagnosis. In this research, early prediction of glaucoma using a deep learning technique is proposed. The ORIGA dataset is used for the evaluation of glaucoma images. A U-Net architecture is implemented for optic cup segmentation, and a pretrained transfer learning model, DenseNet-201, is used for feature extraction along with a deep convolutional neural network (DCNN). The DCNN is used for classification, with the final result indicating whether or not the eye is affected by glaucoma. The primary objective of this research is to detect glaucoma from retinal fundus images, producing a positive or negative result for each patient. The model is evaluated using parameters such as accuracy, precision, recall, specificity, and F-measure. A comparative analysis is also conducted to validate the proposed model: its output is compared with other current deep learning models used for CNN classification, such as VGG-19, Inception ResNet, ResNet-152v2, and DenseNet-169. The proposed model achieved 98.82% accuracy in training and 96.90% in testing. Overall, the proposed model performs better across all analyses.
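The evaluation metrics named above all derive from a binary confusion matrix. A minimal sketch, using hypothetical confusion counts purely for illustration (not results from the paper):

```python
def evaluate(tp, fp, tn, fn):
    """Compute the standard binary-classification metrics from
    confusion-matrix counts: true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity,
            "f_measure": f_measure}

# Hypothetical counts for a 100-image test set (assumption).
m = evaluate(tp=45, fp=3, tn=48, fn=4)
```

Specificity complements recall here: recall measures how many glaucoma-positive cases are caught, specificity how many healthy eyes are correctly cleared.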
The anterior cruciate ligament (ACL) is a fundamental structure in preserving normal knee biomechanics and is among the most frequently damaged knee ligaments. An ACL injury is a tear or sprain of the ACL, one of the principal ligaments in the knee. ACL damage most commonly occurs during sports such as soccer, basketball, football, and downhill skiing, which involve sudden stops or changes in direction, jumping, and landing. Magnetic resonance imaging (MRI) now plays a major role in diagnosis; in particular, it is effective for assessing the cruciate ligaments and any related meniscal tears. The primary objective of this research is to detect ACL tears from knee MRI images, which can be useful in determining knee abnormality. A deep convolutional neural network (DCNN) based Inception-v3 deep transfer learning (DTL) model is proposed for classifying ACL tear MRI images. Preprocessing, feature extraction, and classification are the main processes performed in this research. The dataset was collected from the MRNet database, and a total of 1,370 knee MRI images are used for evaluation: 70% of the data (959 images) for training and 30% (411 images) for testing and performance analysis. The proposed DCNN with the Inception-v3 DTL model is evaluated and compared with existing deep learning models such as VGG16, VGG19, Xception, and Inception-ResNet-v2. Performance metrics including accuracy, precision, recall, specificity, and F-measure are computed to analyze the model. The model obtained 99.04% training accuracy and 95.42% testing accuracy.
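The 70/30 partition described above can be sketched as a shuffled split; the seed and the use of `round` for the cut point are assumptions for illustration, chosen so that 1,370 items split into exactly 959 and 411:

```python
import random

def split_dataset(items, train_frac=0.7, seed=0):
    """Shuffle the items and split them into training and testing
    partitions at the given fraction."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 1,370 image indices, as in the MRNet evaluation set described above.
train, test = split_dataset(list(range(1370)))
```

Shuffling before the cut matters: MRI studies are often stored grouped by patient or scanner, and an unshuffled split would bias the two partitions.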
In the domain of remote sensing, the classification of hyperspectral images (HSIs) has become a popular topic. In general, the complicated features of hyperspectral data make precise classification difficult for standard machine learning approaches. Deep learning-based HSI classification has lately received a lot of interest in remote sensing and has shown promising results. In contrast to conventional hand-crafted-feature-based classification approaches, deep learning can automatically learn complicated features of HSIs through a greater number of hierarchical layers. Because the data structure of HSIs is complicated, however, applying deep learning to them is difficult. The primary objective of this research is to propose a deep feature extraction model for HSI classification. Deep networks can extract spatial and spectral features from HSI data simultaneously, which helps increase the performance of the proposed system. In this work, a squeeze-and-excitation (SE) network is combined with a convolutional neural network (SE-CNN) to improve feature extraction and HSI classification; the squeeze-and-excitation block is designed to improve the representational quality of a CNN. Three benchmark datasets are utilized in the experiments to evaluate the proposed model: Pavia Centre, Pavia University, and Salinas. The proposed model's performance is validated through comparison with current deep transfer learning approaches such as VGG-16, Inception-v3, and ResNet-50. In terms of both per-class accuracy and overall accuracy, the proposed SE-CNN model outperforms the compared models, achieving an overall accuracy of 96.05% for Pavia University, 98.94% for Pavia Centre, and 96.33% for Salinas.
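The squeeze-and-excitation mechanism can be sketched in plain Python on a tiny feature map: squeeze each channel to its global average, pass the channel vector through two small fully connected layers (ReLU then sigmoid), and rescale each channel by the resulting score. The fixed weights and reduction ratio below are illustrative assumptions; in a real SE-CNN the two layers are learned during training.

```python
import math

def se_block(fmap, w1, w2):
    """Squeeze-and-excitation on a feature map shaped [C][H][W].
    w1: [C/r][C] reduction weights, w2: [C][C/r] expansion weights."""
    C = len(fmap)
    # Squeeze: global average pooling, one scalar per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    # Excitation: FC -> ReLU -> FC -> sigmoid.
    h = [max(0.0, sum(w * zv for w, zv in zip(row, z))) for row in w1]
    s = [1 / (1 + math.exp(-sum(w * hv for w, hv in zip(row, h)))) for row in w2]
    # Scale: reweight every value in channel c by its excitation score.
    return [[[v * s[c] for v in row] for row in fmap[c]] for c in range(C)]

# Four 2x2 channels whose values equal their channel index; fixed toy weights.
fmap = [[[float(c)] * 2 for _ in range(2)] for c in range(4)]
w1 = [[0.1] * 4 for _ in range(2)]   # reduction ratio r = 2 (assumption)
w2 = [[0.5] * 2 for _ in range(4)]
out = se_block(fmap, w1, w2)
```

The channel-wise rescaling is what lets the network emphasize informative spectral bands and suppress less useful ones, which is the motivation for pairing SE blocks with a CNN on hyperspectral data.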
Currently, the size of multimedia data is growing steadily from gigabytes to petabytes, owing to the production of ever larger quantities of realistic data. The majority of big data is conveyed via the internet and accumulated on cloud servers. Since cloud computing offers internet-oriented services, it attracts many attackers and malicious users, who continually attempt to access users' private data without authorization and at times replace real data with counterfeit data. As a result, data protection has become a noteworthy concern in recent times. This paper aims to establish an optimization-based privacy preservation model for multimedia data by selecting the optimal secret key. Encryption and decryption are carried out with an Improved Blowfish cryptographic technique, in which the sensitive data on the cloud server is preserved using the optimal key. Optimal key generation is the significant procedure for ensuring the objectives of integrity and confidentiality. Likewise, data restoration (decryption) is the inverse process of sanitization. In both cases, key generation remains the major aspect, and the key is optimally chosen by a novel hybrid algorithm termed "Clan-based Crow Search with Adaptive Awareness Probability" (CCS-AAP). Finally, an analysis is carried out to validate the improvement of the proposed method.
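The overall shape of such an optimized key-selection loop can be sketched with stand-ins: a toy XOR stream cipher in place of Improved Blowfish, a crude diffusion proxy as the fitness function, and single-byte random mutation in place of the CCS-AAP search. All three substitutions are assumptions for illustration only; none reflect the paper's actual cipher, fitness objective, or optimizer.

```python
import random

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher standing in for Improved Blowfish (assumption)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def key_fitness(data: bytes, key: bytes) -> int:
    """Count distinct ciphertext bytes: a crude stand-in fitness proxy."""
    return len(set(xor_encrypt(data, key)))

def random_key_search(data: bytes, key_len=8, iters=300, seed=1):
    """Mutation-based random search standing in for the CCS-AAP optimizer:
    mutate one key byte at a time and keep the key if fitness improves."""
    rng = random.Random(seed)
    best = bytes(rng.randrange(256) for _ in range(key_len))
    best_fit = key_fitness(data, best)
    for _ in range(iters):
        cand = bytearray(best)
        cand[rng.randrange(key_len)] = rng.randrange(256)
        cand = bytes(cand)
        fit = key_fitness(data, cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best

msg = b"sensitive multimedia payload" * 4
key = random_key_search(msg)
```

Whatever cipher and optimizer are used, the invariant the loop must preserve is the one checked here: decrypting with the chosen key exactly restores the original data.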
Problem statement: This study addressed the user's perceived quality-of-service requirements in content distribution and investigated the role of the QoS-Aware Dominating Set based Semantic Overlay Network (QADSON) in surrogate server selection to achieve the specified quality of service. Approach: First, we constructed the QADSON, a virtual network of surrogate servers built on top of the existing physical network to implement network services and features such as efficiency, exact-1 domination, controlled redundancy, and fault tolerance that are not available in the existing network. We applied the EFRRA content replication algorithm to disseminate content among the surrogate servers and evaluated its performance in QADSON. Results: We assessed the efficiency, exact-1 domination, controlled redundancy, and fault resiliency of QADSON in terms of mean response time, mean CDN utility, hit ratio percentage, rejection rate, and CDN load. We extended the simulation experiments to analyze the role of QADSON in maintaining a uniform CDN utility above 0.95. Conclusion: We also investigated the quality-of-service requirements for content distribution and evaluated the performance of the QADSON-based CDN in terms of mean response time, latency, hit ratio percentage, mean CDN utility, rejection rate, and CDN load.
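A dominating set of surrogate servers (every node is either selected or adjacent to a selected node) can be approximated with a standard greedy heuristic. The sketch below uses a hypothetical six-server adjacency map and the ordinary greedy method, not the paper's exact-1 domination construction, which imposes the stronger condition that each node is dominated exactly once.

```python
def greedy_dominating_set(graph):
    """Greedy dominating-set approximation: repeatedly pick the node that
    covers the most still-uncovered nodes (itself plus its neighbors)."""
    uncovered = set(graph)
    dom = set()
    while uncovered:
        best = max(graph, key=lambda v: len(({v} | set(graph[v])) & uncovered))
        dom.add(best)
        uncovered -= {best} | set(graph[best])
    return dom

# Hypothetical surrogate-server adjacency (assumption, for illustration).
net = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
servers = greedy_dominating_set(net)
```

Placing content replicas only on the dominating set keeps every client within one overlay hop of a replica while bounding the number of surrogate servers that must hold each object.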