A secure Internet of Things (IoT) has become a requirement for electronic healthcare systems. In most cases, health images contain sensitive patient information that must be protected. Traditional encryption cannot be applied directly to image data because of the particular attributes of digital images. Additionally, patients may lose the confidentiality of their data when private images are transmitted over a network. Thus, multimedia artificial intelligence and image processing are applied to build more secure IoT systems. To guarantee accurate and privacy-preserving e-health services, a secure lightweight key frame extraction approach is essential. Moreover, given the constraints of real-time e-health systems, it can be challenging to achieve a satisfactory degree of security economically. An encryption scheme that incorporates a hashed version of the Blum Blum Shub (BBS) generator, namely Hash-BBS (HBBS), is proposed to achieve a high degree of integrity and confidentiality in the transmission of CT images of COVID-19 patients. In addition, an AI technique based on a convolutional neural network is applied for COVID-19 testing. Evaluation showed that the proposed framework outperformed alternative security and transfer-learning methodologies in secure prediction. Therefore, it can be used to reliably transmit CT images of COVID-19 patients while meeting strict security and prediction benchmarks.
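For context, the classical BBS generator that underlies the proposed Hash-BBS scheme iterates x_{i+1} = x_i^2 mod (p*q) and emits one bit per state. The sketch below is illustrative only: the primes and seed are toy values chosen for readability, and the hashing layer of HBBS is not shown since the abstract does not specify it.

```python
# Minimal sketch of the Blum Blum Shub (BBS) pseudorandom bit generator.
# Toy parameters for illustration; real use requires large secret primes
# p and q, each congruent to 3 mod 4, and a seed coprime to p*q.

def bbs_bits(seed: int, p: int = 11, q: int = 23, n_bits: int = 16) -> list[int]:
    """Generate n_bits pseudorandom bits via x_{i+1} = x_i^2 mod (p*q)."""
    assert p % 4 == 3 and q % 4 == 3, "p and q must be congruent to 3 mod 4"
    m = p * q
    x = seed % m
    bits = []
    for _ in range(n_bits):
        x = (x * x) % m
        bits.append(x & 1)  # emit the least significant bit of each state
    return bits

keystream = bbs_bits(seed=3, n_bits=8)
```

Such a keystream would typically be XORed with image bytes; the security of BBS rests on the hardness of factoring m = p*q.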
Recently, pattern recognition in audio signal processing using electroencephalography (EEG) has attracted significant attention. Changes in eye state (open or closed) are reflected as distinct patterns in EEG data gathered across a range of cases and actions. Therefore, the accuracy of extracting other information from these signals depends significantly on predicting the eye state during EEG acquisition. In this paper, we use deep learning vector quantization (DLVQ) and feedforward artificial neural network (F-FANN) techniques to recognize the eye state. DLVQ is superior to traditional VQ in classification problems because of its ability to learn a code-constrained codebook. Initialized with the k-means VQ approach, DLVQ shows very promising performance on an EEG-audio information retrieval task, while F-FANN classifies EEG-audio signals of eye state as open or closed. The DLVQ model achieves higher classification accuracy, F-score, precision, and recall, as well as superior classification ability, compared with F-FANN.
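To make the vector-quantization idea concrete, here is a minimal sketch of a classical LVQ1 update step, the supervised codebook-learning rule that the DLVQ family generalizes. This is not the authors' DLVQ architecture (which the abstract does not detail); the two-prototype eye-state setup below is a hypothetical toy example.

```python
import numpy as np

def lvq1_step(codebook: np.ndarray, labels: np.ndarray,
              x: np.ndarray, y: int, lr: float = 0.1) -> np.ndarray:
    """One LVQ1 update: move the nearest codebook vector toward sample x
    if its class label matches y, and away from x otherwise.
    codebook: (k, d) prototype matrix; labels: (k,) class labels."""
    i = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))  # best matching unit
    sign = 1.0 if labels[i] == y else -1.0
    codebook[i] += sign * lr * (x - codebook[i])
    return codebook

# Toy usage: one prototype each for eye-open (class 0) and eye-closed (class 1)
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
codebook = lvq1_step(codebook, labels, np.array([0.2, 0.1]), y=0)
```

In practice the codebook would be initialized with k-means (as the abstract notes) and updated over many labeled EEG feature vectors with a decaying learning rate.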