Cherry virus A (CVA) is a graft-transmissible member of the genus Capillovirus that infects various stone fruits. Sweet cherry (Prunus avium L.; family Rosaceae) is an important deciduous temperate fruit crop in the Western Himalayan region of India. To determine the health status of cherry plantations and the incidence of the virus in India, cherry orchards in the states of Jammu and Kashmir (J&K) and Himachal Pradesh (H.P.) were surveyed in May and September 2009. By RT-PCR, the incidence of CVA was found to be 28% in J&K and 13% in H.P. To characterize the virus at the molecular level, the complete genome was amplified by RT-PCR using specific primers. The amplicon of about 7.4 kb was sequenced and found to be 7,379 bp long, with sequence specificity to CVA. The genome organization was similar to that of previously characterized isolates, encoding two ORFs, with ORF2 nested within ORF1. The complete sequence was 81% and 84% identical to that of the type isolate at the nucleotide and amino acid levels, respectively, with 5' and 3' UTRs of 54 and 299 nucleotides, respectively. This is the first report of the complete nucleotide sequence of Cherry virus A infecting sweet cherry in India.
Pneumonia is an infection of one or both lungs caused by viruses or bacteria inhaled with the air. It inflames the air sacs of the lungs, which fill with fluid, impairing respiration. Radiologists diagnose pneumonia by observing fluid-related abnormalities in the lungs on chest X-rays. Computer-aided detection/diagnosis (CAD) tools can assist radiologists by improving their diagnostic accuracy. Such CAD tools use neural networks trained on chest X-ray datasets to classify a chest X-ray as normal or infected with pneumonia. Convolutional neural networks (CNNs) have shown remarkable performance in object detection in images. The quaternion convolutional neural network (QCNN) is a generalization of the conventional CNN: it treats the three channels (R, G, B) of a color image as a single unit, which extracts more representative features and further improves classification. In this paper, we trained a quaternion residual network on a large, publicly available chest X-ray dataset from the Kaggle repository and obtained a classification accuracy of 93.75% and an F-score of 0.94. We also compared our performance with other CNN architectures and found that the quaternion residual network achieved higher classification accuracy than a real-valued residual network.
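The core idea of quaternion convolution can be illustrated with a minimal sketch (a hypothetical toy, not the paper's implementation): each RGB pixel is encoded as a pure quaternion (0, R, G, B), and the filter response is computed with the Hamilton product, so all three color channels are transformed jointly rather than filtered channel by channel.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def quaternion_conv_pixelwise(image_rgb, kernel):
    """Toy 1x1 quaternion 'convolution': treat each RGB pixel as the pure
    quaternion (0, R, G, B) and multiply it by a quaternion kernel, so the
    three color channels are mixed as a single unit."""
    h, w, _ = image_rgb.shape
    out = np.zeros((h, w, 4))
    for i in range(h):
        for j in range(w):
            pixel = np.array([0.0, *image_rgb[i, j]])
            out[i, j] = hamilton(kernel, pixel)
    return out

# A unit-norm quaternion kernel rotates the color vector without changing
# its magnitude -- one way quaternion filters preserve inter-channel structure.
img = np.random.rand(2, 2, 3)
k = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # unit quaternion
out = quaternion_conv_pixelwise(img, k)
```

Because quaternion norms are multiplicative, a unit kernel leaves the magnitude of every color vector unchanged, while a real-valued 1x1 convolution would scale each channel independently.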
The target detection ability of an infrared small target detection (ISTD) system is advantageous in many applications. The highly varied nature of the background image and the characteristics of small targets make the detection process extremely difficult. To address this issue, this study proposes an infrared patch-image model based on non-convex weighted nuclear norm minimization (WNNM) and robust principal component analysis (RPCA), termed IPNCWNNM. In state-of-the-art infrared patch-image (IPI) methods, edges in a cluttered background can be falsely detected as targets owing to excessive shrinkage of singular values (SVs). Therefore, non-convex WNNM and RPCA are utilized in this paper, assigning a different weight to each SV rather than the uniform weight applied to all SVs in existing nuclear norm minimization (NNM)-based IPI methods. The alternating direction method of multipliers (ADMM) is also employed in the mathematical formulation of the proposed work. The evaluations demonstrate that, in terms of background suppression and target detection proficiency, the proposed technique performs better than the cited baseline methods.
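The difference between uniform and weighted singular-value shrinkage can be sketched as a proximal step (a generic WNNM-style rule, not the paper's exact solver): in the weighted scheme each singular value gets its own threshold, chosen inversely proportional to its magnitude, so large SVs carrying the low-rank background are barely shrunk while small SVs from noise and clutter are suppressed.

```python
import numpy as np

def nnm_shrink(X, tau):
    """Standard nuclear-norm proximal step: shrink every singular
    value by the same threshold tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def wnnm_shrink(X, C, eps=1e-6):
    """Weighted shrinkage: the threshold for each singular value is
    C / (sigma_i + eps), so large singular values (background
    structure) are preserved while small ones are suppressed."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = C / (s + eps)            # reweighting: large sigma -> small threshold
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

# A synthetic low-rank 'background' patch matrix plus small noise:
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))  # rank 5
X = B + 0.01 * rng.standard_normal((20, 20))
L = wnnm_shrink(X, C=1.0)  # low-rank background estimate
```

With the uniform rule `nnm_shrink`, choosing tau large enough to kill the noise also eats into edge-carrying singular values; the weighted rule avoids that trade-off, which is the motivation stated in the abstract.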
Introduction: Sign language is the primary means of communication for speech-impaired people, but it is not understood by most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera cannot capture the 3D orientation or depth of a scene, whereas the Kinect captures 3D images, which makes classification more accurate. Result: The Kinect produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used Indian Sign Language gestures; our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ-Z2 board.
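The abstract's key claim, that depth disambiguates gestures whose RGB silhouettes coincide, can be shown with a tiny synthetic sketch (the arrays below are made-up stand-ins, not real Kinect data): two gestures share an identical RGB appearance, so only their depth maps separate them.

```python
import numpy as np

# Synthetic stand-ins: '2' and 'V' look the same in RGB but differ in depth.
rgb_two = np.ones((4, 4, 3))        # same RGB appearance...
rgb_vee = np.ones((4, 4, 3))        # ...for both gestures
depth_two = np.full((4, 4), 0.50)   # fingers held closer to the camera
depth_vee = np.full((4, 4), 0.80)   # fingers held farther back

def nn_classify(query, references, labels):
    """1-nearest-neighbour classifier on flattened images."""
    dists = [np.linalg.norm(query.ravel() - r.ravel()) for r in references]
    return labels[int(np.argmin(dists))]

# On RGB alone the two references are identical, so the distances tie and
# the prediction is arbitrary; on depth they separate cleanly.
pred_depth = nn_classify(np.full((4, 4), 0.52),
                         [depth_two, depth_vee], ["2", "V"])
```

A nearest-neighbour rule stands in here for the CNN only to keep the sketch short; the point is that the depth channel adds the separating information.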