Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., their high-throughput sequencing rate and relatively low cost, the assembly of the reads they produce still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, graph construction, graph simplification, and postprocessing filtering. We discuss these stages as a framework for data analysis and processing, and survey the variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that current assemblers face in the next-generation environment in order to determine the state of the art. Finally, we recommend a layered-architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.
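The graph construction stage described above is most commonly realized as a de Bruijn graph over k-mers. A minimal sketch of that idea (function names and the choice of k are illustrative, not from the review):

```python
from collections import defaultdict

def build_de_bruijn_graph(reads, k):
    """Build a toy de Bruijn graph: nodes are (k-1)-mers,
    and each k-mer in a read contributes one directed edge
    from its prefix (k-1)-mer to its suffix (k-1)-mer."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["ATGGC", "TGGCA", "GGCAT"]
graph = build_de_bruijn_graph(reads, k=3)
# e.g. node "AT" gains an edge to "TG" from the k-mer "ATG"
```

The simplification stage then collapses unbranched paths and removes error-induced tips and bubbles from such a graph before contigs are emitted.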
Several cancellable biometrics (CBs) techniques have been proposed to protect biometric data and maintain users' privacy. Although such techniques can withstand brute-force and/or pre-image attacks, they are vulnerable to correlation attacks. In this study, the authors propose a novel correlation attack-resistant CBs scheme that is based on a convolution operation and a bidirectional associative memory (BAM) neural network. The proposed scheme utilises BAM to bind biometric templates to random bit-strings in a secure and efficient manner. These random bit-strings are then employed to derive cancellable templates from the true templates linked to them via BAM weights, which are safely stored with the generated cancellable template in the system database. In this study, linear convolution is adopted as the cancellable transformation process. The result of convolving the original biometric template with the transformation key is binarised according to a predefined threshold to thwart blind de-convolution. The security of the proposed scheme against different attacks is analysed and experiments on the CASIA-IrisV3-Interval dataset illustrate the efficacy of the proposed scheme.
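The cancellable transformation step described in the abstract (linear convolution of the template with a transformation key, followed by binarisation against a predefined threshold) can be sketched as follows; the function name, key size, and threshold value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cancellable_template(template_bits, key, threshold=0.0):
    """Toy sketch of the transformation: linearly convolve the binary
    template with a random key, then binarise the result against a
    predefined threshold (binarisation thwarts blind de-convolution)."""
    conv = np.convolve(template_bits.astype(float), key, mode="full")
    return (conv > threshold).astype(np.uint8)

rng = np.random.default_rng(42)
template = rng.integers(0, 2, size=32)   # toy binary iris-code segment
key = rng.standard_normal(16)            # random transformation key
protected = cancellable_template(template, key)
# Revoking a compromised template amounts to issuing a new random key
```

Because only the binarised convolution output is stored, recovering the original template would require inverting a lossy, thresholded convolution, which is the property the scheme relies on.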
Recommender systems are needed to find food items of one's interest. We review recommender systems and recommendation methods. We propose a food personalization framework based on adaptive hypermedia. We extend the Hermes framework with food recommendation functionality. We combine the TF-IDF term extraction method with the cosine similarity measure. Healthy heuristics and a standard food database are incorporated into the knowledge base. Based on the performed evaluation, we conclude that semantic recommender systems in general outperform traditional recommender systems with respect to accuracy, precision, and recall, and that the proposed recommender has a better F-measure than existing semantic recommenders.
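The TF-IDF plus cosine-similarity combination mentioned above can be sketched minimally as follows (the helper names and the toy documents are illustrative, not from the paper):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Toy TF-IDF: term frequency weighted by log inverse document frequency."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

docs = ["apple pie recipe", "apple tart recipe", "car engine oil"]
vecs = tf_idf_vectors(docs)
# The two food documents score higher with each other than with the car one
```

In the described recommender, such similarity scores between item descriptions and a user profile would be combined with the health heuristics to rank candidate food items.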
https://github.com/SaraEl-Metwally/LightAssembler
Contact: sarah_almetwally4@mans.edu.eg
Supplementary information: Supplementary data are available at Bioinformatics online.
Arabic is the mother tongue in 23 countries and of more than 350 million people. It is the language of the Holy Quran; therefore, many non-Arabic Islamic countries, such as Pakistan, teach Arabic as a second language. Nevertheless, it is observable that the Arabic content on the Web is less than it should be. The evolution of the Semantic Web (SW) has added a new dimension to this problem. This paper is an attempt to characterize the problem and its causes, and to open avenues for thinking about solutions. The survey presented in this paper is concerned with SW applications for the Arabic language in the domains of ontology construction and utilization, Arabic WordNet (AWN) exploitation and enrichment, Arabic named entity extraction, semantic representation of the Holy Quran and Islamic knowledge, and Arabic semantic search engines. In fact, the study revealed serious deficiencies in dealing semantically with the Arabic language, mainly owing to the rarity of tools that can support the Arabic script. Furthermore, Arabic resources, when available, are not free. Moreover, there are many technical problems in handling the Arabic context semantically. Therefore, most of the developed applications are not sufficiently proficient. However, due to the significance of the Arabic language, it is inevitable that these deficiencies be overcome in order to place Arabic in the category of semantically machine-interpretable languages, rather than merely textually processable ones. This way, we can exploit the power of Semantic Web features to extract the essence of the knowledge residing in Arabic web documents, going beyond dealing with their rigid texts.
A novel deep architecture, the Thresholding Convolutional Neural Network (ThCNN), is proposed in this paper. It is a simple and effective method for regularizing the feature maps in the early layers of a Convolutional Neural Network (CNN). One of the issues with deep learning is that features in the early layers lack robustness and discriminativeness. In this paper, we compute an optimal global threshold to determine which features are passed to the next layers. We then evaluate ThCNN on the MNIST dataset, comparing it with a CNN over multiple trained models. ThCNN yields decent accuracy compared to a traditional CNN: 99.5% versus 99.3%.
Survey research is appropriate and necessary to address certain types of research questions. This paper aims to provide a general overview of textual similarity in the literature. Measuring textual similarity plays an increasingly important role in related topics such as text classification, recovery of specific information from data, clustering, topic retrieval, subject tracking, question answering, essay grading, summarization, and the now-trending Conversational Agents (CAs), programs that interact with humans through natural-language conversation. Finding the similarity between terms is the essential part of textual similarity, and it is then used as a major phase for sentence-level, paragraph-level, and script-level similarities. In particular, we are concerned with textual similarity in Arabic. Applying Natural Language Processing (NLP) tasks to Arabic is very challenging indeed, as the language has many characteristics that pose difficulties. The many approaches that have been presented for measuring textual similarity in Arabic text are reviewed and compared in this paper.