Comprehensive understanding of an organism requires that we understand the contributions of most, if not all, of its genes. Classical genetic approaches to this problem have involved systematic deletion of each gene in the genome, and comprehensive deletion collections exist only for a few well-studied model organisms. We took a different approach, harnessing in vivo transposition coupled with deep sequencing to identify >500,000 different mutations, one per cell, in the prevalent human fungal pathogen Candida albicans and to map their positions across the genome. The transposition approach is efficient and far less labor-intensive than classical gene-by-gene deletion. Here, we describe the production and analysis (aided by machine learning) of a large collection of mutants and the comprehensive identification of 1,610 C. albicans genes that are essential for growth under standard laboratory conditions. Among these essential genes, we identify those that are also essential in two distantly related model yeasts, as well as those that are conserved in all four major human fungal pathogens but not in the human genome. This list of genes whose functions are important for the survival of the pathogen provides a good starting point for the development of new antifungal drugs, which are sorely needed given the emergence of fungal pathogens with elevated resistance and/or tolerance to the currently limited set of available antifungal drugs.
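As a rough illustration of the kind of analysis involved (not the authors' actual pipeline, whose inputs, statistics, and thresholds are not given here), the following Python sketch flags genes with unusually low transposon insertion density as candidate essential genes. The function name, input format, and density threshold are all hypothetical.

```python
# Illustrative sketch only: calling candidate essential genes from
# transposon insertion sites. Gene names, the 0.05 insertions/bp
# threshold, and the input format are hypothetical.
from collections import defaultdict

def call_candidate_essentials(insertions, gene_coords, min_density=0.05):
    """insertions: iterable of (chrom, pos) insertion sites.
    gene_coords: dict gene -> (chrom, start, end), inclusive coordinates.
    Returns genes whose insertions per bp fall below min_density."""
    counts = defaultdict(int)
    for chrom, pos in insertions:
        for gene, (g_chrom, start, end) in gene_coords.items():
            if g_chrom == chrom and start <= pos <= end:
                counts[gene] += 1
    candidates = []
    for gene, (g_chrom, start, end) in gene_coords.items():
        length = end - start + 1
        if counts[gene] / length < min_density:
            candidates.append(gene)
    return candidates

# Toy usage: GENE_A is densely hit, GENE_B receives no insertions.
genes = {"GENE_A": ("chr1", 0, 99), "GENE_B": ("chr1", 100, 199)}
hits = [("chr1", p) for p in range(0, 100, 5)]  # hits only in GENE_A
print(call_candidate_essentials(hits, genes))   # -> ['GENE_B']
```

In a real screen, a classifier (the abstract mentions machine learning) would replace this single fixed threshold and account for gene length, insertion bias, and sequencing depth.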
Candida albicans is a prevalent human fungal commensal and also a pathogen that causes life-threatening systemic infections. Treatment failures are frequent because few therapeutic antifungal drug classes are available and because drug resistance and tolerance limit drug efficacy.
Cryptococcosis is a globally distributed invasive fungal infection caused by Cryptococcus neoformans or Cryptococcus gattii. Only three classes of therapeutic drugs are clinically available for treating cryptococcosis: polyenes (amphotericin B), azoles (fluconazole), and pyrimidine analogues (flucytosine).
Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. We discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.
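To make the class of models the survey covers concrete, here is a minimal Hopfield-style autoassociative memory in Python: Hebbian outer-product storage and iterative sign-threshold retrieval, where each update uses only information locally available to a neuron. This is a textbook sketch under standard assumptions (dense ±1 patterns, synchronous updates), not the survey's specific formulation.

```python
# Minimal Hopfield-style autoassociative memory (illustrative sketch).
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns is a (num_patterns, n) ±1 array."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, x, steps=20):
    """Iterative retrieval: repeatedly threshold the local weighted input."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1       # break ties deterministically
        if np.array_equal(x_new, x):
            break                   # reached a fixed point
        x = x_new
    return x

rng = np.random.default_rng(0)
P = rng.choice([-1, 1], size=(5, 100))   # 5 stored patterns, 100 neurons
probe = P[0].copy()
probe[:10] *= -1                         # corrupt 10 of 100 bits
W = train_hopfield(P)
print(np.array_equal(recall(W, probe), P[0]))  # typically True: pattern restored
```

Well below capacity (here 5 patterns for 100 neurons), retrieval from a lightly corrupted probe usually converges to the stored pattern; the Willshaw and Potts variants the survey treats replace the weight rule and neuron states but keep this store-then-iterate structure.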
A major goal in neuroscience is to elucidate the principles by which memories are stored in a neural network. Here, we have systematically studied how the four types of associative memories (short- and long-term memories, each formed using positive and negative associations) are encoded within the compact neural network of C. elegans worms. Interestingly, short-term, but not long-term, memories are evident in the sensory system. Long-term memories are relegated to inner layers of the network, allowing the sensory system to resume innate functionality. Furthermore, a small set of sensory neurons is allocated for coding short-term memories, a design that can increase memory capacity and limit non-innate behavioral responses. Notably, individual sensory neurons may code for the conditioned stimulus or the experience valence. Interneurons integrate this information to modulate animal behavior upon memory reactivation. This comprehensive study reveals basic principles by which memories are encoded within a neural network and highlights the central roles of sensory neurons in memory formation.
Introduction. A distributed representation (DR) of data is a form of vector representation in which each object is represented by a set of vector components, and each vector component can belong to the representations of many objects. In ordinary vector representations, the meaning of each component is defined, which cannot be said of DRs; however, the similarity of DR vectors reflects the similarity of the objects they represent. DR is a neural network approach based on modeling the representation of information in the brain, which grew out of ideas about "distributed" or "holographic" representations. DRs have a large information capacity, allow the use of a rich arsenal of methods developed for vector data, scale well for processing large amounts of data, and have a number of other advantages. Methods for transforming data into DRs have been developed for data of various types, from scalars and vectors to graphs. The purpose of the article is to provide an overview of part of the work of the Department of Neural Information Processing Technologies (International Center) in the field of neural network distributed representations. The approach develops the ideas of Nikolai Mikhailovich Amosov and his scientific school of modeling the structure and functions of the brain. Scope. The formation of distributed representations from the original vector representations of objects using random projection is considered. With the help of DRs, it is possible to efficiently estimate the similarity of the original objects represented by numerical vectors. The use of DRs also allows the development of regularization methods for obtaining stable solutions of discrete ill-posed inverse problems, increasing the computational efficiency and accuracy of their solution, and analyzing the accuracy of the solution analytically. DRs thus increase the efficiency of the information technologies that apply them. Conclusions. DRs of various data types can be used to improve the efficiency and intelligence level of information technologies. DRs have been developed both for weakly structured data, such as vectors, and for complex structured representations of objects, such as sequences and graphs of knowledge-base situations (episodes). Transforming different types of data into the DR vector format allows unifying the basic information technologies for their processing and achieving good scalability as the amount of processed data grows. In the future, distributed representations will naturally combine information on structure and semantics to create computationally efficient and qualitatively new information technologies in which relational structures from knowledge bases are processed by the similarity of their DRs. The neurobiological relevance of distributed representations opens up the possibility of creating intelligent information technologies based on them that function similarly to...
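To make the random-projection step concrete, the following short Python sketch shows how inner products and distances between high-dimensional vectors are approximately preserved in a lower-dimensional DR formed by a Gaussian random projection. The dimensions and the 1/sqrt(k) scaling convention are illustrative assumptions, not the department's specific method.

```python
# Illustrative sketch: similarity of original vectors is approximately
# preserved in their random-projection DRs (dimensions are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
d, k = 1000, 128                               # original and DR dimensions
R = rng.standard_normal((k, d)) / np.sqrt(k)   # random projection matrix

x, y = rng.standard_normal(d), rng.standard_normal(d)
x_dr, y_dr = R @ x, R @ y                      # distributed representations

# Inner products and Euclidean distances agree up to small random error,
# so similarity of the originals can be estimated from the DRs alone.
print(np.dot(x, y), np.dot(x_dr, y_dr))
print(np.linalg.norm(x - y), np.linalg.norm(x_dr - y_dr))
```

This is the standard Johnson-Lindenstrauss-style argument: because the estimate's error shrinks as k grows, similarity queries can be answered in the k-dimensional DR space instead of the original d-dimensional space.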