Machine Learning has been the quintessential solution for many AI problems, but learning models remain heavily dependent on their specific training data. Some learning models can incorporate prior knowledge through a Bayesian setup, but such models cannot access organized world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs, depending on the task, using an attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method scales well with the amount of prior information to be processed and can be applied to any generic NLP task. Using this method, we show significant performance improvements in text classification with the 20Newsgroups (News20) and DBPedia datasets, and in natural language inference with the Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained with substantially less labeled training data when it has access to organized world knowledge in the form of a knowledge base.
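The abstract describes attending over retrieved KG fact triples but gives no mechanics. A minimal sketch of soft dot-product attention over fact-triple embeddings, under the assumption that facts and the task context are embedded in a shared vector space (`attend_over_facts`, `query`, and `fact_embs` are illustrative names, not the paper's API):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_over_facts(query, fact_embs):
    """Soft attention over retrieved KG fact-triple embeddings.

    query:     (d,) task-specific context vector (hypothetical)
    fact_embs: (n, d) embeddings of n candidate fact triples
    Returns a (d,) weighted summary of the supporting facts.
    """
    scores = fact_embs @ query   # dot-product relevance of each fact
    weights = softmax(scores)    # normalize scores to attention weights
    return weights @ fact_embs   # weighted sum of fact embeddings

rng = np.random.default_rng(0)
q = rng.normal(size=8)           # task context, e.g. a sentence encoding
facts = rng.normal(size=(5, 8))  # five retrieved fact-triple embeddings
summary = attend_over_facts(q, facts)
print(summary.shape)  # (8,)
```

The summary vector can then be concatenated with the task representation before the final classifier; clustering entities and relations, as the abstract proposes, shrinks `fact_embs` from all triples to cluster representatives, reducing the attention space.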
We propose a novel cloud-based data storage solution framework named Data Vaporizer (DV). The proposed framework provides several unique features: storing data across multiple clouds or storage zones, resistance against organized vendor attacks, data integrity and confidentiality through client-side processing, fault tolerance against the failure of one or more cloud storage locations, and avoidance of vendor lock-in of data. Data Vaporizer is highly configurable to meet various client requirements for data encryption, compliance with industry standards, and fault-tolerance constraints, depending on the nature and sensitivity of the data. To enhance security and reliability, particularly to protect data against malicious attacks and to secure key management in the cloud, DV uses advanced techniques of secret sharing of the keys. The architecture, the optimality of data placement, and the efficient key-management algorithm of DV ensure that the solution is highly scalable. The data footprint, and hence the cost incurred by our storage solution, is minimal considering the benefits provided. The initial response to the adoption of DV in actual client scenarios is promising.
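The abstract mentions secret sharing of keys without detail. Assuming a Shamir-style (k, n) threshold scheme, the standard technique for this setting, a minimal sketch might look as follows: the key is split into n shares (one per cloud or trustee), and any k of them reconstruct it, so no single vendor ever holds the key. All names here are illustrative, not DV's actual interface.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime; all arithmetic is in this field

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME-2, PRIME) is the modular inverse of den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789  # an encryption key encoded as a field element
shares = make_shares(key, k=3, n=5)
assert recover(shares[:3]) == key  # any 3 of the 5 shares suffice
```

Distributing the shares across independent storage vendors is what gives resistance to an organized attack on any single vendor: fewer than k colluding locations learn nothing about the key.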