In large-scale enterprises, vast quantities of textual information are stored across corporate repositories and intranet websites. Traditional search techniques, lacking context sensitivity, often fail to retrieve pertinent data efficiently. Modern techniques that use distributed representations of words require considerable training data and computation, presenting a financial and operational burden. Generative models for information search suffer from problems of transparency and hallucination, which can be detrimental, especially for organizations and stakeholders who rely on search results for critical business operations. This paper presents a non-goal-oriented conversational agent based on a collection of finite state machines, together with an information retrieval model for text search over an extensive collection of stored corporate documents and intranet websites. We use distributed word representations derived from the BERT model, which allow for contextual searches, and we minimally fine-tune BERT on a multi-label text classification task specific to a closed-domain knowledge base. Measured by Discounted Cumulative Gain (DCG), our information retrieval model, which combines distributed embeddings from the minimally trained BERT model with Word Mover's Distance for calculating topic similarity, returns results more relevant to user queries than BERT embeddings with cosine similarity or BM25. Our architecture promises to substantially expedite and improve the accuracy of information retrieval in closed-domain systems without requiring a massive training dataset or expensive computing, while maintaining transparency.
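To make the comparison concrete: DCG at rank p is conventionally computed as DCG_p = sum_{i=1}^{p} rel_i / log2(i + 1), so rankings that place highly relevant documents earlier score higher. The sketch below (not the authors' implementation) illustrates the core contrast the abstract describes: ranking documents against a query by Word Mover's Distance over per-token BERT embeddings versus cosine similarity over mean-pooled embeddings. The model name bert-base-uncased, the sample documents, and the use of the relaxed WMD lower bound (each token matched to its nearest counterpart, rather than solving the full optimal-transport problem) are illustrative assumptions.

```python
# Minimal sketch: WMD-style vs. cosine ranking over BERT embeddings.
# Assumptions: bert-base-uncased stands in for the paper's fine-tuned
# model; relaxed WMD approximates the exact Word Mover's Distance.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def token_embeddings(text: str) -> np.ndarray:
    """Return one contextual BERT vector per non-special token."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    return hidden[1:-1].numpy()  # drop [CLS] and [SEP]

def relaxed_wmd(a: np.ndarray, b: np.ndarray) -> float:
    """Relaxed WMD lower bound: each token 'travels' to its nearest
    counterpart; take the max over both directions instead of solving
    the full transport problem (an assumption; the paper's method
    computes topic similarity with Word Mover's Distance)."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(dists.min(axis=1).mean(), dists.min(axis=0).mean())

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between mean-pooled document embeddings."""
    u, v = a.mean(axis=0), b.mean(axis=0)
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical closed-domain snippets for illustration only.
docs = [
    "Submit travel reimbursement forms through the finance portal.",
    "The cafeteria menu is updated every Monday morning.",
]
query = "How do I get my travel expenses refunded?"

q = token_embeddings(query)
for doc in docs:
    d = token_embeddings(doc)
    print(f"WMD={relaxed_wmd(q, d):.3f}  "
          f"cos={cosine_distance(q, d):.3f}  {doc[:45]}")
```

Lower distances indicate closer topical matches; ranking documents by either distance and scoring the resulting lists with DCG is one way to reproduce the kind of comparison the abstract reports.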