Deep Neural Networks (DNNs) are extensively deployed in today's safety-critical autonomous systems thanks to their high performance. However, they are known to make mistakes unpredictably: a DNN used for perception may misclassify an object, and one used for planning and control may issue unsafe control commands. One common cause of such unpredictable mistakes is Out-of-Distribution (OOD) inputs, i.e., test inputs that fall outside the distribution of the training dataset. In this paper, we present a framework for OOD detection based on outlier detection in the hidden layers of a DNN, applying the Isolation Forest (IF) and Local Outlier Factor (LOF) techniques. Extensive experimental evaluation indicates that LOF is a promising method, both in terms of the machine-learning metrics of precision, recall, F1 score, and accuracy, and in terms of computational efficiency at test time.
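
To make the approach concrete, below is a minimal sketch of hidden-layer outlier detection using scikit-learn's IsolationForest and LocalOutlierFactor. It assumes hidden-layer activations have already been extracted from the DNN (e.g., via a forward hook on the chosen layer); the random arrays standing in for those activations are placeholders, and the specific hyperparameters are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Placeholder data: hidden-layer activations for the in-distribution
# training set and for incoming test inputs, each of shape
# (n_samples, n_features). In practice these would be collected from
# a chosen hidden layer of the DNN.
train_acts = np.random.randn(1000, 128)
test_acts = np.random.randn(10, 128)

# Isolation Forest: fit on in-distribution activations, then label
# test activations that isolate quickly as outliers (-1) vs. inliers (+1).
iforest = IsolationForest(n_estimators=100, random_state=0).fit(train_acts)
if_labels = iforest.predict(test_acts)

# Local Outlier Factor in novelty mode: fit on in-distribution
# activations, then score test activations by their local density
# deviation relative to the training neighborhood.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(train_acts)
lof_labels = lof.predict(test_acts)

print("IF  OOD flags:", if_labels)   # -1 marks suspected OOD inputs
print("LOF OOD flags:", lof_labels)
```

At test time, an input flagged as an outlier in the activation space of the monitored layer is treated as OOD; since LOF only needs a neighborhood query per input once fitted, this check can run alongside inference.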