Adversarial examples in the textual domain have recently attracted considerable research attention, yet the detection of such examples remains under-investigated. In this chapter, we propose a novel approach, inspired by the local outlier factor (LOF) algorithm, for detecting adversarial examples in natural language processing (NLP). In an empirical evaluation on real-world datasets, we use classifiers based on long short-term memory (LSTM), convolutional neural network (CNN), and transformer architectures to identify adversarial attacks. The results show that our technique outperforms recent state-of-the-art detection methods, namely DISP and FGWS, achieving an F1 detection score of up to 94.8%.
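To give a concrete (purely illustrative) sense of how LOF-style outlier scoring can flag anomalous inputs, the sketch below applies scikit-learn's `LocalOutlierFactor` in novelty mode to synthetic vectors standing in for sentence embeddings; the embedding dimensions, sample counts, and `n_neighbors` value are arbitrary assumptions, not the chapter's actual configuration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Hypothetical stand-ins for embeddings of clean (benign) training texts.
clean_embeddings = rng.normal(loc=0.0, scale=1.0, size=(200, 32))
# A candidate input whose embedding lies far from the clean distribution,
# as an adversarially perturbed text might.
candidate = rng.normal(loc=4.0, scale=1.0, size=(1, 32))

# novelty=True lets the fitted model score previously unseen points.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(clean_embeddings)

# score_samples: lower (more negative) means more outlying.
score = lof.score_samples(candidate)[0]
# predict returns -1 for points classified as outliers.
is_flagged = lof.predict(candidate)[0] == -1
print(f"LOF score: {score:.3f}, flagged as outlier: {is_flagged}")
```

In a real detector, the synthetic vectors would be replaced by embeddings produced by the classifier under attack (e.g. an LSTM, CNN, or transformer encoder), with the LOF threshold tuned on held-out data.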