In a non-stationary environment, newly received data may exhibit knowledge patterns that differ from those of the data used to train learning models. As time passes, the performance of these models becomes increasingly unreliable. This problem, known as concept drift, is a common issue in real-world domains. Concept drift detection has attracted increasing attention in recent years; however, hardly any existing methods pay attention to small regional drifts, and their detection accuracy may vary with the choice of statistical significance test. To address these problems, this paper presents a novel concept drift detection method based on regional density estimation, named nearest neighbor-based density variation identification (NN-DVI). It consists of three components. The first is a k-nearest neighbor-based space-partitioning schema (NNPS), which transforms unmeasurable discrete data instances into a set of shared subspaces for density estimation. The second is a distance function that accumulates the density discrepancies in these subspaces and quantifies the overall discrepancy. The third is a tailored statistical significance test by which the confidence interval of a concept drift can be accurately determined. The distance applied in NN-DVI is sensitive to regional drift and has been proven to follow a normal distribution. As a result, both the accuracy and the false-alarm rate of NN-DVI are statistically guaranteed. In addition, the method has been evaluated on several benchmarks, including both synthetic and real-world datasets. The overall results show that NN-DVI outperforms existing methods on concept-drift-detection tasks.
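To make the three-component pipeline concrete, the sketch below is a simplified, illustrative NN-DVI-style detector, not the authors' exact algorithm: each instance's k-nearest-neighbor "particle set" plays the role of a shared subspace, per-subspace density discrepancies between the old and new data windows are accumulated into a single distance, and significance is assessed with a permutation test swapped in for the paper's normal-distribution-based test. All function names and parameters (`knn_particle_sets`, `nn_density_distance`, `detect_drift`, `k`, `n_perm`) are assumptions introduced for this example.

```python
import numpy as np

def knn_particle_sets(data, k=5):
    """Boolean membership matrix M: M[i, j] is True iff instance j is among
    the k nearest neighbors of instance i (each row is one 'subspace')."""
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
    nn = np.argsort(dist, axis=1)[:, :k]          # k nearest, self included
    m = np.zeros((len(data), len(data)), dtype=bool)
    m[np.repeat(np.arange(len(data)), k), nn.ravel()] = True
    return m

def nn_density_distance(labels, m):
    """Accumulate per-subspace density discrepancies between the two windows.
    labels: boolean array, True marks instances from the new window."""
    old_counts = m[:, ~labels].sum(axis=1)        # old-window members per subspace
    new_counts = m[:, labels].sum(axis=1)         # new-window members per subspace
    total = old_counts + new_counts               # = k for every subspace
    return np.mean(np.abs(old_counts - new_counts) / total)

def detect_drift(old_win, new_win, k=5, n_perm=500, alpha=0.01, seed=0):
    """Return (drift_detected, p_value) via a permutation test on the distance."""
    rng = np.random.default_rng(seed)
    data = np.vstack([old_win, new_win])
    labels = np.zeros(len(data), dtype=bool)
    labels[len(old_win):] = True
    m = knn_particle_sets(data, k)
    observed = nn_density_distance(labels, m)
    # Null distribution: shuffle window labels, re-measure the same distance.
    null = np.array([nn_density_distance(rng.permutation(labels), m)
                     for _ in range(n_perm)])
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return p_value < alpha, p_value
```

Because particle sets are local, a drift confined to a small region still inflates the discrepancy in the subspaces covering that region, which is the intuition behind NN-DVI's sensitivity to regional drift; the permutation test here trades the paper's closed-form normality result for simplicity.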