Even though the deformable part model (DPM) [3] is one of the most popular human detection methods [1], practical DPM human detection on mobile devices remains difficult because of its computational overhead. On the other hand, many objectness measure (a.k.a. detection proposal) methods have recently been proposed [4]. An objectness measure generates windows that are likely to contain generic objects, which avoids an exhaustive search. Its goal of improving detection speed makes it a natural complement to DPM human detection on mobile devices. However, most objectness measure methods reviewed and evaluated in [4] are not well suited for real-time detection: all of them, except for the binarized normed gradients (BING) method [2], require more than 250 milliseconds per image.

For this reason, we propose a more efficient and accurate method for estimating human windows using normed gradients together with a color feature, in order to improve the detection rate within a short time budget. In this paper, we are interested in humans rather than generic objects, so we refer to objectness estimation for humans simply as personness estimation.

To rapidly determine the priority order of each feature vector of an image with a supervised approach, we make use of a skin color model and the fast NG feature (or simply BING) proposed by Cheng et al. [2]. An NG feature is a 64-dimensional vector describing the magnitude (a.k.a. norm) of the gradients of an 8 × 8 downsampled image. To examine every image window for the objectness measure, the gradient magnitude map is resized to 36 predefined quantized shapes (or quantized sizes in [2]); the results are called NG maps. By computing the correlation between the NG maps and the learned linear model w ∈ R^{8×8}, we obtain a filter score for each window. A quantized shape is ignored in the prediction stage if it has 50 or fewer positive samples in the training stage.
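The NG feature and filter score described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the gradient operator, the nearest-neighbour downsampling, and the randomly initialized stand-in for the learned model `w` are all simplifying assumptions.

```python
import numpy as np

def ng_feature(window, size=8):
    """Normed-gradient (NG) feature: gradient magnitudes of a window,
    downsampled to size x size and flattened into a 64-D vector.
    `window` is a 2-D grayscale array (illustrative sketch only)."""
    gy, gx = np.gradient(window.astype(np.float64))
    # |gx| + |gy|, clipped to the 8-bit range as in BING.
    mag = np.minimum(np.abs(gx) + np.abs(gy), 255.0)
    # Nearest-neighbour downsample to an 8x8 normed-gradient map.
    rows = np.linspace(0, mag.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, mag.shape[1] - 1, size).astype(int)
    return mag[np.ix_(rows, cols)].ravel()

def filter_score(ng, w):
    """Correlation of the 64-D NG feature with the linear model w (8x8)."""
    return float(np.dot(ng, w.ravel()))

# Demo on a synthetic window; w stands in for the learned SVM weights.
window = np.random.default_rng(0).integers(0, 256, (32, 32))
w = np.random.default_rng(1).standard_normal((8, 8))
s = filter_score(ng_feature(window), w)
```

In the full pipeline this score would be computed for every window over all 36 quantized shapes of the NG map, rather than for a single window as shown here.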
The final objectness score is calculated as follows:

o_(i,x,y) = v_i · s_(i,x,y) + t_i,

where v_i, t_i ∈ R are the learned coefficient and bias term of each quantized shape i, and s_(i,x,y) is the filter score at position (x, y). s_(i,x,y) is obtained using the first-stage linear SVM, and v_i and t_i are obtained using the second-stage linear SVM. Even though SVM training on various object categories makes the original linear model w ∈ R^{8×8} work for generic objects, it degrades the performance of the SVM classifier for single-category object detection. Therefore, we generate a new linear model w_p ∈ R^{8×8}, shown in Fig. 1(a), by training the linear SVM only on humans that our DPM detector can detect. This restricted training technique also reduces the number of quantized shapes to consider. Fig. 1(a) illustrates that w_p ∈ R^{8×8} places noticeably more confidence in the shoulder and head regions. After training the linear SVM on the VOC 2007 dataset, we choose the four points (3, 1), (3, 2), (4, 1), (4, 2) to extract skin color information; the head and neck are usually located at these four points, as shown in the figure.
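The second-stage calibration above is a simple affine map of the filter score. A minimal sketch, with illustrative (not learned) values for v_i and t_i:

```python
def personness_score(s_ixy, v_i, t_i):
    """Final objectness (personness) score for quantized shape i at (x, y):
    o_(i,x,y) = v_i * s_(i,x,y) + t_i, the second-stage linear SVM calibration."""
    return v_i * s_ixy + t_i

# v_i and t_i are illustrative stand-ins for the learned per-shape parameters.
score = personness_score(s_ixy=12.5, v_i=0.8, t_i=-3.0)
```

Because v_i and t_i are learned per quantized shape, this calibration makes filter scores from differently sized windows comparable before ranking the proposals.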