Significant progress has been achieved in object detection applications such as face detection, mainly due to recent developments in deep learning-based approaches, especially in the computer vision domain. However, deploying deep learning methods requires substantial computational power, typically graphics processing units (GPUs). These computational requirements make such methods unsuitable for deployment on platforms with limited resources, such as edge devices. In this paper, we present an experimental framework for systematically reducing a model's size, aiming to obtain a small model suitable for deployment in resource-limited environments. This is achieved through systematic layer removal and filter resizing. Extensive experiments were carried out using the "You Only Look Once" model (YOLOv3-tiny). For evaluation, we used two public datasets to assess the impact of model-size reduction on a common computer vision task, face detection. Results clearly show that a significant reduction in model size has only a marginal impact on overall performance. These results open new directions for further investigation and research toward accelerating the use of deep learning models on edge devices.
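To make the two reduction operations concrete, the sketch below estimates how layer removal and filter resizing shrink a convolutional model's parameter count. The filter counts and the 0.5 width factor are illustrative assumptions for a YOLOv3-tiny-like backbone, not the paper's actual architecture or reduction schedule.

```python
# Hedged sketch: parameter-count estimate for systematic model-size
# reduction via filter resizing and layer removal.
# The filter list and scaling factor are hypothetical, chosen only to
# resemble a YOLOv3-tiny-style convolutional backbone.

def conv_params(in_ch, out_ch, k=3):
    """Weights + biases of a single k x k convolution layer."""
    return in_ch * out_ch * k * k + out_ch

def total_params(filters, in_ch=3):
    """Total parameters of a sequential conv stack with the given widths."""
    total = 0
    for out_ch in filters:
        total += conv_params(in_ch, out_ch)
        in_ch = out_ch
    return total

# Hypothetical per-layer filter counts for the baseline backbone.
baseline = [16, 32, 64, 128, 256, 512, 1024]

# Filter resizing: halve every layer's width (assumed factor of 0.5).
resized = [max(1, f // 2) for f in baseline]

# Layer removal: drop the last (widest) layer as an example.
pruned = resized[:-1]

p0 = total_params(baseline)
p1 = total_params(pruned)
print(f"baseline params: {p0:,}")
print(f"reduced params:  {p1:,} ({100 * p1 / p0:.1f}% of baseline)")
```

Counting parameters this way makes the trade-off measurable: each halving of layer widths cuts a conv layer's weights by roughly a factor of four, which is why modest structural changes yield large size reductions.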