Deep learning-based image signal processor (ISP) models for mobile cameras can generate high-quality images that rival those of professional DSLR cameras. However, their computational demands often make them unsuitable for mobile settings. Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFAs) such as Quad Bayer, Nona Bayer, and Q×Q Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on standard Bayer CFAs. In this study, we present PyNET-Q×Q, a lightweight demosaicing model derived from the original PyNET and designed specifically for the Q×Q Bayer CFA pattern. We also propose a knowledge distillation method called progressive distillation to train the reduced network more effectively. As a result, PyNET-Q×Q contains less than 2.5% of the parameters of the original PyNET while preserving its performance. Experiments on Q×Q images captured by a prototype Q×Q camera sensor show that PyNET-Q×Q outperforms conventional algorithms in terms of texture and edge reconstruction, despite its significantly reduced parameter count. Code and partial datasets can be found at https://github.com/Minhyeok01/PyNET-QxQ.

INDEX TERMS Bayer filter, color filter array (CFA), demosaicing, image signal processor (ISP), knowledge distillation, non-Bayer CFA, Q×Q Bayer CFA.
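To illustrate the general idea of compressing a large ISP network with knowledge distillation, the sketch below shows a single training step in which a small student model is supervised by both the ground truth and a frozen teacher. This is a generic illustration under assumed settings, not the paper's progressive distillation procedure; the `TinyISP` placeholder model, the `distillation_step` helper, and the weighting factor `alpha` are hypothetical.

```python
# Minimal sketch of a knowledge-distillation training step for a compressed
# demosaicing network. Generic illustration only; model definitions, the
# loss weighting `alpha`, and tensor shapes are assumptions for this example.
import torch
import torch.nn as nn

class TinyISP(nn.Module):
    """Placeholder convolutional demosaicing model (stand-in, not PyNET-QxQ)."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def distillation_step(student, teacher, raw, target, optimizer, alpha=0.5):
    """One step: match the ground truth and the frozen teacher's output."""
    teacher.eval()
    with torch.no_grad():
        teacher_out = teacher(raw)
    student_out = student(raw)
    loss = (1 - alpha) * nn.functional.l1_loss(student_out, target) \
         + alpha * nn.functional.l1_loss(student_out, teacher_out)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random tensors standing in for packed raw patches and RGB targets.
student, teacher = TinyISP(channels=8), TinyISP(channels=64)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
raw = torch.randn(2, 4, 64, 64)      # 4-channel packed raw patch (assumed layout)
target = torch.randn(2, 3, 64, 64)   # RGB ground truth
print(distillation_step(student, teacher, raw, target, optimizer))
```

In this sketch the student sees a fixed blend of the two supervision signals throughout training; the progressive distillation described in the paper instead schedules how the reduced network is trained, so the code should be read only as background on the basic teacher-student setup.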