Continuous progression of Moore's law brings new challenges in metrology and defect inspection. As the semiconductor industry embraces High-Numerical-Aperture Extreme Ultraviolet Lithography (High-NA EUVL), this technology is being evaluated industry-wide for potential pitch reduction in future nodes. One of the primary hurdles in implementing High-NA EUVL in High Volume Manufacturing (HVM) is its low depth of focus. Consequently, suppliers of resist materials are compelled to opt for thin resists and/or new underlayers and hardmasks. Experimental combinations of thin resist materials with novel underlayers and hardmasks can pose signal detection challenges due to a poor Signal-to-Noise Ratio (SNR). In such a scenario, manual classification of these nano-scale defects is limited by the time and workforce required, and the robustness and generalizability of its outcomes are also questionable. In recent years, vision-based machine learning (ML) algorithms have emerged as an effective solution for image-based semiconductor defect inspection applications. However, developing an ML model that remains robust across various image resolutions without explicit training is still a challenge for nano-scale defect inspection. The goal of this research is to propose a scale-invariant Automated Defect Classification and Detection (ADCD) framework capable of upscaling images, thereby addressing this issue. We propose an improved ADCD framework, SEMI-SuperYOLO-NAS, which builds upon the baseline YOLO-NAS architecture. This framework integrates a Super-Resolution (SR) assisted branch that helps the defect-detection backbone learn high-resolution (HR) features, particularly for detecting nano-scale defect instances in low-resolution (LR) images. Additionally, the SR-assisted branch can recursively generate or reconstruct upscaled images (∼×2/×4/×8...) from their corresponding downscaled counterparts, enabling defect-detection inference across various image resolutions without requiring explicit training. Moreover, we investigate an improved data augmentation strategy aimed at generating diverse and realistic training datasets to enhance model performance. We evaluated our proposed approach on two original FAB datasets obtained from two distinct processes and captured with two different imaging tools. Finally, we demonstrate zero-shot inference for our model on a new dataset, originating from a process condition distinct from the training dataset and possessing different CD/pitch characteristics. Our experimental validation demonstrates that our proposed ADCD framework helps increase the throughput of imaging tools (∼×8) for defect inspection by reducing the required image pixel resolution.
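To make the SR-assisted design described above concrete, the following is a minimal PyTorch-style sketch, not the authors' actual implementation: all module names (`SharedStem`, `SRHead`, `SRAssistedDetector`, `upscale_recursively`) and the sub-pixel (PixelShuffle) upscaling head are assumptions chosen for exposition. It illustrates the two ideas in the abstract: a shared feature stem feeding both an SR branch and a detection branch, and recursive application of a ×2 SR head to reach ×2/×4/×8 upscaling at inference time.

```python
# Hypothetical sketch of an SR-assisted defect detector (not the paper's code).
# A shared stem feeds (a) a super-resolution head and (b) a detection network,
# so gradients from the SR target push HR detail into the shared features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedStem(nn.Module):
    """Shallow shared feature extractor used by both branches."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

class SRHead(nn.Module):
    """x2 sub-pixel upscaling head; chained repeatedly for x4/x8."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.to_pixels = nn.Sequential(
            nn.Conv2d(ch, 3 * 4, 3, padding=1),  # 3 channels * 2^2 sub-pixels
            nn.PixelShuffle(2),                  # rearrange channels to 2x H, W
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.to_pixels(feats)

class SRAssistedDetector(nn.Module):
    """Wraps any detector (stand-in for a YOLO-NAS-style backbone + head)."""
    def __init__(self, detector: nn.Module, ch: int = 64):
        super().__init__()
        self.stem = SharedStem(ch)
        self.sr_head = SRHead(ch)
        self.detector = detector

    def forward(self, lr_img: torch.Tensor):
        feats = self.stem(lr_img)
        sr_img = self.sr_head(feats)        # x2 reconstruction of the LR input
        detections = self.detector(sr_img)  # detect on the upscaled image
        return detections, sr_img

def upscale_recursively(model: SRAssistedDetector,
                        img: torch.Tensor, factor: int) -> torch.Tensor:
    """Apply the x2 SR head repeatedly to reach ~x2/x4/x8... upscaling."""
    assert factor >= 1 and factor & (factor - 1) == 0, "power-of-two factor"
    while factor > 1:
        img = model.sr_head(model.stem(img))
        factor //= 2
    return img
```

A plausible joint objective for such a design (again an assumption, not the paper's stated loss) is `loss = det_loss + lam * F.l1_loss(sr_img, hr_img)`, so the SR reconstruction term regularizes the shared features toward high-resolution detail while the detection term is optimized as usual.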