Aspect-Based Sentiment Analysis (ABSA) is pivotal for diverse applications but faces significant hurdles in under-resourced languages such as Urdu, primarily due to the absence of a comprehensive, annotated benchmark corpus. This study addresses this gap by introducing a novel weakly supervised technique to construct a benchmark dataset tailored for Urdu ABSA, ensuring public availability, broad domain coverage, and annotation comprehensiveness. Our dataset provides detailed annotations across all ABSA dimensions, i.e., aspect, opinion, sentiment polarity, and category. Through a comparative analysis involving Large Language Models (LLMs), human annotations, and models pre-trained on expertly curated datasets, we demonstrate the dataset’s complexity and the nuanced nature of ABSA in Urdu, as reflected in the challenging results obtained on ABSA subtasks with a basic LSTM approach. This research not only advances Urdu ABSA techniques but also illuminates the broader challenges of opinion mining in under-resourced languages, setting a precedent for future work in this critical area.