Text summarization is the task of distilling a lengthy text document into a concise summary that conveys its core message, helping readers grasp the essence of a text without having to wade through its entire length. In this research, we fine-tune the transformer-based PEGASUS model for abstractive text summarization and evaluate its performance on a diverse dataset. The diversity of the dataset is expected to challenge the model and test its ability to generate summaries for a wide range of text types and styles. Our experimental results indicate that the model's performance varies with the topic and category of the text, reaching a ROUGE-1 F1 score as high as 88.03 on some topics and as low as 81.22 on others. This matters because texts such as political, economic, literary, legal, and medical documents follow distinctive writing conventions and styles, and a model that performs well on a diverse dataset is more likely to adapt to other text types.
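As a rough illustration of the metric reported above, ROUGE-1 F1 is the harmonic mean of unigram precision and recall between a candidate summary and a reference. The following is a minimal sketch, not the official ROUGE implementation (which adds stemming and other options); the function name is our own:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each unigram counted at most as often as it
    # appears in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, an identical candidate and reference score 1.0, while summaries with no shared unigrams score 0.0.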