In-Context Learning (ICL), a strategy that conditions a Large Language Model on a list of input-output examples at inference time, has achieved state-of-the-art (SOTA) performance on many Natural Language Processing tasks. In this work, we apply ICL to the synthetic generation of opinionated triples (sentence, aspect, and opinion) to enhance the training of models for two Aspect-Based Sentiment Analysis (ABSA) tasks: Target-oriented Opinion Words Extraction (TOWE) and Aspect-Opinion Pair Extraction (AOPE). Our approach first adapts a pre-trained Language Model (LM) to the downstream task using Task-adaptive Pretraining (TAPT). Next, it applies ICL prompting to the adapted LM to generate the opinionated triples. Finally, a post-processing step applies heuristics to clean and format the triples. We evaluate the quality of the generated samples by augmenting existing benchmark datasets with them and comparing the performance of four SOTA models on these tasks. The results show that models trained on the generated synthetic data outperform those trained on only a small percentage of the human-labelled data. A manual inspection of the generated triples by three human raters further validated their quality.
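To make the two core steps of the pipeline concrete, the sketch below shows one way the few-shot ICL prompt construction and the heuristic post-processing of triples could look. The prompt template, the example triples, and the cleaning rules are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of ICL prompting and post-processing for
# (sentence, aspect, opinion) triple generation. The demonstrations,
# template wording, and filters are assumptions for illustration.

FEW_SHOT_EXAMPLES = [
    ("The battery life is amazing.", "battery life", "amazing"),
    ("Service was slow but the pizza tasted great.", "pizza", "great"),
]

def build_icl_prompt(examples, domain="product reviews"):
    """Format input-output demonstrations into a few-shot ICL prompt."""
    lines = [f"Generate (sentence, aspect, opinion) triples about {domain}."]
    for sent, aspect, opinion in examples:
        lines.append(f"Sentence: {sent}\nAspect: {aspect}\nOpinion: {opinion}")
    lines.append("Sentence:")  # the adapted LM continues generation from here
    return "\n\n".join(lines)

def clean_triple(raw):
    """Post-processing heuristics: strip noise, drop malformed triples."""
    triple = tuple(part.strip().strip('"') for part in raw)
    if len(triple) != 3 or not all(triple):
        return None
    sent, aspect, opinion = triple
    # Keep only triples whose aspect and opinion actually occur in the sentence.
    if aspect.lower() not in sent.lower() or opinion.lower() not in sent.lower():
        return None
    return triple

prompt = build_icl_prompt(FEW_SHOT_EXAMPLES)
print(prompt.count("Aspect:"))  # one "Aspect:" line per demonstration
```

In this sketch the prompt would be sent to the TAPT-adapted LM, and each raw generated triple would pass through `clean_triple` before being added to the augmented training set.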