Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing complex structured data, including social networks, biological networks, and recommendation systems. However, their susceptibility to adversarial attacks poses a significant challenge, especially in critical tasks such as node classification and link prediction. Adversarial attacks on GNNs introduce carefully crafted perturbations into input graphs, leading to biased model predictions and compromising the integrity of the network. We propose a novel adversarial attack method that combines K-Means clustering with Class Activation Mapping (CAM) to conduct subtle yet effective attacks against GNNs. The clustering algorithm identifies critical nodes within the graph whose perturbation is likely to have a substantial impact on model performance, while CAM highlights the regions of the graph that most strongly influence GNN predictions, enabling more targeted and efficient attacks. We assess the efficacy of state-of-the-art GNN defenses against the proposed attack, underscoring the pressing need for robust defense mechanisms. Through our observations, we emphasize the necessity of stronger security measures to safeguard GNN-based applications, particularly in sensitive environments: strengthening the robustness of GNNs against adversarial manipulation is crucial for maintaining the security, reliability, and trustworthiness of systems that rely on these models in critical applications.
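To illustrate the critical-node selection step, the following is a minimal sketch, not the paper's implementation: it assumes node embeddings are available as a NumPy array and uses a deterministic farthest-first K-Means initialization. The function name `critical_nodes` and the choice of "node nearest each cluster centroid" as the representative are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain K-Means with a deterministic farthest-first initialization."""
    X = np.asarray(X, dtype=float)
    # Farthest-first seeding: start from node 0, repeatedly add the node
    # farthest from all centroids chosen so far.
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None] - np.array(centroids)[None], axis=2).min(axis=1)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    # Lloyd iterations: assign each node to its nearest centroid, then
    # recompute each centroid as the mean of its assigned nodes.
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

def critical_nodes(X, k):
    """Pick one candidate node per cluster: the node closest to its
    centroid, taken as a representative whose perturbation is likely
    to affect many structurally similar nodes (illustrative heuristic)."""
    labels, centroids = kmeans(X, k)
    picks = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx):
            d = np.linalg.norm(X[idx] - centroids[j], axis=1)
            picks.append(int(idx[d.argmin()]))
    return picks
```

In practice, `X` could hold GNN node embeddings or raw node features; the attacker would then restrict edge or feature perturbations to the selected candidates rather than searching over all nodes.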
Our findings underscore the ongoing effort required to fortify GNN-based applications, and we urge the research community and practitioners to collaborate in developing and implementing more robust security measures for these powerful neural network models.