<p>Existing model poisoning attacks on federated learning (FL) assume that the adversary has access to the full data distribution. In reality, an adversary usually has limited prior knowledge of the clients' data distributions, and a poorly chosen target class renders an attack less effective. We consider a semi-targeted setting in which the source class is predetermined but the target class is not; the goal is to cause the global classifier to misclassify data of the source class. Approaches such as label flipping have been adopted to inject poisoned parameters into FL. Nevertheless, it has been shown that their performance is usually class-sensitive, varying with the target class applied: an attack can become less effective when shifted to a different target class. To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA), which enhances a poisoning attack by finding the optimal target class in the feature space. ADA deduces pair-wise distances between classes in the latent feature space using the Fast LAyer gradient MEthod (FLAME). We performed extensive evaluations, varying the attacking frequency, on four benchmark image classification tasks, and further studied ADA's efficacy under different FL defense strategies. ADA increased the poisoning attack performance by a factor of 2.8 in the most challenging case, with an attacking frequency of 0.01, and bypassed existing defenses: even differential privacy, the most effective defense tested, could not reduce the attack performance below 50%.</p>
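<p>To illustrate the core idea of target-class selection by feature-space distance, the following is a minimal sketch, not the paper's FLAME-based method: it computes class centroids from (here, synthetic) latent feature vectors and picks, for a given source class, the class whose centroid is nearest. All names and the toy data are illustrative assumptions.</p>

```python
import numpy as np

# Illustrative sketch (NOT the paper's FLAME method): pick a target class
# for a given source class by nearest class centroid in a latent space.
rng = np.random.default_rng(0)

def nearest_target_class(features, labels, source_class):
    """Return the class whose centroid is closest to the source-class
    centroid in feature space, plus all pair-wise distances to it."""
    classes = np.unique(labels)
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    dists = {c: float(np.linalg.norm(centroids[c] - src))
             for c in classes if c != source_class}
    # The intuition: a target class close to the source class in latent
    # space should be easier to confuse the classifier toward.
    target = min(dists, key=dists.get)
    return target, dists

# Synthetic 2-D "latent features" for 4 well-separated classes.
features = np.vstack([rng.normal(loc=[i, i % 2], scale=0.1, size=(20, 2))
                      for i in range(4)])
labels = np.repeat(np.arange(4), 20)

target, dists = nearest_target_class(features, labels, source_class=0)
print(target, dists)
```

<p>In a real attack the feature vectors would come from a model's penultimate layer rather than synthetic Gaussians, but the selection rule is the same.</p>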