Client contribution evaluation is crucial in federated learning (FL) for effectively selecting influential clients. Unlike data valuation in centralized settings, client contribution evaluation in FL suffers from limited data accessibility, which makes it difficult to stably quantify the impact of data heterogeneity. To address this instability, we introduce an empirical method, Federated Client Contribution Evaluation through Accuracy Approximation (FedCCEA), which uses data size as a tool for client contribution evaluation. After several FL simulations, FedCCEA approximates the test accuracy from the sampled data sizes and extracts each client's contribution from the trained accuracy approximator. In addition, FedCCEA allows data size diversification, which reduces the large variance in accuracy that arises under game-theoretic strategies. Several experiments show that FedCCEA strengthens robustness to diverse heterogeneous data environments and supports the practicality of partial participation.
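The pipeline described above (simulate runs with sampled per-client data sizes, fit an accuracy approximator, read off contributions) can be illustrated with a minimal sketch. This is not the paper's actual procedure: the linear approximator, the synthetic accuracy function, and all numbers below are illustrative assumptions standing in for real FL simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_sims = 4, 200

# Hypothetical ground truth: each client's data helps accuracy differently.
true_weights = np.array([0.30, 0.25, 0.10, 0.05])

# 1) Simulate FL runs with randomly sampled per-client data-size ratios in [0, 1].
ratios = rng.uniform(0.0, 1.0, size=(n_sims, n_clients))

# Stand-in for the measured test accuracy of each simulated run (plus noise);
# a real setup would train a federated model and evaluate it instead.
accuracy = 0.5 + ratios @ true_weights + rng.normal(0, 0.01, n_sims)

# 2) Fit a simple accuracy approximator: accuracy ≈ b0 + sum_i w_i * ratio_i.
X = np.column_stack([np.ones(n_sims), ratios])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

# 3) Treat the approximator's learned weights as client contributions.
contribution = coef[1:]
ranking = np.argsort(contribution)[::-1]  # most influential client first
```

Any regression model could serve as the approximator; a linear one is used here only so the contribution of each client is directly readable from its coefficient.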