In cross-silo federated learning, clients (e.g., organizations) collectively train a global model using their local data. However, due to business competition and privacy concerns, the clients tend to free-ride (i.e., not contribute enough data points) during training. To address this issue, we propose a framework in which the profit/benefit obtained from the global model is properly allocated to clients to incentivize data contribution. More specifically, we study the game-theoretic interactions among the clients under three widely used profit allocation mechanisms, i.e., linearly proportional (LP), leave-one-out (LOO), and Shapley value (SV). We consider two types of equilibrium structures: symmetric and asymmetric equilibria. We show that the three mechanisms admit an identical symmetric equilibrium structure. However, at the asymmetric equilibrium, LP outperforms SV and LOO in incentivizing the clients' average data contribution. We further discuss the impact of various parameters on the clients' free-riding behaviors.

This paper aims to answer the following key questions:

Key Question 1: Given a mechanism, how will the clients decide the data sizes they use for training when they are business competitors and/or have privacy concerns?

Key Question 2: Which mechanism performs best in incentivizing clients' data contribution (i.e., addressing the free-rider issue) in cross-silo FL?

To answer Question 1, we formulate the interactions between