Particle filters are widely used across many fields because of their ability to handle non-linear and/or non-Gaussian systems. However, a large number of particles is needed to guarantee convergence of the state estimate, especially for large-scale complex systems. Parallel/distributed particle filters have therefore been adopted to improve performance, and several resampling paradigms have been proposed for them: centralized resampling, decentralized resampling, and hybrid resampling. To ease their adoption, this study analyzes the time consumption and speedup factors of parallel/distributed particle filters under various resampling algorithms, state sizes, system complexities, numbers of processing units, and model dimensions. The experimental results indicate that decentralized resampling achieves the highest speedup factors because particles are transferred only locally, centralized resampling always has the lowest speedup factors because particles must be transferred globally, and hybrid resampling attains speedup factors in between. Moreover, we define the complexity-state ratio as the ratio of the system complexity to the system state size, and we study how it affects the speedup factor. The experiments show that a higher complexity-state ratio leads to higher speedup factors. This is one of the earliest attempts to analyze and compare the performance of parallel/distributed particle filters with different resampling algorithms. The analysis suggests potential directions for further performance improvement and can guide the selection of an appropriate resampling algorithm for parallel/distributed particle filters.
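To make the resampling step being parallelized concrete, the following is a minimal sketch of systematic resampling, one common single-node resampling algorithm; the abstract does not name a specific algorithm, so the function name and details here are illustrative rather than taken from the paper.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: draw N evenly spaced positions with one
    random offset and map each onto the cumulative weight distribution.
    Returns the indices of the particles to keep (with replacement)."""
    n = len(weights)
    # One uniform draw offsets the evenly spaced grid of positions.
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    # Each position selects the particle whose cumulative weight covers it.
    return np.searchsorted(cumulative, positions)
```

In a parallel/distributed filter, the paradigms compared in the paper differ in where this step runs: centralized resampling gathers all weights on one unit, decentralized resampling runs it locally per unit, and hybrid resampling combines the two.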