This paper presents the design and results of an experiment to evaluate the impact of data uniformity on the automation of acceptance tests. An experiment to specify acceptance tests, represented in the User Scenarios through User Interaction Diagrams (US-UIDs) format, with non-technical users was set up involving two projects. In the first project, called P1, data uniformity is treated by an expert, while in the second project, called P2, no data uniformity treatment is applied. In both projects, acceptance test automation was developed to evaluate and compare the following artifacts: data uniformity, fixture name sharing, automation time, and glue code volume. The results show a statistically significant difference in uniformity between projects P1 and P2, with P1 achieving better uniformity. However, although the treatment of data uniformity shows no statistically significant difference with respect to the fixture name sharing strategy used, the time spent on fixture naming was more than twice as high in P2. In addition, the glue code volume in P1 was less than half that of P2.