In this paper we present an experimental analysis of the robustness of Electronic Design Automation (EDA) tools with respect to seemingly unimportant aspects (bias) introduced by the designer "from outside". The algorithms employed in EDA tools should be completely immune to such aspects, since they carry no useful information: source files differing only in these aspects are semantically equivalent. However, we show that most of the studied tools are seriously sensitive to them, far more than previously reported. The results indicate that experiments conducted to evaluate the performance of EDA tools must take this behavior into consideration. The very notion of a benchmark is also called into question.