While working with designers and DFT engineers at companies evaluating an "industrial-strength" analog fault simulator, it became apparent that intuition and theory often differ regarding random sampling of the defects to simulate. This paper explores these differences. In one case, it was hoped that simulating more defects would increase the estimated coverage. In a second, it was assumed that pre-simulation analysis of a circuit would more efficiently reveal the defects that need to be simulated. In a third, engineering intuition said that at least 1% of all potential defects must be simulated to estimate coverage. In a fourth, it was thought that fault coverage for portions of a circuit could be gleaned from results for faults randomly injected into the whole circuit. In a fifth, the types of faults injected were assumed to greatly affect coverage. In a final case, it intuitively seemed that improving a test to detect the most likely of the undetected defects would have the greatest impact on coverage.
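The third intuition above (that some fixed fraction, such as 1%, of all potential defects must be simulated) can be probed with elementary sampling statistics: for simple random sampling, the spread of a coverage estimate is governed by the absolute number of sampled defects, not by the fraction of the defect population sampled. The sketch below (function names and parameter values are illustrative, not taken from any particular fault simulator) draws equal-size random samples from two defect populations that differ in size by 100x and compares the empirical spread of the resulting coverage estimates.

```python
import random
import statistics

def coverage_estimate_spread(pop_size, true_coverage, sample_size,
                             trials, seed=0):
    """Empirical standard deviation of the coverage estimate obtained by
    randomly sampling `sample_size` defects from a population of
    `pop_size` defects, of which a fraction `true_coverage` is detected."""
    rng = random.Random(seed)
    # Population of defects: 1 = detected by the test, 0 = undetected.
    detected = int(pop_size * true_coverage)
    population = [1] * detected + [0] * (pop_size - detected)
    estimates = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)  # sampling w/o replacement
        estimates.append(sum(sample) / sample_size)
    return statistics.stdev(estimates)

# Same sample size (400 defects), populations differing by 100x.
# The spread of the estimate is nearly identical in both cases and close to
# the binomial value sqrt(p*(1-p)/n) ~= 0.015, so sampling "at least 1% of
# all defects" is not what controls the accuracy of the estimate.
spread_small = coverage_estimate_spread(10_000, 0.9, 400, trials=500)
spread_large = coverage_estimate_spread(1_000_000, 0.9, 400, trials=500)
```

Under these assumptions, sampling 400 defects out of 1,000,000 (0.04% of the population) estimates coverage about as precisely as sampling 400 out of 10,000 (4% of the population).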