The basic tenet of this paper is that today's national airspace systems, at least in advanced industrial countries, qualify as so-called Highly Reliable Systems (HRS). In an HRS, even the type of accident that causes the most fatalities is a rare event; conversely, accidents are avoided routinely. The best way to improve an already highly reliable system would therefore be to learn from the cases in which accidents were avoided. This is not possible, however, because most avoided accidents go unobserved and unrecorded: one cannot learn from what is unknown. Instead, safety managers resort to retrospective analyses of the deadliest accidents. In an unreliable system, it makes sense to correct what is wrong. In an HRS, however, mitigation efforts that arise from rare, unpredictable, and often unique events carry a great risk of upsetting the balance of the system. Such interventions must be scrupulously vetted, in a series of steps that become increasingly costly as the series progresses; this paper makes some suggestions for these steps. If the anticipated benefit of an intervention does not justify the cost of such a thorough review for unintended consequences, it may be better to accept the system's existing high reliability as good enough and leave the system unchanged.