The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be better spent doing more experiments. This article argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. We have confused research and development; competitive testing is suited only for the latter.

Key Words: computational testing, benchmark problems

Most experimental studies of heuristic algorithms resemble track meets more than scientific endeavors. Typically an investigator has a bright idea for a new algorithm and wants to show that it works better, in some sense, than known algorithms. This requires computational tests, perhaps on a standard set of benchmark problems. If the new algorithm wins, the work is submitted for publication. Otherwise it is written off as a failure. In short, the whole affair is organized around an algorithmic race whose outcome determines the fame and fate of the contestants.

This modus operandi spawns a host of evils that have become depressingly familiar to the algorithmic research community. They are so many and pervasive that even a brief summary requires an entire section of this article. Two, however, are particularly insidious. First, the emphasis on competition is fundamentally anti-intellectual and does not build the sort of insight that in the long run is conducive to more effective algorithms. It tells us which algorithms are better but not why. The understanding we do accrue generally derives from initial tinkering that takes place in the design stages of the algorithm. Because only the results of the formal competition are exposed to the light of publication, the observations that are richest in information are too often conducted in an informal, uncontrolled manner.

Second, competition diverts time and resources from productive investigation. Countless hours are spent crafting the fastest possible code and finding the best possible parameter settings in order to obtain results that are suitable for publication. This is particularly unfortunate because it squanders a natural advantage of empirical algorithmic work. Most empirical work in other sciences tends to be slow and expensive, requiring well-appointed laboratories, massive equipment, or carefully selected subjects. By contrast, much empirical work on algorithms can be carried out on a workstation by a single investigator. This advantage should be exploited by conducting more experiments, rather than by implementing each one in the fastest possible code.