Evaluation of software can take many forms, ranging from assessments of algorithm correctness and performance to evaluations that focus on value to the end user. This article discusses the development of an evaluation methodology for visual analytics environments. The Visual Analytics Science and Technology (VAST) Challenge was created as a community evaluation resource, available to researchers and developers of visual analytics environments, that allows them to test their designs and visualizations and to compare their results with the solution and with the entries prepared by others. Sharing results lets the community learn from one another and, ideally, advance more quickly. In this article, we discuss the original challenge and its evolution during the 7 years since its inception. While the VAST Challenge is the focus of this article, there are lessons for anyone setting up a community evaluation program, including the need to understand the purpose of the evaluation, to choose the right metrics, and to implement those metrics appropriately, including the selection of datasets and evaluators. For ongoing evaluations, it is also necessary to track their evolution and to ensure that the evaluation methodologies keep pace with the science being evaluated. The discussions of these topics in the context of the VAST Challenge should be pertinent to anyone interested in community evaluations.