Automatically generating behavior for Non-Player Characters (NPCs) in serious games can be problematic, as the specification of their behavior relies heavily on the availability of domain expertise. This expertise can be difficult and costly to extract, and the specified behavior usually does not generalize to new scenarios or users. Alternatively, behavior can be generated using a pure machine learning approach. However, without proper constraints, such NPCs may quickly develop static, non-adaptive behavior by exploiting the environment. This paper presents an approach called Evolutionary Dynamic Scripting (EDS) that effectively copes with the disadvantages of both extremes. EDS combines the generative characteristics of an evolutionary approach with an adaptive reinforcement learning method called Dynamic Scripting. Dynamic Scripting essentially learns how to prioritize rules from a fixed rule-base specified by domain experts. EDS was tested in an air combat simulation in which agents co-evolve their tactics. Given the same initial rule-bases, EDS generated improved behavioral rules compared to the original Dynamic Scripting approach. Both generalization to new situations and specialization of the agents into roles were observed.
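To make the Dynamic Scripting component concrete, the following is a minimal, illustrative sketch of its core loop as described by Spronck et al.: rules are drawn from a weighted rule-base to form a script, and after an encounter the weights of the selected rules are adjusted by a fitness signal while the total weight is redistributed over the unused rules. All class and parameter names here are hypothetical, and the update rule is a simplified version of the published algorithm, not the EDS implementation from this paper.

```python
import random


class DynamicScripter:
    """Simplified sketch of Dynamic Scripting weight adaptation.

    Illustrative only: parameter names and default values are
    assumptions, not taken from the paper.
    """

    def __init__(self, rules, w_init=100.0, w_min=10.0, w_max=400.0):
        # Every rule in the expert-specified rule-base starts with
        # the same weight; weights are clipped to [w_min, w_max].
        self.weights = {r: w_init for r in rules}
        self.w_min, self.w_max = w_min, w_max

    def select_script(self, size):
        """Draw `size` distinct rules, weight-proportionally."""
        pool = dict(self.weights)
        script = []
        for _ in range(min(size, len(pool))):
            rules_left = list(pool)
            r = random.choices(
                rules_left, weights=[pool[x] for x in rules_left]
            )[0]
            script.append(r)
            del pool[r]  # selection without replacement
        return script

    def update(self, script, fitness, baseline=0.5, adj=30.0):
        """Reward or punish the rules used in the last encounter.

        Rules in the script gain weight when fitness exceeds the
        baseline and lose weight otherwise; the applied change is
        redistributed over the unused rules so the total weight
        stays (approximately) constant.
        """
        delta = adj * (fitness - baseline)
        applied = 0.0
        for r in script:
            old = self.weights[r]
            new = min(self.w_max, max(self.w_min, old + delta))
            applied += new - old
            self.weights[r] = new
        unused = [r for r in self.weights if r not in script]
        if unused:
            share = -applied / len(unused)
            for r in unused:
                self.weights[r] = min(
                    self.w_max, max(self.w_min, self.weights[r] + share)
                )
```

In EDS, an evolutionary layer would additionally vary the rules themselves, rather than only their weights; the sketch above covers only the Dynamic Scripting half of the combination.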