Composers and performers communicate emotional intentions by controlling basic musical features such as pitch, loudness, and articulation. The extent to which emotion can be controlled by software through the systematic manipulation of these features has not been fully examined. To address this, we present CMERS, a Computational Music Emotion Rule System for the real-time control of musical emotion that modifies features at both the score level and the performance level. In Experiment 1, 20 participants continuously rated the perceived emotion of musical works, each modified to express five emotion targets: happy, sad, angry, tender, and normal. The intended emotion was identified correctly at a rate of 78%, and valence and arousal were shifted significantly regardless of the works' original emotions. Existing systems developed for expressive performance, such as Director Musices (DM), focus on modifying performance features only; to study emotion more broadly, CMERS also modifies features of the score. In Experiment 2, 18 participants rated musical works modified by CMERS and DM to express the same five emotions. The intended emotion was identified correctly at a rate of 71% for CMERS and 49% for DM. CMERS achieved significant shifts in both valence and arousal; DM shifted arousal only. These results suggest that score features are important for controlling valence. The effects of musical training on emotion identification accuracy are also discussed.
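
To make concrete what such a rule system involves, the following is a minimal sketch in Python, assuming a simple lookup table that maps each target emotion to score-level adjustments (mode, pitch register) and performance-level adjustments (tempo, loudness, articulation). The specific features, directions, and magnitudes are hypothetical illustrations only, not the rules published with CMERS.

# Illustrative sketch: a hypothetical rule table mapping target emotions to
# score-level and performance-level feature adjustments. All values below are
# assumptions chosen for illustration, not CMERS's actual rules.

from dataclasses import dataclass

@dataclass
class FeatureAdjustment:
    mode: str            # score level: "major", "minor", or "unchanged"
    pitch_shift: int     # score level: semitone shift of the melody register (assumed)
    tempo_factor: float  # performance level: multiplier on the base tempo (assumed)
    loudness_db: float   # performance level: loudness offset in decibels (assumed)
    articulation: float  # performance level: duration ratio; <1 staccato, >1 legato (assumed)

# Hypothetical mappings for the five emotion targets used in the experiments.
RULES = {
    "happy":  FeatureAdjustment("major", +12, 1.15, +3.0, 0.8),
    "sad":    FeatureAdjustment("minor", -12, 0.80, -4.0, 1.1),
    "angry":  FeatureAdjustment("minor",   0, 1.20, +5.0, 0.7),
    "tender": FeatureAdjustment("major",   0, 0.85, -3.0, 1.1),
    "normal": FeatureAdjustment("unchanged", 0, 1.00, 0.0, 1.0),
}

def adjustments_for(emotion: str) -> FeatureAdjustment:
    """Look up the feature adjustments for a target emotion label."""
    return RULES[emotion]

if __name__ == "__main__":
    # Example: query the (assumed) rule set for a "happy" rendering.
    print(adjustments_for("happy"))

In this sketch, the score-level fields (mode, pitch register) would be applied before synthesis, while the performance-level fields (tempo, loudness, articulation) could be varied during playback, reflecting the abstract's distinction between the two levels.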