We demonstrate a lip animation (lip sync) algorithm for real-time applications that generates facial movements synchronized with audio from natural speech or a text-to-speech engine. Our method requires an animator to construct animations, using a canonical set of visemes, for all pairwise combinations of a reduced phoneme set (phone bigrams). These animations are then stitched together, with velocity and lip-pose constraints added, to construct the final animation. This method can be applied to any character that uses the same small set of visemes. Our method can operate efficiently in multiple languages by reusing phone bigram animations that are shared among languages, and specific word sounds can be identified and changed on a per-character basis. Our method uses no machine learning, which offers two advantages over techniques that do: 1) data can be generated for non-human characters whose faces cannot easily be retargeted from a human speaker's face, and 2) the specific facial poses or shapes used for animation can be specified during the setup and rigging stage, before the lip animation stage, making it suitable for game pipelines or circumstances where the speech target poses are predetermined, such as after acquisition from an online 3D marketplace.

Naturally synchronizing lip and mouth movements with speech is an important part of a convincing 3D character performance. In this paper, we present a simple, portable, and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, is effective for real-time simulations such as games, and can be personalized for each character. Our method associates animation curves, designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (phone bigrams), and then stitches these animations together to create a set of curves for the facial poses, along with constraints that ensure that key poses are properly played.

Diphone- and triphone-based methods have been explored in various previous works, often requiring machine learning. However, our experiments have shown that animating phoneme pairs (such as phone bigrams or diphones), as opposed to phoneme triples or longer sequences of phonemes, is sufficient for many types of animated characters. Our experiments have also shown that skilled animators can produce sufficient data for good-quality results. Thus our algorithm does not need any specific rules about coarticulation, such as dominance functions or language rules; such rules are implicit within the artist-produced data. In order to produce a tractable set of data, our method reduces
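To make the stitching step concrete, the sketch below illustrates one way the bookkeeping could be organized: phonemes are mapped to a reduced viseme set, each consecutive pair is looked up in a table of artist-authored phone bigram curves, and the normalized curve keys are retimed onto the bigram's time span and merged into per-pose curves. The data layout and names (PHONEME_TO_VISEME, BIGRAM_CURVES, stitch) are illustrative assumptions, not the implementation described here; in particular, a full system would blend overlapping bigram contributions and apply the velocity and lip-pose constraints discussed above rather than simply merging keys.

```python
# Minimal sketch, under assumed data formats, of stitching artist-authored
# phone bigram curves into per-pose animation curves.
from bisect import insort

# Reduced phoneme-to-viseme mapping (illustrative subset).
PHONEME_TO_VISEME = {"AA": "Ah", "AE": "Ah", "F": "FV", "V": "FV",
                     "M": "BMP", "B": "BMP", "P": "BMP"}

# Artist-authored curves per phone bigram: pose name -> list of (t, weight),
# with t normalized to [0, 1] over the bigram's duration.
BIGRAM_CURVES = {
    ("Ah", "BMP"): {"open":        [(0.0, 0.8), (0.5, 0.4), (1.0, 0.0)],
                    "lips_closed": [(0.0, 0.0), (1.0, 1.0)]},
    ("BMP", "Ah"): {"lips_closed": [(0.0, 1.0), (1.0, 0.0)],
                    "open":        [(0.0, 0.0), (1.0, 0.8)]},
}

def stitch(phonemes):
    """phonemes: list of (phoneme, start_sec, end_sec) from speech or TTS timing.
    Returns pose name -> sorted list of (time_sec, weight) keys."""
    visemes = [(PHONEME_TO_VISEME.get(p, p), s, e) for p, s, e in phonemes]
    curves = {}
    for (v0, s0, _), (v1, _, e1) in zip(visemes, visemes[1:]):
        bigram = BIGRAM_CURVES.get((v0, v1))
        if bigram is None:
            continue  # a full system would fall back to single-viseme curves
        for pose, keys in bigram.items():
            out = curves.setdefault(pose, [])
            for t, w in keys:
                # Retime the normalized key onto this bigram's time span.
                insort(out, (s0 + t * (e1 - s0), w))
    return curves

# Example: the syllable "ma" followed by a bilabial closure.
print(stitch([("M", 0.00, 0.08), ("AA", 0.08, 0.30), ("P", 0.30, 0.40)]))
```

Because consecutive bigrams share a phoneme, their curves overlap in time; how that overlap is blended, and how the resulting curves are constrained, is where the method's velocity and lip-pose constraints come into play.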