Recent developments in avatar-based social virtual reality (VR) services have increased the need for facial expression recognition (FER) technology. FER systems are generally implemented using optical cameras; however, their performance can be limited when users wear head-mounted displays (HMDs), which cover large portions of the face. Facial electromyograms (fEMGs) recorded around the eyes can potentially be used to implement FER systems for VR applications. However, this approach has lacked practicality owing to the need for large-scale training datasets and has been hampered by relatively low performance. In this study, we proposed an fEMG-based FER system that employs a Riemannian manifold-based approach to reduce the amount of training data required and to enhance FER performance. Our experiments with 42 participants showed an average classification accuracy as high as 85.01% for recognizing 11 facial expressions using only a single training dataset for each expression. We further developed an online FER system that animates a virtual avatar's expression to reflect the user's facial expression in real time, demonstrating that the proposed system can potentially be used for practical interactive VR applications, such as social VR networks, smart education, and virtual training.

INDEX TERMS Facial expression recognition, facial electromyography, Riemannian manifolds, human-machine interface, virtual reality.
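To make the Riemannian manifold-based classification idea concrete, the following is a minimal, illustrative sketch of how multichannel fEMG epochs can be mapped to covariance matrices and classified on the manifold of symmetric positive-definite matrices. It assumes the pyriemann and NumPy packages; the channel count, epoch length, covariance estimator, and the minimum-distance-to-mean (MDM) classifier are illustrative assumptions, not the exact pipeline described in the paper.

```python
# Sketch of Riemannian-manifold classification of fEMG epochs (assumed setup).
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

# Hypothetical data: epochs of shape (n_trials, n_channels, n_samples).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((11, 8, 256))   # one training epoch per expression (assumed)
y_train = np.arange(11)                       # 11 facial-expression labels
X_test = rng.standard_normal((5, 8, 256))     # unlabeled epochs to classify

# Map each epoch to a covariance matrix (a point on the SPD manifold),
# then classify by minimum Riemannian distance to each class mean.
cov = Covariances(estimator="oas")
clf = MDM(metric="riemann")
clf.fit(cov.fit_transform(X_train), y_train)
predictions = clf.predict(cov.transform(X_test))
print(predictions)
```

Because the classifier compares geodesic distances to class means rather than fitting many parameters, a single training epoch per expression can already define a usable reference point for each class, which is consistent with the small-training-set motivation stated above.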