Sign language recognition is the task of detecting and recognizing manual signals (MSs) and non-manual signals (NMSs) in a signed utterance. In this paper, a novel method for recognizing MSs and facial expressions as an NMS is proposed. This is achieved through a framework consisting of three components: (1) Candidate segments of MSs are discriminated using a hierarchical conditional random field (CRF) and BoostMap embedding. This component distinguishes signs, fingerspellings, and non-sign patterns, and is robust to variations in the size, scale, and rotation of the signer's hand. (2) Facial expressions as an NMS are recognized with a support vector machine (SVM) and an active appearance model (AAM). The AAM is used to extract facial feature points; from these points, several measurements are computed so that each facial component can be classified into one of the defined facial expressions with the SVM. (3) Finally, the recognition results of MSs and NMSs are fused in order to recognize signed sentences. Experiments demonstrate that the proposed method can successfully combine MS and NMS features for recognizing signed sentences from utterance data.
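To make the NMS stage of the pipeline concrete, the sketch below illustrates the general idea of component (2): geometric measurements are computed from AAM-derived facial feature points and fed to an SVM classifier. This is a minimal illustration, not the authors' implementation; the measurement function, point layout, and synthetic data are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's implementation): classify facial expressions
# from AAM-derived measurements with an SVM. compute_measurements and the
# six-point face layout are hypothetical; real AAMs yield many more points.
import numpy as np
from sklearn.svm import SVC

def compute_measurements(aam_points):
    """Example measurements from 2-D facial feature points:
    eyebrow-to-eye distance, mouth width, and mouth opening."""
    brow, eye, mouth_l, mouth_r, lip_top, lip_bot = aam_points
    return np.array([
        np.linalg.norm(brow - eye),         # eyebrow raise
        np.linalg.norm(mouth_l - mouth_r),  # mouth width
        np.linalg.norm(lip_top - lip_bot),  # mouth opening
    ])

rng = np.random.default_rng(0)
# Synthetic training data: 100 frames, 6 facial points each, 3 expression classes.
frames = rng.normal(size=(100, 6, 2))
labels = rng.integers(0, 3, size=100)

X = np.array([compute_measurements(f) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)

# Predict the expression class for a new frame of facial feature points.
new_frame = rng.normal(size=(6, 2))
print(clf.predict(compute_measurements(new_frame).reshape(1, -1)))
```

In the full framework described above, the per-frame expression labels produced by such a classifier would then be fused with the MS recognition results to interpret the signed sentence.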