This paper discusses a speaker adaptation method for the HMM-based phonetic vocoder, a very low bit rate speech coding system based on HMM speech recognition and HMM speech synthesis. In the HMM-based phonetic vocoder, the speech quality is governed entirely by the speech synthesis HMM in the decoder, so adapting to an unspecified input speaker requires adapting the decoder HMM to the input speech. This paper therefore proposes the following adaptation scheme. The HMM is matched to the input parameter sequence by speech recognition, and the mean vectors of the resulting HMM output distribution sequence are then translated uniformly in the parameter space for each segment. The quantity expressing this translation is called the translation vector in this paper. The encoder determines the translation vector, which is then quantized and transmitted. A subjective evaluation experiment shows that when the translation vector is quantized by the proposed method at approximately 100 bits/s and a speaker-independent HMM is adapted using the translation vector, the resulting speech quality is almost the same as that of a speaker-dependent model trained on the input speaker's speech data.
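The segment-wise translation idea can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes the translation vector for a segment is taken as the difference between the average input feature frame and the average model mean, and all function names and the toy data are hypothetical.

```python
# Sketch (assumed form) of segment-wise translation-vector adaptation:
# shift every HMM output-distribution mean vector of a segment by one
# shared offset so the model means line up with the input frames.
import numpy as np

def translation_vector(frames, model_means):
    """One translation vector per segment: difference between the mean
    input frame and the mean of the model means (hypothetical form)."""
    return frames.mean(axis=0) - model_means.mean(axis=0)

def adapt_means(model_means, t):
    """Translate all mean vectors of the segment uniformly by t."""
    return model_means + t

# Toy example: 4 input frames and 2 state means in a 3-dim parameter space.
frames = np.array([[1.0, 2.0,  0.0],
                   [1.2, 2.2,  0.1],
                   [0.8, 1.8, -0.1],
                   [1.0, 2.0,  0.0]])
means = np.array([[0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0]])

t = translation_vector(frames, means)       # -> array([1., 1., 0.])
adapted = adapt_means(means, t)
```

In the vocoder itself, the encoder would quantize `t` (at roughly 100 bits/s according to the paper) and transmit it alongside the recognized phonetic sequence, while the decoder applies the same shift to its speaker-independent HMM means.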