This study addresses singer identification in monaural popular music in two stages. In the first stage, computational auditory scene analysis (CASA) is exploited to segregate singing-voice units. For each frame, an estimated binary time-frequency (T-F) mask indicates the T-F units dominated by the singing voice, which are considered reliable; the remaining units are unreliable, or missing. The resulting spectrum is therefore incomplete. In the second stage, two missing-feature methods, reconstruction and marginalization, are used to identify the singer from the incomplete spectral data. In the reconstruction module, the complete spectrum is first reconstructed and then converted to Gammatone frequency cepstral coefficients (GFCCs), which are used to identify the singer. In the marginalization module, the probabilities of the singer's voice are computed on the basis of the reliable components only. We find that the reconstruction module outperforms the marginalization module, while both modules perform well, especially at signal-to-accompaniment ratios (SARs) of 0 dB and -3 dB, compared with other systems.

Index Terms-Singer identification, missing feature, computational auditory scene analysis (CASA).
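To make the two ideas above concrete, here is a minimal sketch, with hypothetical toy data, of (a) an ideal binary T-F mask that labels units as reliable when the singing voice dominates the accompaniment, and (b) marginalization, where a diagonal-Gaussian singer model is scored using only the reliable units of each frame. All array shapes, the model parameters `mu` and `var`, and the use of an ideal (rather than estimated) mask are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical toy spectrograms (frames x frequency channels); in the actual
# system these would come from a Gammatone filterbank / CASA front end.
rng = np.random.default_rng(0)
voice = rng.random((4, 8))
accomp = rng.random((4, 8))
mixture = voice + accomp

# Ideal binary mask: a T-F unit is "reliable" when the singing voice
# dominates the accompaniment in that unit.
mask = voice > accomp

# Hypothetical diagonal-Gaussian singer model (per-channel mean/variance).
mu = np.full(8, 0.9)
var = np.full(8, 0.25)

def marginal_loglik(frame, reliable, mu, var):
    """Log-likelihood of one frame using only its reliable T-F units;
    unreliable units are simply dropped from the sum (marginalized out)."""
    r = reliable
    return -0.5 * np.sum(np.log(2 * np.pi * var[r])
                         + (frame[r] - mu[r]) ** 2 / var[r])

scores = [marginal_loglik(mixture[t], mask[t], mu, var) for t in range(4)]
```

Summing `scores` over all frames of a clip and comparing across singer models would then yield the identification decision; reconstruction instead fills in the unreliable units before computing GFCCs.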