Cochlear implant (CI) recipients require alternative signal processing for speech enhancement, since the quantities needed for intelligibility and quality improvement differ significantly when the auditory nerve is stimulated directly, as in CIs. Here, a robust feature vector is proposed for environment classification in CI devices. The feature vector is computed directly from the output of the advanced combination encoder (ACE), a sound coding strategy commonly used in CIs. The performance of the proposed feature vector is evaluated on environment classification tasks under anechoic quiet, noisy, reverberant, and noisy reverberant conditions. Speech material taken from the IEEE corpus is used to simulate different environmental acoustic conditions with: 1) three measured room impulse responses (RIRs) with distinct reverberation times (T60) for generating reverberant environments, and 2) car, train, white Gaussian, multi-talker babble, and speech-shaped noise (SSN) samples for creating noisy conditions at four different signal-to-noise ratio (SNR) levels. We investigate three different classifiers for environment detection, namely Gaussian mixture models (GMMs), support vector machines (SVMs), and neural networks (NNs). Experimental results demonstrate the effectiveness of the proposed features for environment classification.
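The simulation procedure described above (convolving dry speech with a measured RIR and adding noise at a prescribed SNR) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact pipeline; the function names `mix_at_snr` and `add_reverb` are assumptions introduced here, and SNR is computed from average signal power.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Assumes equal-length 1-D float arrays."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain that brings the noise to the target SNR relative to the speech.
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

def add_reverb(speech, rir):
    """Convolve dry speech with a room impulse response (RIR),
    truncated to the original signal length."""
    return np.convolve(speech, rir)[: len(speech)]

# A noisy reverberant condition is obtained by chaining the two steps:
# noisy_reverberant = mix_at_snr(add_reverb(dry_speech, rir), noise, 5.0)
```

In practice the dry IEEE-corpus sentences, the measured RIRs, and the noise recordings (car, train, babble, SSN, white Gaussian) would be loaded from audio files at a common sampling rate before being combined this way.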