The speech signal is an important acoustic signal, and its quality depends on the acoustic environment surrounding both the speaker and the listener. Additive noises such as white noise and babble noise severely degrade the performance of speech-based applications. Conventional noise-reduction methods introduce musical noise into the enhanced speech signal. Discriminative networks map noisy speech to the clean target speech signal, but in doing so they can add unpleasant distortions. Hence, two autoencoder-based discriminative approaches, a Discriminative UNET model (DUNET) and a Discriminative Denoising Autoencoder model (DDAE), are designed and tested on noisy speech samples from the NOIZEUS dataset. Their performance is compared with four baseline methods: UNET, Variational Autoencoder, Convolutional Autoencoder, and the PixelCNN architecture. Five evaluation indexes, PESQ, STOI, SDR, improvement in SNR, and segmental SNR, are used for the performance comparison. The proposed architectures provide better intelligibility and less signal distortion than the given baseline methods.
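As a minimal illustration of two of the objective measures listed above, the sketch below computes the improvement in SNR and the segmental SNR from a clean reference, its noisy version, and an enhanced output. The frame length, clipping limits, and signal names are assumptions for illustration only, not values taken from the paper.

```python
import numpy as np

def snr_db(clean, test):
    """Global SNR of `test` against the clean reference, in dB."""
    noise = test - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-12))

def snr_improvement(clean, noisy, enhanced):
    """Improvement in SNR: SNR of the enhanced speech minus SNR of the noisy speech."""
    return snr_db(clean, enhanced) - snr_db(clean, noisy)

def segmental_snr(clean, enhanced, frame_len=512, lo=-10.0, hi=35.0):
    """Frame-wise SNR averaged over frames, clipped to [lo, hi] dB.
    Frame length and clipping limits follow common practice and are
    assumptions here, not parameters specified in the paper."""
    n_frames = len(clean) // frame_len
    seg = []
    for i in range(n_frames):
        c = clean[i * frame_len:(i + 1) * frame_len]
        e = enhanced[i * frame_len:(i + 1) * frame_len]
        seg.append(np.clip(snr_db(c, e), lo, hi))
    return float(np.mean(seg))

# Illustrative usage with synthetic signals (hypothetical data, 16 kHz tone).
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
enhanced = clean + 0.1 * rng.standard_normal(clean.shape)
print(snr_improvement(clean, noisy, enhanced), segmental_snr(clean, enhanced))
```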