Speech production has long been viewed as a linear filtering process, as described by Fant in the late 1950s [10]. The vocal tract, which acts as the filter, is the primary focus of most speech research. This thesis develops a method for estimating the source of speech, the glottal flow derivative. Models are proposed for the coarse and fine structure of the glottal flow derivative, accounting for nonlinear source-filter interaction, and techniques are developed for estimating the parameters of these models. The importance of the source is demonstrated through speaker identification experiments.

The glottal flow derivative waveform is estimated from the speech signal by inverse filtering the speech with a vocal tract estimate obtained during the glottal closed phase. The closed phase is determined through a sliding covariance analysis with a very short time window and a one-sample shift. This allows calculation of the formant motion within each pitch period that Ananthapadmanabha and Fant predicted to result from nonlinear source-filter interaction during the glottal open phase [1]. By identifying the timing of formant modulation from the formant tracks, the timing of the closed phase can be determined. The glottal flow derivative is modeled using an LF model to capture the coarse structure, while the fine structure is modeled through energy measures and a parabolic fit to the frequency modulation of the first formant.

The model parameters are used in the Reynolds Gaussian Mixture Model speaker identification system with excellent results for non-degraded speech. Each category of source features is shown to contain speaker-dependent information, while the combination of source and filter parameters increases the overall accuracy of the system. For a large dataset, the coarse structure parameters achieve 60% accuracy, the fine structure parameters give 40% accuracy, and their combination yields 70% correct identification.
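As an illustration of the coarse-structure model mentioned above, the following sketch generates one period of the LF (Liljencrants-Fant) glottal flow derivative pulse. The parameter values (pitch period, timing instants, return-phase constant) are illustrative choices, not values fitted to real speech, and the simple fixed-point solution for the return-phase constant is one of several ways to satisfy the model's area-balance condition.

```python
import numpy as np

def lf_pulse(fs=16000, T0=0.008, tp=0.004, te=0.005, ta=0.0003, Ee=1.0, alpha=60.0):
    """One period of the LF glottal flow derivative (illustrative parameters).

    tp: instant of maximum glottal flow, te: instant of main excitation
    (the negative peak), ta: return-phase time constant, Ee: magnitude of
    the negative peak, alpha: open-phase growth factor.
    """
    tc = T0                 # assume glottal closure at the end of the period
    wg = np.pi / tp         # open-phase "glottal frequency"
    # Solve eps * ta = 1 - exp(-eps * (tc - te)) by fixed-point iteration,
    # so the return phase decays to (near) zero at closure.
    eps = 1.0 / ta
    for _ in range(100):
        eps = (1.0 - np.exp(-eps * (tc - te))) / ta
    # Scale the open phase so that E(te) = -Ee (continuity at the excitation instant).
    E0 = -Ee / (np.exp(alpha * te) * np.sin(wg * te))
    t = np.arange(int(round(T0 * fs))) / fs
    e = np.where(
        t <= te,
        E0 * np.exp(alpha * t) * np.sin(wg * t),                              # open phase
        -(Ee / (eps * ta)) * (np.exp(-eps * (t - te)) - np.exp(-eps * (tc - te))),  # return phase
    )
    return t, e
```

The pulse rises during the open phase, reaches its negative extremum of magnitude Ee at the excitation instant te, and then decays exponentially back toward zero during the return phase.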
When combined with vocal tract features, the accuracy increases to 93%, slightly above the accuracy achieved with vocal tract information alone. On smaller datasets of telephone-degraded speech, accuracy increases by up to 20% when source features are added to traditional mel-cepstral measures.

Perhaps one of his best contributions is that he and I tend to think of things from different angles; I hope to carry his viewpoint along with my own after I leave MIT. I would also like to thank Doug Reynolds for helping me understand and use his speaker identification system. Thanks are also owed to all the members of the Speech Systems Technology Group, who have helped me in so many ways. I would also like to thank all the people who thought it would be a shame for me not to get an advanced degree, and all my friends in Seattle who signed their letters "quit school." These conflicting views enabled me to make up my own mind with a minimum of outside pressure. And, of course, I wish to thank the sponsors of my research and my time here at MIT. I would like to thank the EECS department for awarding me a fe...