Dependency Length Minimization (DLM) is considered a linguistic universal governing word order variation cross-linguistically. However, evidence for DLM from large-scale corpus work is typically based on written (news) corpora, and its effect on sentence production during naturalistic dialogue is largely unknown. Furthermore, Subject-Object-Verb (SOV) languages are known to show a weaker preference for DLM. In this work, we test the validity of DLM using a dialogue corpus of Hindi, an SOV language. We also undertake a quantitative analysis of various syntactic phenomena that lead to DLM and compare the effect of DLM across the spoken and written modalities. Results provide novel evidence for a robust effect of DLM in the spoken corpus. At the same time, compared to the written data, DLM was found to be weaker in dialogue. We discuss the implications of these findings for sentence production and for methodological issues regarding the use of corpus data to investigate DLM.
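As background for the metric the abstract refers to, dependency length is standardly computed as the sum of linear distances between each syntactic head and its dependent; DLM is the tendency for word orders that reduce this sum. A minimal sketch (the function name and toy sentence are illustrative, not from the paper):

```python
def total_dependency_length(arcs):
    """Sum of linear distances between each head and its dependent.

    `arcs` is a list of (head_index, dependent_index) pairs over
    1-indexed word positions in a sentence; the root arc is excluded.
    """
    return sum(abs(head - dep) for head, dep in arcs)

# Toy example: "the dog barked loudly"
# arcs: dog -> the, barked -> dog, barked -> loudly
arcs = [(2, 1), (3, 2), (3, 4)]
print(total_dependency_length(arcs))  # 1 + 1 + 1 = 3
```

Under DLM, among grammatical orderings of the same words, speakers are predicted to prefer the ordering with the smaller total.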