MapReduce is a widely adopted platform for efficiently processing massive volumes of data. To use any computational platform effectively, one needs to know which components affect its performance; isolating and optimizing a critical factor can substantially accelerate the platform as a whole. Several researchers have proposed techniques that improve overall MapReduce performance through suitable selection and scheduling of the processing units, i.e., the mappers. However, little attention has been paid to optimizing the intermediate (shuffle) phase, despite its effect on overall MapReduce performance. This paper models the data migration time within the intermediate phase of MapReduce by accounting for the contributing factors with the help of a machine learning technique. In addition, the contributing factors are tested both analytically and experimentally for their influence on shuffle-phase time. The outcome is a shuffle-phase time model that estimates data migration time in the intermediate phase of MapReduce. The data set was collected from historical MapReduce job execution records; the model was built over training data sets using a regression technique and validated over separate test data sets.
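The modeling approach described above can be sketched in miniature: fit a regression over historical job records and use it to predict shuffle time for new configurations. This is only an illustrative sketch; the feature set (map output size, reducer count, network bandwidth) and the synthetic data below are assumptions, not the paper's actual predictors or measurements.

```python
import numpy as np

# Hypothetical features per historical job (assumed, for illustration):
# columns = [map output size (MB), number of reducers, bandwidth (MB/s)]
X = np.array([
    [512,  4, 100],
    [1024, 4, 100],
    [2048, 8, 100],
    [4096, 8,  50],
    [1024, 2,  50],
], dtype=float)

# Observed shuffle-phase times in seconds (synthetic stand-in data)
y = np.array([12.0, 22.0, 38.0, 155.0, 45.0])

# Ordinary least-squares regression: t_shuffle ~ w . x + b
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_shuffle_time(features):
    """Estimate shuffle-phase time (s) for a new job configuration."""
    x = np.append(np.asarray(features, dtype=float), 1.0)
    return float(x @ coef)
```

In practice one would validate such a model on held-out test jobs, as the abstract indicates, and a richer regression (e.g. with interaction terms) may be needed if shuffle time scales nonlinearly with data volume.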