“…They identify the issues and approaches involved in cleaning complex and noisy web log data to address the efficiency and scalability of analytics. Other approaches [29,30,32,33] are devised to address the remaining individual stages of data preprocessing: parsing of web log entries [19,25,29], feature identification [13,19,25], feature selection [13,19,25,29,34], user identification [32,35,36], sessionization [26,32,33], and path completion [32,33,36]. In addition, only a few studies [23,31,32,33] identified the necessity of discarding the transactions performed by web robots or crawlers, and the investigators strongly recommend efficient learning algorithms to differentiate humans from web crawlers.…”
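To make two of the preprocessing stages above concrete, the following is a minimal illustrative sketch of timeout-based sessionization combined with a simple robot-filtering heuristic. The 30-minute inactivity timeout, the user-agent keyword list, and the log-entry field names are assumptions for illustration, not parameters taken from the cited studies:

```python
from datetime import timedelta

# Assumed values for illustration only; the cited studies do not fix these.
SESSION_TIMEOUT = timedelta(minutes=30)
ROBOT_KEYWORDS = ("bot", "crawler", "spider", "slurp")

def is_robot(user_agent):
    """Heuristic robot check: flag entries whose user-agent string
    contains a known crawler keyword."""
    ua = user_agent.lower()
    return any(k in ua for k in ROBOT_KEYWORDS)

def sessionize(entries):
    """Group log entries per user (identified here by IP) into sessions,
    starting a new session whenever the gap between consecutive requests
    exceeds SESSION_TIMEOUT. Each entry is assumed to be a dict with
    keys 'ip', 'time' (a datetime), 'url', and 'agent'."""
    sessions = {}
    for e in sorted(entries, key=lambda e: (e["ip"], e["time"])):
        if is_robot(e["agent"]):         # discard web-robot transactions
            continue
        user_sessions = sessions.setdefault(e["ip"], [])
        if (user_sessions and
                e["time"] - user_sessions[-1][-1]["time"] <= SESSION_TIMEOUT):
            user_sessions[-1].append(e)  # continue the current session
        else:
            user_sessions.append([e])    # open a new session
    return sessions
```

In practice, user identification is harder than grouping by IP (proxies and NAT conflate users, which is why the cited studies treat it as a separate stage), and keyword matching on user agents catches only self-declaring robots; learning-based classifiers are recommended precisely because many crawlers disguise their agent strings.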