In recent years, online review systems have attracted attention as a means of assessing seller-customer relations in e-commerce. To address quality concerns about online reviews, especially incentivized ones, this study evaluates credibility and consistency based on reviews’ volume, length, and content, aiming to distinguish the impact of incentives on customer review behavior, review quality, and purchase decision-making. Software product reviews collected from software review websites, including Capterra, Software Advice, and GetApp, undergo Exploratory Data Analysis (EDA) to reveal critical features such as cost, support, usability, and product features. Sentiment analysis yields three major findings: company size indirectly affects the volume of incentivized reviews, user experience affects it directly, and the effect shifts with changing conditions across years. A/B testing results show minimal to no impact of such reviews on purchasing decisions, highlighting discrepancies in credibility and consistency across volume, length, and content. Employing techniques such as Sentence-BERT (SBERT) and TF-IDF, the study explores semantic differences between review types to improve recommendation systems for a more customized shopping experience. This approach seeks to establish a framework that discerns between review types and their effects on customer behavior. The findings contribute to developing more sophisticated and consistent e-commerce solutions, emphasizing the importance of authentic and reliable online reviews in influencing consumer choices.
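The semantic-comparison step above can be illustrated with a minimal sketch. This is not the study’s pipeline; it uses scikit-learn’s TF-IDF vectorizer with hypothetical review texts to show how lexical similarity between review pairs might be scored (the SBERT side would replace the TF-IDF vectors with dense sentence embeddings):

```python
# Illustrative sketch with hypothetical review texts (not the study's data):
# score pairwise lexical similarity between reviews using TF-IDF vectors
# and cosine similarity, one representation the study contrasts with SBERT.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "Great usability and responsive support, worth the cost.",    # hypothetical incentivized review
    "Support was responsive and the features justify the price.", # hypothetical organic review
    "The onboarding tutorial never loaded on my machine.",        # unrelated complaint
]

# Build sparse TF-IDF vectors, dropping common English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reviews)

# Pairwise cosine similarity matrix over the three reviews.
sims = cosine_similarity(tfidf)

# Reviews 0 and 1 share vocabulary (support, responsiveness) and should
# score higher against each other than either does against review 2.
print(sims[0, 1], sims[0, 2])
```

Because TF-IDF only captures shared vocabulary, two reviews phrased differently about the same feature can score near zero here; that gap is precisely what dense embeddings such as SBERT are meant to close.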