Text pre-processing plays a crucial role in the Sentiment Analysis process. Chatbots built on large language models, such as OpenAI's ChatGPT-3.5 and Google Bard, offer an alternative way to perform this stage. This study evaluates the text pre-processing capabilities of both chatbots on a dataset crawled from X, comparing ChatGPT-3.5 and Google Bard using the Decision Tree and Naïve Bayes algorithms. Validation employs K-Fold Cross Validation with K = 10, combined with three sampling methods: Linear, Shuffled, and Stratified Sampling. The findings reveal that ChatGPT-3.5 performs best with the Decision Tree algorithm, K = 10, and Stratified Sampling, achieving an Accuracy of 90.68%, a Precision of 90.63%, and a Recall of 100%. Google Bard's optimal performance is achieved with the Decision Tree algorithm, K = 10, and Shuffled Sampling, yielding an Accuracy of 74.00%, a Precision of 72.73%, and a Recall of 98.77%. The study concludes that both chatbots are viable alternatives for text pre-processing in Sentiment Analysis, with ChatGPT-3.5 outperforming Google Bard. These results were validated against human annotations, which achieved an Accuracy of 85.20%, a Precision of 85.71%, and a Recall of 99.03% under the same configuration (Decision Tree, K = 10, Stratified Sampling). This suggests that ChatGPT-3.5's text pre-processing performance is on par with human annotation.
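
To clarify the evaluation setup described above, the following is a minimal sketch of stratified K-fold splitting (K = 10) and of the three reported metrics (Accuracy, Precision, Recall). It uses only the Python standard library and toy labels; the fold-assignment strategy, label names, and dataset are illustrative assumptions, not the study's actual pipeline or data.

```python
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=10, seed=0):
    """Split sample indices into k folds while preserving the class ratio
    in each fold (Stratified Sampling, as used in the study)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_label.values():
        rng.shuffle(idxs)                 # shuffle within each class
        for j, i in enumerate(idxs):
            folds[j % k].append(i)        # deal indices round-robin per class
    return folds

def precision_recall_accuracy(y_true, y_pred, positive="pos"):
    """Compute the three metrics reported in the abstract."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return acc, prec, rec

# Toy imbalanced sentiment labels: 80 positive, 20 negative.
labels = ["pos"] * 80 + ["neg"] * 20
folds = stratified_kfold_indices(labels, k=10)
# Each of the 10 folds keeps the full dataset's 4:1 class ratio (8 pos, 2 neg).
for fold in folds:
    print(len(fold), sum(labels[i] == "pos" for i in fold))
```

In a full pipeline, each fold would serve once as the test set for a classifier (e.g. a Decision Tree) trained on the remaining nine folds, and the metrics would be averaged across folds; Shuffled Sampling would instead permute all indices before slicing them into folds, without enforcing per-fold class ratios.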