ChatGPT has emerged as a promising large language model, but its output depends heavily on the prompt it is given, and designing a good prompt is not an easy task for many end-users. This study therefore examines how the amount of information supplied in a prompt affects the quality of ChatGPT's output. Two types of prompts, initial and improved, were used to generate the introduction sections of 327 highly cited articles on traffic safety. The generated introductions were then matched against the corresponding human-written introductions from the same articles. Similarity tests and text network analysis were used to assess the degree of similarity and the content of the ChatGPT-generated and human-written introductions. The findings indicate that the improved prompts, which added a generic persona and information about citations and references, changed ChatGPT's output only marginally. While identical texts would yield a similarity score of 1.0, the introductions produced by the initial and improved prompts have average similarity scores of 0.56 and 0.54, respectively. Further, the content analysis revealed that themes such as statistics, trends, safety measures, and safety technologies are more likely to have high similarity scores, irrespective of the amount of information provided in the prompt. In contrast, themes such as human behavior, policy and regulations, public perception, and emerging technologies require more detailed prompts to produce text that approaches human-written material. Prompt engineers can use these findings to evaluate their outputs and improve their prompting skills.
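The abstract does not state which similarity metric was used; as an illustration only, a minimal bag-of-words cosine similarity, where identical texts score 1.0 as described above, could be sketched as follows (the function name and tokenization are assumptions, not the authors' method):

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity: 1.0 for identical texts, 0.0 for disjoint ones."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

In practice, a study like this would more likely use embedding- or TF-IDF-based similarity, which weights rare terms more heavily than raw counts.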