Artificial intelligence (AI) has rapidly developed to reshape how patients and physicians access and communicate health information within dermatology and beyond. It is crucial that stakeholders vet this readily accessible resource to ensure patients are empowered with accurate health information.1 Patients frequently use online resources for health education, particularly regarding surgical management of cutaneous malignant neoplasms, including preoperative surgical appropriateness and postoperative complications.2 Patient preferences for information accessibility and delivery can affect postoperative satisfaction and information retention.3 We therefore challenged ChatGPT 4.0 to provide medically accurate responses to common patient questions regarding Mohs micrographic surgery (MMS), to determine whether this technology has potential to help create accurate, readable education content for patients undergoing MMS.
Methods

ChatGPT 4.0 was queried with 26 common patient questions regarding MMS, each followed by a prompt to simplify the response to a sixth-grade reading level. These 26 questions were generated in collaboration with board-certified Mohs micrographic surgeons to identify the questions patients most commonly ask in clinical practice. Responses were independently evaluated by 3 board-certified Mohs surgeons for accuracy (Likert scale, 1-5), relevance to clinical practice (yes or no), sufficiency for clinical practice (yes; no; no, I'd be more concise; and no, missing important details), and potential harm to patients (yes or no). Readability was assessed using a standardized readability tool, the Flesch Reading Ease Score (FRES; 0 = very difficult to read and 100 = very easy to read). Concordance between reviewers was assessed using intraclass correlation coefficients (ICC), which may be interpreted as follows: <0.5, poor concordance; 0.5 to 0.75, moderate concordance; 0.75 to 0.9, good concordance; and >0.9, excellent concordance.
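For readers interested in how these two metrics are operationalized, the sketch below computes the standard FRES formula (206.835 − 1.015 × words/sentence − 84.6 × syllables/word) and maps an ICC value onto the interpretation bands listed above. This is an illustrative sketch, not the validated tools used in the study; in particular, the syllable counter is a rough vowel-group heuristic, whereas standardized readability software uses dictionary-based syllabification.

```python
import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, subtracting a trailing
    # silent "e". Not the validated counter used by readability tools.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def icc_interpretation(icc):
    # Bands from the text: <0.5 poor; 0.5-0.75 moderate;
    # 0.75-0.9 good; >0.9 excellent.
    if icc < 0.5:
        return "poor"
    if icc <= 0.75:
        return "moderate"
    if icc <= 0.9:
        return "good"
    return "excellent"
```

Short, common-word sentences score near the top of the 0-100 FRES scale, which is why a sixth-grade simplification prompt is expected to raise the score.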