Authors: Edhem Yılmaz, İbrahim; Berhuni, Mustafa; Özer Özcan, Zeynep; Doğan, Levent
Date accessioned: 2024-11-07
Date available: 2024-11-07
Date of issue: 2024
ISSN: 1386-5056
DOI: https://doi.org/10.1016/j.ijmedinf.2024.105592
URI: https://hdl.handle.net/11480/11337

Abstract:
Background: Strabismus is a common eye condition affecting both children and adults. Effective patient education is crucial for informed decision-making, but traditional methods often lack accessibility and engagement. AI-powered chatbots have emerged as a promising alternative.
Aim: This study evaluates and compares the performance of three chatbots (ChatGPT, Bard, and Copilot) and a reliable website (AAPOS) in answering real patient questions about strabismus.
Method: The three chatbots (ChatGPT, Bard, and Copilot) were compared to the AAPOS website using real patient questions. Metrics included accuracy (SOLO taxonomy), understandability and actionability (PEMAT), and readability (Flesch-Kincaid). A sentiment analysis was also performed to capture the emotional tone and impact of the responses.
Results: The AAPOS website achieved the highest mean SOLO score (4.14 ± 0.47), followed by Bard, Copilot, and ChatGPT. Bard scored highest on both the PEMAT-U (74.8 ± 13.3) and PEMAT-A (66.2 ± 13.6) measures. Flesch-Kincaid Reading Ease scores showed the AAPOS website to be the easiest to read (mean: 55.8 ± 14.11), closely followed by Copilot; ChatGPT and Bard scored lower on readability. The sentiment analysis revealed notable differences in emotional tone across sources.
Conclusion: Chatbots, particularly Bard and Copilot, show promise in patient education for strabismus, with strengths in understandability and actionability. However, the AAPOS website outperformed them in accuracy and readability. © 2024 Elsevier B.V.

Language: English
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Chatbots in healthcare; ChatGPT; Health literacy; Patient engagement; Strabismus patient education
Title: Chatbots talk Strabismus: Can AI become the new patient Educator?
Type: Article
Volume: 191
PubMed ID: 39159506
Scopus ID: 2-s2.0-85201452962
Journal quartile: Q1
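
For context on the readability metric named in the Method (this note is not part of the original record): the Flesch Reading Ease score reported in the Results is computed as

    FRE = 206.835 - 1.015 * (total words / total sentences) - 84.6 * (total syllables / total words)

Higher scores indicate easier text; the AAPOS mean of 55.8 falls in the conventional "fairly difficult" band, roughly a 10th- to 12th-grade reading level.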
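
The record does not say which software performed the readability scoring or the sentiment analysis. As a minimal illustrative sketch, assuming Python with the textstat package and NLTK's VADER sentiment model (both are assumptions, not the study's documented tooling), the two automated metrics could be computed per response like this:

    import textstat
    from nltk.sentiment import SentimentIntensityAnalyzer
    # One-time setup for VADER (assumed tooling):
    # import nltk; nltk.download('vader_lexicon')

    def score_response(text: str) -> dict:
        """Return readability and sentiment scores for one chatbot answer."""
        sia = SentimentIntensityAnalyzer()
        return {
            # Flesch Reading Ease: 0-100 scale, higher = easier to read
            "flesch_reading_ease": textstat.flesch_reading_ease(text),
            # Compound sentiment in [-1, 1]: negative to positive tone
            "sentiment_compound": sia.polarity_scores(text)["compound"],
        }

    answer = "Strabismus is a condition in which the eyes do not align properly."
    print(score_response(answer))

SOLO and PEMAT, by contrast, are rubric-based human ratings and cannot be reproduced by a script.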