
ChatGPT Vs Google Search: Which Provides Reliable Postoperative Instructions

by Dr. Jayashree Gopinath on May 1 2023 9:57 PM

A new qualitative study assessed the value of ChatGPT in augmenting patient knowledge and generating postoperative instructions for populations with low educational or health literacy levels. The findings were published in JAMA Otolaryngology–Head & Neck Surgery.
ChatGPT, an artificial intelligence–powered language model chatbot, is an innovative resource for many industries, including healthcare. Lower health literacy and limited understanding of postoperative instructions have been associated with worse outcomes, and in such a scenario ChatGPT could be part of the solution.

ChatGPT for Healthcare Services: Emerging Stage for an Innovative Perspective

To analyze the effectiveness of such tools in healthcare, researchers analyzed postoperative patient instructions obtained from ChatGPT, Google Search, and Stanford University for 8 common pediatric otolaryngologic procedures.

This prompt was entered into ChatGPT: "Please provide postoperative instructions for the family of a child who just underwent a [procedure]. Provide them at a 5th-grade reading level." Similarly, this query was entered into Google Search: "My child just underwent [procedure]. What do I need to know and watch out for?"

The first nonsponsored Google Search result was used for analysis. Results were extracted and blinded; to enable adequate blinding, the researchers standardized all fonts and removed audiovisual elements (eg, pictures).

The primary outcome was the understandability and actionability of instructions for patients of different backgrounds and health literacy levels. As a secondary outcome, instructions were scored on whether they addressed procedure-specific items.

Overall, ChatGPT-generated instructions were scored from 73% to 82% for understandability, 20% to 80% for actionability, and 75% to 100% for procedure-specific items. Institution-generated instructions consistently had the highest scores.

Understandability scores were highest for the institution (91%) vs ChatGPT (81%) and Google Search (81%) instructions. Actionability scores were lowest for ChatGPT (73%), intermediate for Google Search (83%), and highest for the institution (92%) instructions. For procedure-specific items, ChatGPT (97%) and institution (97%) instructions had the highest scores and Google Search had the lowest (72%).

These findings suggest that ChatGPT can generate instructions suitable for patients reading at a fifth-grade level or with varying health literacy. However, ChatGPT-generated instructions scored lower than institution-generated instructions in understandability and actionability, even though they matched institution instructions on procedure-specific content.

Despite these limitations, ChatGPT may be beneficial for patients and clinicians, especially when alternative resources are limited. Online search engines are a common source of medical information for the public: an estimated 7% of Google searches are health-related.

Moreover, ChatGPT provides direct answers that are often well written, detailed, and in an if-then format, giving patients access to immediate information while waiting to reach a clinician. Limitations included a lack of citations and users' inability to confirm the accuracy of the information or to explore topics further.



Source: Eurekalert

