Parents Trust AI Over Experts for Child Sleep Advice

Summary: A University of Kansas study found that parents often trust AI-generated health information for their children more than advice from healthcare professionals when the authorship is unknown. The study highlights concerns about the increasing reliance on AI for critical health decisions, as generative AI can produce inaccurate information. Researchers urge parents to consult experts and verify AI-sourced information.

Key Takeaways:

  1. Parents Trust AI Over Healthcare Experts: The study found that when the authorship was unknown, parents rated AI-generated health information as more trustworthy, accurate, and reliable than content from healthcare professionals.
  2. Risks of Inaccurate AI Advice: Researchers raised concerns about AI "hallucinations," where AI-generated information can be incorrect due to a lack of proper context, posing significant risks for child health decisions.
  3. Expert Oversight is Crucial: The study emphasizes that while AI can be a useful tool, parents should always verify health information with healthcare professionals to ensure accuracy and safe decision-making.

New research from the University of Kansas Life Span Institute highlights a key vulnerability to misinformation generated by artificial intelligence (AI) and a potential model to combat it.

The study, appearing in the Journal of Pediatric Psychology, reveals that parents seeking healthcare information for their children trust AI more than healthcare professionals when the author is unknown, and that they also rate AI-generated text as credible, moral, and trustworthy.

"When we began this research, it was right after ChatGPT first launched; we had concerns about how parents would use this new, easy method to gather health information for their children," says lead author Calissa Leslie-Miller, a University of Kansas doctoral student in clinical child psychology, in a release. "Parents often turn to the internet for advice, so we wanted to understand what using ChatGPT would look like and what we should be worried about."

Rating AI and Expert Text

Leslie-Miller and her colleagues conducted a cross-sectional study with 116 parents, aged 18 to 65, who were given health-related text, such as information on infant sleep training and nutrition. Participants reviewed content generated both by ChatGPT and by healthcare professionals, though they were not informed of the authorship.

"Participants rated the texts based on perceived morality, trustworthiness, expertise, accuracy, and how likely they would be to rely on the information," Leslie-Miller says in a release.

Surprising Outcomes

According to the researcher, in many cases parents couldn't distinguish between the content generated by ChatGPT and that written by experts. When there were significant differences in ratings, ChatGPT was rated as more trustworthy, accurate, and reliable than the expert-generated content.

"This outcome was surprising to us, especially since the study took place early in ChatGPT's availability," says Leslie-Miller in a release. "We're starting to see that AI is being integrated in ways that may not be immediately obvious, and people may not even recognize when they're reading AI-generated text versus expert content."

Raising Concerns for Child Health Care

Leslie-Miller says the findings raise concerns because generative AI now powers responses that appear to come from apps or the internet but are actually conversations with an AI.

"During the study, some early iterations of the AI output contained incorrect information," she says in a release. "This is concerning because, as we know, AI tools like ChatGPT are prone to 'hallucinations,' errors that occur when the system lacks sufficient context."

Although ChatGPT performs well in many cases, Leslie-Miller says the AI model isn't an expert and is capable of generating wrong information.

"In child health, where the consequences can be significant, it's crucial that we address this issue," she says in a release. "We're concerned that people may increasingly rely on AI for health advice without proper expert oversight."

The authors report, "Results indicate that prompt-engineered ChatGPT is capable of impacting behavioral intentions for medication, sleep, and diet decision-making."

Leslie-Miller says the life-and-death stakes of pediatric health information amplify the problem, but the risk that generative AI can be wrong, and that users may lack the expertise to identify inaccuracies, extends to all topics.

Consumers 'Need to Be Cautious'

She suggests that consumers of AI-generated information be cautious and rely only on information that is consistent with expertise from a source other than generative AI.

"There are still differences in the trustworthiness of sources," she says in a release. "Look for AI that's integrated into a system with a layer of expertise that's double-checked, just as we've always been taught to be cautious about using Wikipedia because it's not always verified. The same applies now with AI: look for platforms that are more likely to be trustworthy, as they are not all equal."

Indeed, Leslie-Miller says AI could benefit parents looking for health information, so long as they understand the need to consult healthcare professionals as well.

"I believe AI has a lot of potential to be harnessed. Specifically, it is possible to generate information at a much higher volume than before," she says in a release. "But it's important to recognize that AI is not an expert, and the information it provides doesn't come from an expert source."
