Would you tell a stranger all your personal health info? Would you trust them not to tell anyone else?
These are the kinds of questions we need to ask ourselves when using generative AI tools like ChatGPT to support our healthcare decisions.
ChatGPT recently introduced changes to how it stores memories and chat history, raising important questions about data, privacy and trust across all public artificial intelligence models.
In Part 2 of this conversation, Luan continues her chat with health literacy expert Dr Julie Ayre to unpack the next layer of AI and self-advocacy. They dig into how generative AI memory works, how your data is handled, and how you can protect your privacy while still getting value from these tools.
Luan shares her own experience using ChatGPT and her medical history as a test case, giving listeners a first-hand look at how AI “talks back” when fed real-world information.
You’ll also learn about data risks and your digital footprint, what AI hallucinations are, and whether the health data you share in a personal AI chat could become linked with your work data.
Because advocating for your health means being curious and asking questions - and that includes asking questions of the tech we’re using, too.
We recommend listening to episode 1 - ‘Can AI & ChatGPT Help You Prepare for an Appointment?’ - which is part 1 of this interview, before diving into this episode.
For links to the resources in this show - visit the episode blog page.
Let’s make your healthcare me-centred!
Do not use AI tools like ChatGPT in a medical emergency—contact your local emergency services immediately.
Subscribe & Share:
Connect with Luan:
Follow Luan on Instagram
Get Luan's free SSASy guide to Self-Advocacy
Sign up for Luan's Newsletter
Check out Luan's website
CREDITS:
Host & Producer: Luan Lawrenson-Woods, Self-Health Advocate
Sound engineering: Paddy from Goosewing Sounds Ltd (UK)
Hosted on Acast. See acast.com/privacy for more information.