Ask a Data Ethicist: Can You (Ever) Safely Use ChatGPT for Researching a Medical Condition?

Up until recently, my very succinct and direct answer to this question would have been – HARD NO! Don’t do it. There are far too many examples of why this is not a good idea. But life is nuanced and complex. At a recent talk, I met someone whose family has greatly benefited from using a chatbot in this way, so I wanted to unpack the question in greater detail for this month’s column …

Can you (ever) safely use ChatGPT for researching a medical condition?

Context of the Case

It’s helpful to have a little more context about this particular situation. This person has a child with a rare condition that nobody in the medical system seemed to be able to diagnose. Their medical journey spanned a decade, three countries, and dozens of doctors, including specialists, with no solutions. Enter ChatGPT.

They used the bot to ask questions about symptoms and possible conditions. They did enter sensitive, personal medical information (more on that in a minute). They were able to get a short list of suggested medical conditions, which they took to their doctor. The doctor ran some further tests and was able to confirm that one of the suggested conditions appeared to be the correct diagnosis. After receiving the appropriate treatment for that condition, their child recovered. The family is grateful that they had access to the chatbot.

Privacy and Sensitive Data

My first reaction to hearing this story about using a chatbot to research a medical condition was one of alarm – especially the part about entering sensitive medical information into the AI system. We were able to have a conversation about privacy and about how these models might leak data or be trained on it. We discussed the limits of privacy laws when it comes to protecting data that goes into these systems. We talked about strategies to mask information inputs and reduce risk. This was all new information for them, and I was glad to be able to provide it so that, moving forward, they can make more informed decisions.

But I didn’t just educate them; they also educated me. I very much see my role as that of an advisor, helping people understand the risks of engaging with AI tools. My role is not to stand in judgement of people’s choices. While I hold my own personal stance on this issue in terms of the choices I would make, at the end of the day, this is their decision, not mine. I do not share their lived experience of being a parent desperate to help their child. I did not spend years going from doctor to doctor and getting nowhere. We can talk abstractly about how they should have gotten the help they needed or rant about a failing medical system without enough resources*, but doing those things did not change their experience of the system. Using the chatbot delivered results, and they are not alone in choosing this option.

Use Outputs as Input for a Conversation with a Doctor

We also talked about the outputs of a large language model (LLM) and how it’s not “knowledge” or “understanding” or a “diagnosis” – it’s sophisticated next-word prediction and pattern matching. That said, the outputs can sometimes be useful and can sometimes be verified as accurate. They already understood that they should not simply trust the outputs of the system, which is why they took the information to their doctor. But not everyone will know that. Combined with a shortage of access to doctors, people might act on the outputs of a chatbot, which could be extremely dangerous. With that in mind, here is my advice:

  • Don’t use AI as your first point of inquiry. Start with your doctor and the medical system, not a chatbot. For example, in Alberta, Canada, we have 811 – a health link directory to help direct you to appropriate health resources. Other places will have their equivalent directory or authorized resources. 
  • Don’t act on the outputs of the chatbot. Take those outputs to a medical professional as information to foster a conversation. It is dangerous and risky to accept those outputs and act on them, so please do not do that!
  • Don’t input sensitive medical data into the system. As a general rule, I can’t in good faith advise anyone to share sensitive data with a chatbot. However, I do understand why someone might choose to do this. In that case, apply data minimization principles (share only what is necessary), adjust the tool’s settings to opt out of any future use of your data for training, and mask your data in the prompt (a rough illustration follows this list).
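
To make the masking and minimization ideas a bit more concrete, here is a rough, hypothetical sketch in Python. It is entirely my own illustration (not something the family did), and every name, date, and symptom in it is invented. The point is simply to show identifying details being swapped for generic placeholders before a note is pasted into a chatbot prompt.

import re

# Hypothetical example: every detail here is invented for illustration.
raw_note = (
    "Jordan Smith, born 2014-03-22, living in Calgary, has had recurring "
    "joint pain, fatigue, and an intermittent rash for several years."
)

# Swap identifying details for generic placeholders (data minimization:
# keep only the clinically relevant symptoms).
masked_note = raw_note.replace("Jordan Smith", "my child")
masked_note = re.sub(r"born \d{4}-\d{2}-\d{2}", "a school-aged child", masked_note)
masked_note = re.sub(r"living in \w+", "living in a mid-sized city", masked_note)

prompt = (
    "What conditions could plausibly explain these symptoms, and what "
    "questions should I ask a doctor about them? " + masked_note
)
print(prompt)

You can achieve the same effect by hand, of course: strip out names, birth dates, locations, and any other identifiers, and keep only the symptoms and the questions you actually need answered.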

In January, OpenAI announced a ChatGPT for health, which it claims has greater safeguards. I have not looked into these claims further and have no experience using the tool. There may also be other reasons why a person would not choose OpenAI.

This is a controversial topic. I’m already anticipating blowback from people in my field who might think it’s irresponsible to even entertain this use case. Yet, the family I met and their healthcare story deeply impacted me and that’s why I wanted to share it.

*The counterargument will be that if we capitulate to using these tools and do not change the system, we get a worse system. I agree with that perspective. I think we need to advocate for bigger changes, but that doesn’t always help in the moment – so it’s a both/and situation.

Send Me Your Questions!

I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at [email protected] or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well. 

This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.
