The question practices are actually asking
We hear variations of this question constantly: "Can I use ChatGPT to summarize patient notes?" "Can my front desk use it to draft appointment reminders?" "Is it okay if I copy a clinical note into ChatGPT to clean it up?"
The answer isn't "no" across the board. It's "it depends on which ChatGPT, and the version your staff is likely using by default almost certainly has no HIPAA coverage."
The plan distinction that matters
OpenAI offers several ChatGPT products. From a HIPAA standpoint, they split into two completely different categories:
No BAA available (do not use with PHI)
- ChatGPT Free: No BAA. No healthcare data use.
- ChatGPT Plus ($20/mo): No BAA. Despite the paid tier, this is still a consumer product with no HIPAA coverage. This is the version most individual providers have on their phones.
- ChatGPT Team: No BAA. Team plans offer some data privacy improvements over consumer tiers, but OpenAI does not offer a BAA for Team plan customers.
BAA available (under specific conditions)
- ChatGPT Enterprise: OpenAI offers a BAA with Enterprise contracts. This requires a negotiated contract (not a credit card signup), and the BAA covers usage within the Enterprise tenant. Pricing is not public; it's a sales-negotiated contract typically suited to organizations with 150+ users.
- OpenAI API with Zero Data Retention (ZDR): For developers and organizations building on the API, OpenAI offers a BAA in conjunction with Zero Data Retention. ZDR means OpenAI does not store or retain API inputs and outputs after the request is processed, and that data is not used for model training. This must be explicitly requested and approved; the default API does not have ZDR enabled.
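For practices with development resources evaluating the API path, here is a rough sketch of what a call looks like, assuming the official openai Python package. The model name and prompt are placeholders, and note that ZDR and the BAA are account-level agreements negotiated with OpenAI, not something you enable in code.

```python
# Minimal sketch of an OpenAI API call (assumes the official `openai` Python package).
# IMPORTANT: a BAA plus Zero Data Retention is an account-level agreement negotiated
# with OpenAI, not a request parameter. Do not send PHI until both are in place.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your agreement covers
    messages=[
        {"role": "system", "content": "You summarize clinical documentation."},
        {"role": "user", "content": "<note text goes here; PHI only if BAA and ZDR are signed>"},
    ],
)

print(response.choices[0].message.content)
```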
What counts as PHI in a ChatGPT prompt?
Providers often ask whether "de-identified" notes can go to ChatGPT without a BAA. The standard for de-identification under HIPAA's Safe Harbor method (45 CFR §164.514(b)) is specific: all 18 categories of identifiers must be removed, including names, dates more specific than the year, geographic subdivisions smaller than a state, and several others.
In practice, a "cleaned up" clinical note still containing the patient's age, diagnosis, visit date, and the provider's name is not de-identified under HIPAA standards. Most providers applying informal de-identification are not meeting the Safe Harbor threshold.
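As a rough illustration of why informal scrubbing falls short, here is a sketch of a naive identifier scan. The patterns are illustrative assumptions covering only a handful of the 18 Safe Harbor categories; a note that passes a check like this is still not de-identified.

```python
import re

# Illustrative only: naive patterns for a few Safe Harbor identifier categories.
# Real de-identification requires removing all 18 categories (or an expert
# determination), which simple pattern matching cannot guarantee.
NAIVE_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),          # 3/14/2024
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),          # 555-867-5309
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # 123-45-6789
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),          # name@example.com
    "mrn_like": re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE),   # MRN: 0012345
}

def flag_obvious_identifiers(text: str) -> dict[str, list[str]]:
    """Return obvious identifier hits. An empty result does NOT mean the text
    is de-identified under 45 CFR §164.514(b)."""
    return {name: pat.findall(text) for name, pat in NAIVE_PATTERNS.items() if pat.findall(text)}

note = "67-year-old female seen 3/14/2024 for T2DM follow-up, call 555-867-5309."
# Catches the visit date and phone number; names, addresses, and most other
# identifier categories would slip straight through.
print(flag_obvious_identifiers(note))
```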
Examples of prompts that likely contain PHI:
- "Summarize these notes for a 67-year-old female patient with Type 2 diabetes seen on [date]"
- "Help me write a referral letter for John S., DOB [date], with the following history..."
- "Write an insurance appeal for the denial of [procedure code] for a patient with [diagnosis]" (when combined with any identifying detail)
- Any uploaded document containing patient records, even if the patient's name has been removed but other identifiers remain
The shadow IT problem
Even if a practice has a policy against using ChatGPT with patient data, enforcement is a separate issue. ChatGPT is free, fast, and available on every employee's personal phone. We've seen:
- Billing staff using ChatGPT Plus on personal devices to draft insurance appeals
- Providers using ChatGPT to clean up dictated notes between appointments
- Administrative staff using it to draft patient communication templates that inadvertently include PHI context from recent cases
- Clinical staff using ChatGPT to summarize medical literature and including patient scenarios as context
A written policy saying "don't use ChatGPT" is not a technical safeguard. HIPAA's Security Rule requires both administrative and technical controls, and written policy alone doesn't satisfy the technical safeguards at 45 CFR §164.312.
The compliant path for practices that want to use AI
If you want to use AI for documentation, drafting, or summarization, and there are real productivity gains available, the approach is:
- Assess current use first. Find out what AI tools staff are already using, on what devices, and with what data. You can't remediate exposure you don't know about.
- Block consumer AI on practice networks. DNS-layer filtering can block access to chatgpt.com, gemini.google.com, and other consumer AI tools on practice-managed networks and devices. This creates a technical barrier alongside policy; a quick way to verify the block is in place is sketched after this list.
- Deploy a compliant alternative. For most practices, this means Microsoft 365 Copilot (if they're already on M365 E3/E5 with a BAA) or a purpose-built AI scribe tool with a proper BAA in place. The user experience is comparable; the compliance posture is entirely different.
- Execute a BAA before PHI flows. This sounds obvious but it's the step most practices skip when evaluating AI tools. A BAA review should happen before any pilot deployment, not after.
- Train staff on the why, not just the what. Staff who understand the actual risk ("this sends patient data to a company that can use it to train AI models, and you and the practice can be held liable") are more likely to comply than staff who are simply told "don't use it."
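As referenced in the network-filtering step above, here is a small verification sketch. The domain list is illustrative, and it assumes your DNS filter either refuses to resolve blocked names or answers with a sinkhole address such as 0.0.0.0.

```python
import socket

# Illustrative domain list -- adjust to whatever your DNS filtering policy covers.
CONSUMER_AI_DOMAINS = [
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
]

SINKHOLE_ADDRESSES = {"0.0.0.0", "127.0.0.1"}  # common "blocked" answers from DNS filters

def check_block(domain: str) -> str:
    """Report whether a domain resolves normally, resolves to a sinkhole, or fails to resolve."""
    try:
        address = socket.gethostbyname(domain)
    except socket.gaierror:
        return f"{domain}: BLOCKED (does not resolve)"
    if address in SINKHOLE_ADDRESSES:
        return f"{domain}: BLOCKED (sinkholed to {address})"
    return f"{domain}: reachable at {address} -- not blocked"

if __name__ == "__main__":
    for d in CONSUMER_AI_DOMAINS:
        print(check_block(d))
```

Run it from a practice-managed device; any domain reported as reachable is a gap between the written policy and what the network actually enforces.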
The goal isn't to prevent AI use. It's to ensure AI use happens through channels with appropriate safeguards. There are good compliant options at every budget level.
Bottom line
Can doctors use ChatGPT with patient data? With ChatGPT Free or Plus: no. With ChatGPT Enterprise and a signed BAA: yes, within the terms of the agreement. With the OpenAI API and Zero Data Retention: yes, for organizations building applications.
The more important question for most practices isn't about policy — it's about what's actually happening right now on staff devices, and whether there's a compliant path in place for the productivity use cases AI is genuinely good at.