What exactly is OpenAI offering, and who qualifies for free access?

Starting April 22, verified U.S. physicians, nurse practitioners, and pharmacists can use ChatGPT for free through medical license verification, removing the $20/month subscription barrier.

On April 22, OpenAI announced that verified U.S. physicians, nurse practitioners, and pharmacists can use ChatGPT for free. Previously, accessing ChatGPT beyond the limited free tier required a $20/month Plus subscription. That barrier is now gone for licensed clinicians. The verification process uses a third-party service to confirm that your medical license is active and valid in a U.S. state. You submit your license credentials, and if verification succeeds, ChatGPT is free forever (or at least, for as long as OpenAI maintains the program).

This is a deliberate move to lower adoption friction. A $20 monthly subscription doesn't sound like much, but it adds up: you have to decide the tool is worth it, enter a credit card, and manage yet another subscription. For a busy clinician, that was enough to keep many from trying ChatGPT at all. Removing the cost removes the friction. The remaining barriers are technical (you need an internet connection and a web browser) and professional (you need to decide whether the tool is appropriate for your patient and your workflow).

What are the legitimate clinical use cases — and what should clinicians absolutely not use it for?

ChatGPT can help draft notes, summarize research, and compose patient communications, but it absolutely cannot diagnose, prescribe, or replace clinical judgment in any context.

Here's what ChatGPT absolutely cannot do: diagnose. It cannot prescribe. It cannot make clinical decisions. Its training data has a knowledge cutoff, so it doesn't know about very recent studies, and it performs unreliably on ultra-rare conditions for which it has few training examples. It can hallucinate, generating plausible-sounding but completely false information, and a clinician in a hurry might not catch the hallucination. Using ChatGPT as a second opinion on a diagnosis is dangerous. Using it to draft documentation while you remain the decision-maker is safe.

OpenAI's own framing is careful: "clinical care" support, not clinical decision-making. The distinction is critical. Support means it helps you do your job faster. Decision-making means it replaces your judgment. ChatGPT is legitimately useful at the former. It's dangerous at the latter.

| Use case | Legitimate? | Reason | Risk level |
|---|---|---|---|
| Draft a clinical note based on the physician's findings | Yes | Physician makes the diagnostic decision; AI formats the documentation | Low (physician retains control) |
| Summarize a recent journal article | Yes | AI assists research; physician evaluates relevance | Low (output is reviewed) |
| Draft patient education materials | Yes | Physician reviews and edits before use with the patient | Low (physician has final say) |
| Ask ChatGPT what diagnosis a patient probably has | No | Diagnosis requires clinical judgment and examination; AI can hallucinate | High (medical error risk) |
| Use ChatGPT to prescribe medication | No | Prescribing requires full patient history, drug interaction checking, and licensure | Critical (legal liability) |
| Use ChatGPT instead of consulting a specialist | No | Specialist expertise cannot be replicated by a language model | High (patient harm risk) |
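For anyone wiring the low-risk pattern from the table into an actual workflow, the division of labor looks roughly like the sketch below: the physician supplies the findings and the assessment, and the model only formats them into a draft for the physician to review. This is an illustrative sketch using the OpenAI Python SDK; the model name and prompts are assumptions, not part of OpenAI's announcement, and no real patient data should be sent until the data-handling questions in the next section are answered.

```python
# Illustrative sketch only: the physician supplies ALL clinical content;
# the model is used purely to format it into a draft note for review.
# The model name and prompts are assumptions, not from OpenAI's announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_note(findings: str, assessment: str) -> str:
    """Format physician-provided findings into a draft note for review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever your account offers
        messages=[
            {
                "role": "system",
                "content": (
                    "You format clinical documentation. Use ONLY the findings "
                    "and assessment provided. Do not add diagnoses, medications, "
                    "or recommendations that are not explicitly given."
                ),
            },
            {
                "role": "user",
                "content": f"Findings:\n{findings}\n\nAssessment:\n{assessment}",
            },
        ],
    )
    return response.choices[0].message.content

# The physician remains the decision-maker: the draft is reviewed and
# edited before it goes anywhere near the chart.
draft = draft_note(
    findings="BP 142/90, mild ankle edema, no chest pain, meds reconciled.",
    assessment="Hypertension, suboptimally controlled; adjust therapy per plan.",
)
print(draft)
```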

How does the verification process work, and what safeguards are in place?

Medical license verification is a one-time step through a third party, but OpenAI hasn't clearly said how patient data shared in conversations is handled or whether it's used for model training.

You initiate verification through ChatGPT by clicking a "Verify as Healthcare Professional" button. You're directed to a third-party service that confirms your medical license. The third party doesn't give OpenAI access to your license data — they simply confirm that a license with your credentials exists and is current. Once verified, you get free access. The verification is one-time, tied to your ChatGPT account.

OpenAI hasn't yet published detailed terms on how patient data shared with ChatGPT during free clinical access will be handled. This matters. If a clinician pastes a patient note into ChatGPT (even with the patient's name redacted), is that data used to train future models? Is it logged? OpenAI's terms for general ChatGPT say business accounts aren't used for training by default, but the healthcare tier may have different rules. Clinicians need explicit clarity on data handling before they start using this for real clinical work.
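Until that clarity exists, anything pasted in should be scrubbed first. Below is a minimal, illustrative redaction sketch; the patterns are assumptions and fall far short of real de-identification (HIPAA's Safe Harbor standard lists 18 identifier categories), but it shows the kind of preprocessing step a cautious workflow would include.

```python
# Minimal, illustrative redaction pass. The patterns below are assumptions
# and do NOT cover the 18 HIPAA Safe Harbor identifier categories; real
# de-identification needs a vetted tool, not a handful of regexes.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-shaped numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # slash dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.I), "[MRN]"),       # record numbers
]

def scrub(note: str) -> str:
    """Replace obviously identifying patterns before any text leaves the building."""
    for pattern, placeholder in REDACTIONS:
        note = pattern.sub(placeholder, note)
    return note

print(scrub("Pt Jane Roe, MRN 8841207, seen 4/22/2026, cb 555-867-5309."))
# -> "Pt Jane Roe, MRN [MRN], seen [DATE], cb [PHONE]."
# Note the name still leaks: names need NER or manual review, which is
# exactly why regex-only scrubbing is not de-identification.
```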

What this means in the race between OpenAI and Google for healthcare AI

Google launched MedGemma, a family of open-weight models trained specifically on medical data, in May 2025, and it is making its own play for clinician adoption. The timing is no coincidence. Both companies see healthcare as a critical market for AI adoption. Clinicians are credible, high-touch decision-makers. If Google and OpenAI can make AI standard in clinical workflows, it validates the technology across other professional domains.

OpenAI's strategy is democratization through cost removal. Google's is precision through specialized models. OpenAI is saying "we'll make our general AI free for clinicians." Google is saying "we'll build AI specifically for medicine and prove it's better." Both approaches have merit. The real outcome will likely be that both tools are used — ChatGPT for general documentation and research, specialized models for complex diagnostic support.

What's clear is that healthcare AI is no longer a curiosity. It's a mainstream strategic priority for both companies. Clinicians who've been hesitant to try AI because of cost or skepticism now have clear signals that adoption is coming. The question is no longer whether AI will be used in clinical workflows. It's when, and on what terms.

Should clinicians disclose to patients when AI helps write their clinical notes?

When AI generates clinical content, clinicians must disclose that to patients. Patients have the right to know if documentation was drafted by AI.

Clinicians have an ethical obligation to disclose when they're actively using AI in a patient's care, especially if that AI is generating clinical content (documentation, diagnoses, or recommendations). Patients should know whether a note in their record was drafted by a doctor or by a language model (even if a doctor reviewed it). Some patients may have no objection. Others may want that information before proceeding.

Building patient trust in AI-assisted medicine requires transparency. A physician who says "I used ChatGPT to help draft your discharge instructions" and explains why (faster turnaround, standardized language) is more trustworthy than one who uses it secretly and never mentions it. The onus is on clinicians to make the AI use visible and explain the reasoning. OpenAI can't enforce that, but professional medical societies can, and many are developing guidance now.
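One lightweight way to operationalize that visibility is to stamp every AI-assisted draft with a provenance line before it is filed, as in the hypothetical sketch below; the wording and field names are illustrative, not drawn from any society's guidance.

```python
# Hypothetical provenance stamp for AI-assisted drafts. The wording and
# fields are illustrative; follow your institution's and specialty
# society's actual documentation guidance.
from datetime import date

def stamp_provenance(draft: str, reviewer: str, tool: str = "ChatGPT") -> str:
    """Append a disclosure line so AI assistance is visible in the record."""
    footer = (
        f"\n---\nDrafted with {tool} assistance on {date.today():%Y-%m-%d}; "
        f"reviewed, edited, and approved by {reviewer}."
    )
    return draft + footer

note = stamp_provenance("Discharge instructions: ...", reviewer="Dr. A. Rivera")
print(note)
```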
