According to the American Medical Association, more than 60% of medical professionals have used AI tools. Large language models like ChatGPT appeal to healthcare administrators because of potential efficiency gains in billing, diagnostics, and care management. But is ChatGPT HIPAA compliant?
What Factors Determine HIPAA Compliance for LLMs?

AI tools are like Gmail: they can be HIPAA-compliant, but compliance doesn't happen by default. Your organization is responsible for ensuring that all of the following are true:
- You have a signed Business Associate Agreement (BAA) with all third-party providers.
- Vendor systems, infrastructure, and controls meet the requirements of the HIPAA Security Rule.
- Third-party access controls and cybersecurity policies adhere to the HIPAA Privacy Rule, preventing unauthorized individuals, including the vendor's own employees, from viewing records.
- Patient data is encrypted at rest and in transit.
- All interactions with protected health information (PHI) are correctly logged and audited (see the sketch after this list).
- Your team regularly performs risk assessments related to vendors and AI tools.
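HIPAA doesn't dictate a specific implementation for any of these items, but the audit-logging requirement is easy to picture in code. Below is a minimal Python sketch; the `log_phi_access` helper and its field names are our own illustration, not something HIPAA or any vendor API prescribes. The point is that the log records who touched PHI, when, and through which action, without copying the PHI itself into the log.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: field names and destination are
# illustrative, not prescribed by HIPAA or any vendor API.
audit_log = logging.getLogger("phi_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("phi_audit.jsonl"))

def log_phi_access(user_id: str, action: str, record_id: str, prompt: str) -> None:
    """Record who touched PHI, when, and how -- without storing the PHI itself."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,        # e.g. "llm_summarize_note"
        "record_id": record_id,  # internal identifier, not the PHI
        # Hash the prompt so auditors can later verify exactly what
        # was sent without the log becoming a second copy of the PHI.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(entry))

log_phi_access("clinician-042", "llm_summarize_note", "rec-98765", "Patient note text...")
```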
A good starting point for HIPAA compliance is to look for vendors with HITRUST certification. You can't simply take a software supplier's word for it.
Is ChatGPT HIPAA Compliant?
Until recently, the answer to this question was a definite "no." OpenAI was unwilling to sign a Business Associate Agreement, so any transmission of PHI to ChatGPT was automatically a HIPAA violation.
As of 2025, things have changed, but only slightly. According to OpenAI’s FAQ page, the company may sign a BAA on request in certain situations. The page also states that “our API platform can be a great fit for any covered entity or business associate looking to process protected health information.”
To get a BAA, you must contact the company at [email protected] and explain your use case. Only ChatGPT Edu or Enterprise customers are eligible.
Additionally, only the ChatGPT API provides access to configurations that can meet HIPAA Privacy Rule requirements. You would need to deploy the model on HIPAA-compliant infrastructure, with the training dataset hosted locally or on a compliant cloud server.
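If your organization does secure a BAA and uses the API, the request itself looks like any ordinary OpenAI API call; the compliance work lives in the surrounding contracts and account configuration. Here is a minimal sketch using the official `openai` Python package. Note the assumptions: `deidentified_note` is a stand-in for your own data pipeline, and Zero Data Retention (discussed below) is arranged with OpenAI at the account level, not toggled per request.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for text that has already been de-identified or that
# your signed BAA explicitly covers.
deidentified_note = "Patient presented with [SYMPTOMS]; follow-up scheduled."

# Assumption: a BAA is in place and retention settings were arranged
# with OpenAI at the account level. Nothing in this request body
# makes the call compliant by itself.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize clinical notes for billing review."},
        {"role": "user", "content": deidentified_note},
    ],
)
print(response.choices[0].message.content)
```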
What About Datasets With De-Identified PHI?

Some healthcare LLM providers meet HIPAA requirements by fully de-identifying the patient data used with AI models. This is acceptable as long as the process follows one of the two methods HHS recognizes: Safe Harbor, which strips all 18 specified categories of identifiers, or Expert Determination. Once all personally identifiable information is removed from a dataset, the content can be used safely in an LLM. Unfortunately, this process is time-intensive and costly.
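To see why, here is a deliberately minimal Python sketch of Safe Harbor-style redaction. The patterns and placeholder labels are illustrative only, covering just a few of the 18 identifier categories:

```python
import re

# Illustrative Safe Harbor-style redaction. Real de-identification
# must remove all 18 HIPAA identifier categories (names, geographic
# subdivisions, dates, phone numbers, SSNs, MRNs, and more); these
# few regexes are a sketch, not a compliant implementation.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt. John Doe, MRN: 4471023, DOB 03/14/1962, SSN 123-45-6789."
print(redact(note))
# -> "Pt. John Doe, [MRN], DOB [DATE], SSN [SSN]."
```

Notice that the patient's name sails straight through. Reliably catching names, addresses, and free-text dates requires specialized NLP tooling or expert review, which is exactly where the time and cost come from.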
Should You Use ChatGPT for Healthcare?
Enterprise healthcare organizations may be able to make the ChatGPT API HIPAA-compliant through careful management of security settings. But only certain file types are compatible with the required Zero Data Retention or Modified Abuse Monitoring settings. In the end, the costs and risks may outweigh the potential benefits.
Any other version of ChatGPT, free or paid, puts you in violation of HIPAA standards the moment PHI is entered. Doctors, nurses, and other workers can easily expose sensitive patient data to ChatGPT from their smartphones, and chat records are stored in plaintext, as recent leaks have shown. To avoid penalties, employers must make it clear that all non-approved LLM applications are prohibited.
Stay Up-to-Date With ChatGPT HIPAA Compliance

Healthcare technology is improving quickly, so ChatGPT and other tools may evolve to better support HIPAA compliance. In the meantime, careful compliance management and employee training are a must. Choose a HIPAA compliance solution that helps you customize and automate your controls based on real-world usage. Request a Compyl demo today.

