Artificial intelligence is quickly reshaping healthcare. It now supports diagnostic imaging, clinical decision tools, patient messaging and back-office workflows. According to the World Economic Forum, 4.5 billion people still lack access to essential care, and the global health worker shortage could reach 11 million by 2030. AI could help close that gap.
However, as AI becomes more embedded in care delivery, regulators are zeroing in on a simple question: should patients be told when AI plays a role in their care?
In the United States, no single federal law requires broad AI disclosure in healthcare. Instead, a growing patchwork of state laws is filling that gap. Some states require clear disclosure. Others mandate transparency indirectly through limits on how AI can be used.
Transparency is not a technical detail; it is a trust issue. Research across industries shows people expect to be informed when AI affects decisions that matter to them. In healthcare, that expectation is even stronger. An analysis published by CX Today found that when AI use is hidden, trust erodes quickly, even when outcomes are accurate.
Healthcare depends on trust. Patients follow treatment plans, share sensitive information and stay engaged when they believe care decisions are ethical and accountable.
While HIPAA does not directly regulate artificial intelligence, its principles still apply. Covered entities must clearly explain how protected health information is used and safeguarded.
When AI systems analyze patient data or generate clinical information from it, nondisclosure undermines that transparency. Patients may not fully understand how their information shapes care decisions.
Disclosure also supports informed consent. Patients have the right to understand material factors influencing diagnosis, treatment, or care communications. Just as clinicians disclose new procedures or medical devices, meaningful AI use should be explained so patients can ask questions and stay involved in their care.
AI disclosure means informing patients or members when artificial intelligence systems are used in healthcare-related decisions. This can include clinical messages, diagnostic support tools, utilization review, claims processing or coverage determinations. The goal is transparency, accountability and patient trust.
According to analysis from Morgan Lewis, disclosure requirements most often apply when AI is used for:
- Clinical decision-making
- Utilization review
- Claims processing
- Coverage determinations
These areas are considered high impact because they directly affect access to care and understanding of health information.
Healthcare organizations that fail to disclose AI use face real consequences. These include increased litigation risk, reputational damage and erosion of patient trust. Ethical concerns around autonomy and transparency can also trigger regulatory scrutiny.
States are taking different paths to regulate healthcare AI, but most are starting with one common goal: greater transparency when technology influences care.
California has taken one of the most comprehensive approaches.
- AB 3030 requires clinics and physician offices that use generative AI for patient communications to include a clear disclaimer. Patients must also be told how to reach a human healthcare professional; a brief sketch of what that disclosure can look like in practice follows this list.
- SB 1120 applies to health plans and disability insurers. It requires safeguards when AI is used for utilization review, mandates disclosure, and requires that licensed professionals make medical necessity determinations.
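The statute does not prescribe exact wording or a particular implementation. As a rough illustration only, a patient-messaging system might append a standard disclaimer and a human contact line to any AI-drafted message before it is sent. The function name, disclaimer text and contact line below are hypothetical, not language from the law or any vendor's product.

```python
# Hypothetical sketch: appending an AB 3030-style disclosure to an AI-drafted
# patient message. Wording and names are illustrative, not statutory language.

AI_DISCLAIMER = (
    "This message was generated with the assistance of artificial intelligence "
    "and reviewed under our clinic's policies."
)
HUMAN_CONTACT = (
    "To speak with a human member of your care team, call our office at the "
    "number listed in your patient portal."
)

def add_ai_disclosure(message_body: str, ai_generated: bool) -> str:
    """Return the outgoing message, adding disclosure text when AI drafted it."""
    if not ai_generated:
        return message_body
    return f"{message_body}\n\n{AI_DISCLAIMER}\n{HUMAN_CONTACT}"

if __name__ == "__main__":
    draft = "Your recent lab results are within normal limits."
    print(add_ai_disclosure(draft, ai_generated=True))
```

Whatever the exact mechanics, the point of the requirement is that disclosure happens consistently, not message by message at a clinician's discretion.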
Colorado's SB 24-205 targets AI systems considered high risk. These are tools that materially influence decisions like approval or denial of healthcare services.
Entities must implement safeguards against algorithmic discrimination and disclose AI use. While broader than clinical care alone, the law directly affects patient access decisions.
Utah has layered disclosure rules that intersect with healthcare.
- HB 452 requires mental health chatbots to clearly disclose AI use.
- SB 149 and SB 226 extend disclosure requirements to regulated occupations, including healthcare professionals.
This approach ensures transparency in therapeutic interactions and clinical services.
Several other states are moving in the same direction. Massachusetts, Rhode Island, Tennessee and New York are all considering or enforcing rules that require disclosure and human review when AI influences utilization review or claims outcomes. Even when clinical diagnosis is not covered, these laws push accountability where AI affects care access.
If you are a patient, expect more transparency. You may see disclosures in messages, coverage notices or digital interactions. If you work in healthcare, AI governance is no longer optional. Disclosure practices must align across clinical, administrative, and digital systems. Training staff and updating patient notices will matter as much as the technology itself. Trust will increasingly depend on how openly AI is introduced into care.
AI can improve efficiency, expand access, and support clinicians. Yet its value depends on trust. Disclosure does not slow innovation; it strengthens confidence in both the technology and the professionals who use it. As states continue to act, transparency will likely become the norm rather than the exception in healthcare AI.