Artificial Intelligence in Small Medical Practices: Legal Considerations for Michigan Providers Prior to Implementation
Artificial intelligence (AI) tools are increasingly being adopted by independent physician groups, dental practices, behavioral health providers, and other small outpatient providers across Michigan for purposes including clinical documentation, coding support, patient communication, appointment scheduling, and revenue cycle management.
Unlike large health systems, small practice environments frequently implement these tools without dedicated compliance review or formal data governance policies. As a result, many providers may unknowingly expose themselves to liability under federal and Michigan privacy and consumer protection laws when integrating AI-enabled platforms into clinical or administrative workflows.
Many commercially available AI products currently marketed to small practices process or rely upon Protected Health Information (PHI) and, in certain instances, may generate outputs that influence patient care decisions or treatment recommendations. Although there is currently no federal law or regulation explicitly governing the use of AI in health care, on January 3, 2026, the Department of Health and Human Services (HHS) issued a Request for Information (RFI) in furtherance of its December 4, 2025, AI Strategy, seeking public comment on the adoption and use of AI as part of clinical care. The comment period closed on February 23, 2026, and resulted in the submission of 7,330 comments. While it is unclear when HHS will issue any rules specific to the use of AI in health care, existing laws, regulations, and ethical considerations should be front and center when contemplating the addition of AI to your practice.
HIPAA and Part 2 Compliance Implications
To the extent an AI vendor creates, receives, maintains, or transmits PHI on behalf of a provider, that vendor is likely acting as a Business Associate under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). If the provider also creates or maintains Substance Use Disorder (SUD) treatment records, compliance with the separate, stricter federal confidentiality regulations at 42 C.F.R. Part 2 ("Part 2") is also required. In either circumstance, providers must ensure that an appropriate Business Associate Agreement (BAA) is in place prior to utilizing the platform.
Failure to execute a BAA or confirm vendor safeguards may result in impermissible disclosures of PHI under the HIPAA Privacy and Security Rules (45 C.F.R. Parts 160 and 164) and Part 2, particularly where patient information is uploaded into AI tools for purposes such as documentation summarization or automated messaging.
Michigan Data Breach Notification Requirements
In addition to HIPAA and Part 2 obligations, Michigan providers should be aware that a security incident involving AI platforms may independently trigger notification requirements under the Michigan Identity Theft Protection Act, MCL 445.61 et seq., which requires notice following a breach of a database containing certain personal information.
Recent health care-related cyber incidents in Michigan involving third-party vendors have prompted renewed consumer alerts from the Michigan Attorney General emphasizing the risks associated with unauthorized access to patient data maintained by electronic systems.
Use of AI-enabled tools without appropriate contractual safeguards or security review may increase the likelihood of vendor-related breach exposure.
Clinical Documentation and Decision-Making Risks
Certain AI platforms currently marketed to outpatient providers are capable of generating visit summaries, coding recommendations, or patient communications with minimal user input. Incorporation of such AI-generated outputs into the medical record without clinician verification may create liability risk in the event of documentation errors, inaccurate treatment instructions, or algorithmic bias affecting clinical decision-making.
Providers should consider implementing internal review protocols to ensure that AI-assisted documentation or patient-facing communications are reviewed and approved by a licensed professional prior to inclusion in the patient chart or transmission to the patient.
Consumer Protection and Marketing Considerations
Providers marketing AI-assisted services should exercise caution in making claims regarding diagnostic accuracy, treatment efficacy, or regulatory approval associated with such technologies.
Federal regulators, including the Federal Trade Commission, have identified misleading claims regarding AI capabilities as a potential enforcement priority in the health care and technology sectors.
Practical Considerations for Implementation
Prior to adopting AI-enabled tools within clinical or administrative workflows, small practices may wish to:
- Confirm whether the vendor processes PHI;
- Execute a HIPAA-compliant BAA where applicable;
- Evaluate vendor data storage and retention practices;
- Prohibit staff from inputting PHI into consumer AI platforms;
- Require clinician review of AI-generated documentation; and
- Update existing HIPAA Security Rule risk analyses to account for AI-enabled systems.
Conclusion
While AI technologies may offer operational efficiencies for small medical practices, integration of such tools without appropriate contractual protections, internal policies, and clinical oversight may expose providers to regulatory risk under both federal privacy laws and Michigan breach notification statutes.
Health care providers should consult legal counsel prior to implementing AI-enabled platforms that interact with patient information or influence clinical workflows.
Please feel free to contact the authors of this Client Alert or your Butzel attorney for more information.
Debra Geroux
248.258.2603
geroux@butzel.com
Jennifer Dukarski
734.213.3427
dukarski@butzel.com
Maria Sesi
248.593.2099
sesi@butzel.com