Federal AI Legislation and Regulation
Congress
To date, Congress has not passed significant legislation directly affecting the use of AI in healthcare, focusing instead on oversight and policy development. Congressional Republicans have increasingly favored an industry-agnostic framework of laws that supports innovation while addressing targeted risks such as child safety, intellectual property and infrastructure costs. The framework also aims to establish a federal standard for AI use that would preempt state laws deemed burdensome.
This direction was reflected in the March 18, 2026, discussion draft of the TRUMP AMERICA AI Act, sponsored by Sen. Marsha Blackburn. The draft seeks to codify the Administration’s AI executive actions, establish new requirements for AI developers, and limit states’ ability to regulate AI systems, reflecting a more expansive approach to federal AI governance through detailed statutory provisions.
White House Actions
On Jan. 23, 2025, President Donald Trump released an executive order (EO) that rescinded a Biden-era EO on AI. In October 2023, then-President Joe Biden had signed EO 14110, which established the first set of standards for using AI in healthcare and other industries, calling for greater public oversight and regulation of AI. In compliance with that EO, HHS established an AI Safety Program to track harmful incidents involving AI in healthcare settings, created an AI Task Force and finalized a rule requiring transparency for AI used in certain certified health IT.
Holland & Knight Insight: Executive Order: Removing Barriers to American Leadership in Artificial Intelligence (Jan. 23, 2025).
On April 3, 2025, the Office of Management and Budget (OMB) released two AI-focused memorandums, M-25-21 and M-25-22, with implications for HHS, including the FDA, CMS, NIH and other subagencies. The memos are also important for developers of AI-enabled software and AI/machine learning tools with healthcare applications, as they provide substantial insight into the Trump Administration's positioning on AI applications broadly. The memorandums repeal and replace memorandums introduced under the Biden Administration, though key continuities remain, such as risk management for high-impact AI and an emphasis on fairness, ethics and governance in the procurement of AI tools and products. However, M-25-21 and M-25-22 place greater emphasis on transparency, encouraging innovation and America First policies.
Holland & Knight Insight: Trump Administration Issues AI Memoranda and Executive Order with Government Contracts Impacts (April 21, 2025)
On July 23, 2025, the White House released "Winning the Race: America's AI Action Plan", outlining more than 90 federal actions to boost U.S. leadership in AI. The plan envisions a new era of innovation and progress, driven by three pillars: accelerating AI innovation, building national AI infrastructure and leading in global AI diplomacy and security. To implement this strategy, President Trump signed three EOs: one to promote the export of American AI technologies, another to fast-track permitting for data centers and a third to prohibit federal use of AI systems that incorporate diversity or climate-related criteria.
Holland & Knight Insight: America's AI Action Plan: What's In, What's Out, What's Next (July 25, 2025)
White House Action: Executive Order on Pediatric Cancer – Sept. 30, 2025
The executive order, signed Sept. 30, 2025, doubles federal funding for AI-driven pediatric cancer research and calls for building a genetic database to help identify patterns and personalize treatments. On paper, this sounds promising, but it also means relying on AI systems to handle sensitive data and influence treatment decisions – areas where trust and transparency are still unresolved.
White House Action: Executive Order on the Genesis Mission – Nov. 24, 2025
The second order, signed Nov. 24, 2025, launches the Genesis Mission, described as a Manhattan Project for AI. It aims to create an integrated platform for scientific discovery, including drug development and biomedical research. But the scale and speed of this effort raise questions about privacy, governance, data integrity and whether the infrastructure is ready for such sweeping changes.
Holland & Knight Insight: Genesis Mission Seeks to Bolster Scientific Discovery, National Security, Energy Dominance (November 26, 2025)
White House Action: HHS AI Strategy – Dec. 4, 2025
HHS released its 21-page AI Strategy on Dec. 4, 2025, marking the next phase of its year-long AI effort and building on directives like the AI Action Plan and Executive Order 14179. Led by Acting Chief AI Officer Clark Minor, the strategy aims to integrate AI across internal operations, research and public health programs, positioning AI as a tool to augment – not replace – the federal workforce. It lays out five pillars: governance and risk management, infrastructure design, workforce development, reproducible health research and modernization of care delivery. The plan seeks to move beyond fragmented pilots toward coordinated AI capability, focusing first on internal modernization and later on public-private collaboration.
Holland & Knight Insight: HHS Releases Strategy Positioning Artificial Intelligence as the Core of Health Innovation (December 10, 2025)
White House Action: Executive Order on National Policy Framework for Artificial Intelligence – Dec. 11, 2025
The White House issued an executive order on "Ensuring a National Policy Framework for Artificial Intelligence" on Dec. 11, 2025, establishing a framework for the federal regulation of AI and creating an AI Litigation Task Force to challenge state laws that are inconsistent with federal AI policy objectives. The EO further directs federal agencies to eliminate regulatory obstacles that the administration believes threaten U.S. competitiveness in the global AI race and to restrict funding for states with "onerous AI laws."
Holland & Knight Insight: What to Watch as White House Moves to Federalize AI Regulation (December 15, 2025)
White House Action: OMB Guidance on Procuring Large Language Models – Dec. 11, 2025
The Office of Management and Budget (OMB) released Memorandum M‑26‑04 on Dec. 11, 2025, titled "Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles," establishing two core principles: truth-seeking and ideological neutrality. These principles will guide federal agencies in the acquisition and development of AI systems, including large language models. The goal is to ensure AI outputs are fact-based, scientifically grounded and free from partisan or ideological bias unless explicitly requested by the user.
On March 20, 2026, the White House released a National Policy Framework for Artificial Intelligence, outlining legislative recommendations for Congress to establish a single federal approach to AI governance. The framework calls for preempting state-level AI laws in favor of a national standard, while directing lawmakers to focus federal guardrails on areas such as child safety, free speech protections, intellectual property, workforce impacts and national security.
The framework also urges Congress to codify President Donald Trump’s ratepayer protection pledge, requiring technology companies to supply or pay for the electricity used by AI data centers, and recommends streamlining federal permitting to support AI infrastructure development. Rather than creating a new AI regulator, the White House proposes relying on existing federal agencies for oversight, while preserving limited state authority in areas such as child protection, fraud and consumer safeguards.
Holland & Knight Insight: White House Releases a National Policy Framework for Artificial Intelligence (March 27, 2026)
NIST
NIST released its AI Risk Management Framework (AI RMF 1.0/NIST AI 100-1) in January 2023. The National Artificial Intelligence Initiative (NAII) Act of 2020, Public Law 116-283, 15 U.S.C. 9401 (AI Act), defines "artificial intelligence" as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." Such systems use machine- and human-based inputs to perceive real and virtual environments, abstract those perceptions into models through automated analysis and use model inference to formulate options for information or action. The purposes of the NAII created by the legislation include ensuring U.S. leadership in AI research and development, preparing the workforce to integrate AI systems across all sectors of the economy and society, coordinating research and development, and "lead[ing] the world in the development and use of trustworthy artificial intelligence systems in the public and private sectors."
The AI Act, in Section 5301, directs NIST to:
advance collaborative frameworks, standards, guidelines, and associated methods and techniques for AI
support the development of a risk-mitigation framework for deploying AI systems
support the development of technical standards and guidelines that promote trustworthy AI systems
support the development of technical standards and guidelines by which to test for bias in AI training data and applications
NIST has observed in the AI RMF 1.0 framework that, though AI technologies have potential to promote scientific advancements and economic growth, they also have unique risks that differ from traditional software. AI systems are "socio-technical in nature, meaning they are influenced by societal dynamics and human behavior." AI systems may lead to "inequitable or undesirable outcomes for individuals and communities" unless proper controls are in place. The core of NIST's framework for addressing AI risks focuses on four specific functions: Govern, Map, Measure and Manage.
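As an organizational aid only, the four AI RMF functions can be sketched as a simple compliance checklist. The sketch below is a hypothetical illustration, not part of NIST's publications, and the example activities are assumptions for demonstration, not NIST subcategories.

```python
# Illustrative sketch only: a minimal checklist keyed to the four AI RMF 1.0
# functions (Govern, Map, Measure, Manage). Example activities are hypothetical;
# consult NIST AI 100-1 for the authoritative categories and subcategories.
from dataclasses import dataclass, field


@dataclass
class RmfFunction:
    name: str
    activities: list = field(default_factory=list)
    completed: set = field(default_factory=set)

    def pending(self):
        """Return activities not yet marked complete."""
        return [a for a in self.activities if a not in self.completed]


framework = {
    "Govern": RmfFunction("Govern", ["assign AI risk ownership", "adopt use policy"]),
    "Map": RmfFunction("Map", ["inventory AI systems", "document intended use"]),
    "Measure": RmfFunction("Measure", ["test for bias", "track performance drift"]),
    "Manage": RmfFunction("Manage", ["prioritize risks", "plan incident response"]),
}

# Mark one Map activity done, then list what remains in that function.
framework["Map"].completed.add("inventory AI systems")
print(framework["Map"].pending())  # → ['document intended use']
```

A structure like this is useful only as a tracking aid; the substantive obligations come from the framework text itself.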
NIST released a risk management framework focused on generative artificial intelligence (GenAI) in July 2024 (NIST AI 600-1). The document is designed as a "companion resource" to AI RMF 1.0 and discusses a number of risks that are unique to GenAI or can be exacerbated by it. These include, among other things, "emotional entanglement" caused by humans anthropomorphizing GenAI systems, as well as data privacy risks arising from GenAI's potential to deanonymize health, location or other sensitive data.
HHS
On Dec. 4, 2025, HHS released its 21-page Artificial Intelligence (AI) Strategy in fulfillment of requirements of America's AI Action Plan and Executive Order 14179 ("Removing Barriers to American Leadership in AI"), along with other executive directives (e.g., OMB Memoranda M-25-21 and M-25-22). The strategy represents the next phase of HHS' push to make AI an integral part of the department's operations, including scientific research and public health programs. It also reinforces HHS' aim to use these technologies to enhance efficiency, promote American innovation and improve patient outcomes.
CMS
Beginning in 2023, CMS took several regulatory actions addressing the use of AI, most notably in the context of the prior authorization process conducted by Medicare Advantage Organizations (MAOs). A summary of pertinent CMS actions follows:
Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program Final Rule
Date: April 12, 2023
Applies to: Health Plans, Utilization Review (UR) Companies
Requirement: Requires MAOs to ensure that they are making medical necessity determinations based on the circumstances of the specific individual, as outlined at 42 C.F.R. § 422.101(c), "as opposed to using an algorithm or software that doesn't account for an individual's circumstances." CMS also opined that any use of AI in healthcare, including in UR, must adhere to the Health Insurance Portability and Accountability Act (HIPAA) and that any use of AI should ensure fair and equitable decision-making, as well as mechanisms to review and contest AI-generated decisions.
Effective Date: Jan. 1, 2024
Holland & Knight Insight: Regulation of AI in Healthcare Utilization Management and Prior Authorization Increases (Oct. 31, 2024)

Interoperability and Prior Authorization Final Rule
Date: Jan. 17, 2024
Applies to: Health Plans, UR Companies
Requirement: Requires MAOs to utilize healthcare providers to render final prior authorization (PA) decisions; allows AI provided the algorithm accounts for the individual's clinical conditions. CMS clarified allowed uses of AI in a subsequent FAQ issued on Feb. 6, 2024.
Effective Date: Jan. 1, 2025
Holland & Knight Insight: Regulation of AI in Healthcare Utilization Management and Prior Authorization Increases (Oct. 31, 2024)
Citation(s): 42 C.F.R. § 422.138

Section 1557 Regulations
Date: May 6, 2024
Applies to: All entities receiving HHS funding
Requirement: Section 1557 of the Affordable Care Act prohibits discrimination by nearly all healthcare providers in health programs or activities receiving federal financial assistance. On May 6, 2024, CMS issued final rules implementing Section 1557. Per the Final Rule, covered entities cannot discriminate through the use of "patient care decision support tools," which include automated decision systems and AI used to support clinical decision-making in health programs or activities.
Effective Date: Jan. 1, 2025
Holland & Knight Insight: OCR Shores Up Access to Healthcare with Nondiscrimination Protections (Dec. 18, 2024)
Citation(s): 42 C.F.R. § 92.210
CMS Announcement: ACCESS Model: The CMS Innovation Center's new Advancing Chronic Care with Effective, Scalable Solutions (ACCESS) model tests outcome-aligned payments for technology-enabled chronic care and provides a reimbursement framework that can help providers adopt remote monitoring devices and AI-enabled applications in routine practice. The model is tightly coupled to the FDA's TEMPO pilot.
FDA
The FDA does not regulate AI as such. Rather, the FDA regulates AI as incorporated into software and digital health products based on their intended use, including as a medical device, as a general wellness product and in the development of drugs. The FDA appreciates the potential impact AI can have on the healthcare industry and is updating its guidance to reflect changes in technologies. For example, the FDA published draft guidance, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products," which discusses its current thinking on the use of AI to produce information or data intended to support regulatory decision-making regarding the safety, effectiveness or quality of drugs. Due to the change in administration and new heads of HHS and the FDA, it is unclear whether this draft guidance will hold. Holland & Knight is monitoring changes to the FDA's guidance and regulations related to digital healthcare and AI.
As a general matter, under the 21st Century Cures Act, Congress amended the Federal Food, Drug, and Cosmetic Act to remove certain software functions from the definition of "device." Digital healthcare products (unrelated to drug development) generally fall into three categories: general wellness products, medical devices subject to enforcement discretion and regulated medical devices.
Medical devices. Medical devices are products that are "an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory, which is, among other criteria, intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease in man." The FDA reviews medical devices through an appropriate premarket pathway, such as Premarket Notification (510(k)), De Novo Classification Request or Premarket Approval. A number of digital health medical device applications can fall under the FDA's enforcement discretion, including some mobile medical applications. Digital health medical devices must generally comply with the FDA's medical device requirements, including registration and listing, quality systems, adverse event reporting and recalls.
General wellness products. General wellness products are not regulated as medical devices if they 1) are intended only for general wellness use and 2) present a low risk to the safety of users and other persons. Both factors must be met to market a product as a general wellness product. A number of digital health products can fall within the definition and regulatory oversight of a general wellness product.
Drug Development. Digital health technologies, such as electronic sensors, computing platforms and IT, provide new opportunities to obtain clinical trial data directly from patients and, therefore, assist drug sponsors in drug development activities. Under the Food and Drug Omnibus Reform Act of 2022, the FDA is required to continue to publish guidance on the modernization of clinical trials through decentralized clinical trials and digital health technologies.
FTC
The FTC is responsible for enforcing federal laws relating to unfair or deceptive business practices and unfair methods of competition, including violations of antitrust laws. In the healthcare setting, the FTC has, for example, the ability to take enforcement action for activities such as misuse of protected health information, anti-competitive conduct and deceptive advertising practices. The FTC has not yet taken any direct action using its enforcement authority under the FTC Act to address the use or misuse of AI in the healthcare setting. The FTC has, however, implemented enforcement actions against several non-healthcare companies resulting from the improper marketing of AI-powered products and services. Specifically, as part of Operation AI Comply, the FTC on Sept. 25, 2024, announced cases against five companies for engaging in deceptive advertising regarding their AI products. The FTC has separately ordered companies to delete algorithms that are developed with improperly obtained data and prohibited a company from making false and unsubstantiated representations regarding the accuracy or efficacy of its AI-powered facial recognition technology.
The key takeaways from these actions are that the FTC can take enforcement action against marketers of AI tools who 1) make unsubstantiated claims about their product capabilities, 2) provide an AI product that enables users to engage in deceptive advertising or otherwise engage in prohibited trade practices (such as algorithmic discrimination), and/or 3) train their AI tools utilizing data that is illicitly or illegally obtained. Organizations should follow the FTC's future enforcement approach and actions to stay ahead of risk and maintain best practices.
Moreover, though previous enforcement actions are informative, it is unclear how the FTC will approach these issues under the Trump Administration. The FTC has five commissioners, no more than three of whom may be from the same political party, and is typically chaired by a member of the sitting president's party. FTC Chair Andrew Ferguson, in his previous role as a commissioner, stressed the importance of "strik[ing] a careful and prudent balance" in the FTC's regulation of AI, indicating that he may take a lighter approach to using the FTC's enforcement authority for AI technologies.
Holland & Knight Insights
The FTC Is Regulating AI: A Comprehensive Analysis (July 25, 2023)
Podcast - An FTC Official Speaks About the Regulation of AI Technology (May 1, 2024)
Podcast - Part 2: An FTC Official Speaks About the Regulation of AI Technology (June 12, 2024)
DISCLAIMER
The information presented herein is updated as of May 1, 2025.
This site does not constitute legal advice or establish an attorney-client relationship between you and Holland & Knight. You should consult a licensed attorney to assess and evaluate the contents of this website to verify their accuracy and their applicability to you or your affiliates. By accessing this site, you accept that this tool is not a replacement for legal counsel.
This page summarizes pertinent federal laws, regulations and guidance impacting the use of artificial intelligence (AI) in healthcare, including laws, rules and guidance from the U.S. Congress, National Institute of Standards and Technology (NIST), U.S. Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS), U.S. Food and Drug Administration (FDA), Federal Trade Commission (FTC) and the White House. For more detailed analyses of these and similar laws and emerging issues, see our linked thought leadership page and consult our team of Holland & Knight attorneys.
FDA Announcement: Agentic AI – Dec. 1, 2025
On Dec. 1, 2025, FDA announced it is deploying agentic AI systems – advanced, goal-driven tools designed to plan, reason and execute multi-step tasks. These systems will assist with premarket reviews, surveillance and compliance checks. This could speed up processes, but introducing AI into regulatory decision-making isn't just a technical challenge – it's a trust challenge. How do we ensure accountability when algorithms start influencing approvals?
FDA Announcement: TEMPO Model – Dec. 5, 2025
The FDA has launched the Technology-Enabled Meaningful Patient Outcomes (TEMPO) for Digital Health Devices Pilot, a voluntary program to expand safe access to digital health tools for cardio-kidney-metabolic, musculoskeletal and behavioral health conditions. The pilot is explicitly designed to keep pace with rapid, iterative software development and home-based care models. The FDA will solicit statements of interest beginning Jan. 2, 2026, with plans to select up to 10 manufacturers in each of four clinical use areas aligned to chronic disease management.
Holland & Knight Insights
Podcast - Changes in FDA, Cannabis Policies and AI Developments (May 13, 2024)
Podcast - A Look Into the FDA and USDA Regulatory Regimes (Nov. 16, 2023)
Podcast - A Post-Election Checkup: FDA Policy and Regulation (Dec. 21, 2022)
FDA's Oversight of Digital Health Products and Medical Software (July 14, 2021)
ASTP/ONC
The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology, originally established through Executive Order 13335 and later codified in the Health Information Technology for Economic and Clinical Health Act, is responsible for advancing national health IT infrastructure, including standards development, certification, and interoperability policy. In 2024, ONC was reorganized as ASTP/ONC to reflect a broader role in federal technology and digital health policy, including emerging areas such as artificial intelligence.
Under the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule, which took effect on February 8, 2024, certified health IT developers must meet new transparency requirements for Decision Support Interventions (DSIs), including predictive models. These requirements mandate that users have access to standardized information about how such interventions are developed and function, enabling evaluation across the “FAVES” framework: fairness, appropriateness, validity, effectiveness, and safety. Importantly, these criteria emphasize transparency rather than performance validation.
The rule also requires that certified health IT support the secure and interoperable exchange of electronic health information (EHI), including data relevant to DSIs, consistent with existing interoperability standards and privacy requirements. These provisions apply specifically to certified health IT modules and do not extend to all AI systems used in healthcare.
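To illustrate the transparency (rather than performance-validation) orientation of these requirements, the sketch below shows how a developer might organize documentation for a predictive DSI around the FAVES categories. The field contents, the model name and the completeness check are hypothetical assumptions for demonstration; the rule's actual "source attribute" list differs and should be consulted directly.

```python
# Hypothetical example: transparency documentation for a predictive decision
# support intervention (DSI), organized loosely around the FAVES categories
# (fairness, appropriateness, validity, effectiveness, safety). Field names
# and values are illustrative, not the certification criteria's attribute list.
predictive_dsi_record = {
    "intervention": "sepsis-risk-model",      # hypothetical model name
    "developer": "Example Health IT Vendor",  # hypothetical vendor
    "fairness": "demographic variables used and rationale documented",
    "appropriateness": "intended patient population and care setting",
    "validity": "external validation process and data sources",
    "effectiveness": "outcome measures from deployment monitoring",
    "safety": "known risks and process for reporting adverse performance",
}

FAVES = ("fairness", "appropriateness", "validity", "effectiveness", "safety")


def missing_faves_fields(record):
    """Return FAVES categories with no documented information."""
    return [k for k in FAVES if not record.get(k)]


print(missing_faves_fields(predictive_dsi_record))  # → [] when all documented
```

The point of the sketch is that each category must be answered with disclosed information, not with a proof of model performance.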
Of note, on Dec. 19, 2025, HHS and ASTP/ONC jointly released a Request for Information (RFI) seeking broad public input on how HHS can accelerate the adoption and use of AI in clinical care. According to ASTP/ONC, responses will inform future approaches to regulation, reimbursement, and research and development (R&D) for clinical AI.
CDC
On March 12, 2026, the CDC released Considerations for Agentic Research in Public Health, providing guidance on the use of “deep research,” an agentic AI capability that autonomously plans and executes multi-step research tasks. The resource is designed to help state, tribal, local, and territorial public health agencies use these tools to support evidence-based decision-making, improve efficiency, and accelerate early-stage research and planning, while emphasizing the importance of human oversight and clearly defined use cases. The guidance complements CDC’s broader generative AI considerations.
On March 13, 2026, the Centers for Disease Control and Prevention (CDC) released its first agency-wide artificial intelligence (AI) strategy, outlining how the agency plans to use AI to advance public health while maintaining appropriate governance and public trust. The strategy is organized around four pillars: accelerating responsible AI adoption across public health, strengthening AI governance and trust, advancing AI capabilities across CDC’s enterprise data platforms, and building an AI-ready workforce to support innovation. The strategy is intended to guide CDC’s internal use of AI and inform collaboration with public health partners nationwide.
