Federal AI Legislation and Regulation
This page summarizes pertinent federal laws, regulations and guidance impacting the use of artificial intelligence (AI) in healthcare. Below are summaries of laws, rules and guidance from the U.S. Congress, the White House, the National Institute of Standards and Technology (NIST), the U.S. Department of Health and Human Services (HHS), the Centers for Medicare & Medicaid Services (CMS), the U.S. Food and Drug Administration (FDA) and the Federal Trade Commission (FTC). For more detailed analyses of these and similar laws and emerging issues, see our linked thought leadership page and consult our team of Holland & Knight attorneys.
Congress
Despite previous challenges, key congressional leaders remain optimistic about achieving more robust bipartisan collaboration on AI in 2025. However, enacting comprehensive AI legislation with broad applicability to the healthcare industry will be politically challenging and will likely be influenced by Trump Administration leadership. With the 119th Congress now fully Republican-controlled, the landscape, scope and feasibility of future AI legislation have shifted. This Congress will continue to focus on America First policies, innovation and transparency. Ultimately, for healthcare AI legislation to gain traction, it will likely need to be endorsed by members of the committees of jurisdiction in both chambers, attract significant co-sponsorship, and be limited in scope or use case.
In 2024, congressional activity related to AI intensified, evidenced by the introduction of more than 150 AI-related bills in the 118th Congress. Concurrently, members of Congress collaborated through U.S. Senate and U.S. House of Representatives AI caucuses and working groups to highlight industry-specific themes and provide recommendations. These groups released key AI reports in 2024 – the Senate AI Working Group's policy road map in May and the House Bipartisan AI Task Force's 253-page report in December – outlining AI's potential advantages and challenges across sectors. Though bipartisan, the reports reflect the political dynamics of a Republican-led House and Democratic-led Senate of the 118th Congress. When looking to the future, the impacts of Republican control in both chambers of Congress and the administration should not be underestimated.
White House Actions
On Jan. 23, 2025, President Donald Trump issued an executive order (EO) rescinding a Biden-era EO on AI. In October 2023, then-President Joe Biden signed EO 14110, which established the first set of standards for using AI in healthcare and other industries, calling for greater public oversight and regulation of AI. In compliance with that EO, HHS established an AI Safety Program to track harmful incidents involving AI in healthcare settings, created an AI Task Force and finalized a rule requiring transparency for AI incorporated into certain certified health information technology (IT).
Holland & Knight Insight: Executive Order: Removing Barriers to American Leadership in Artificial Intelligence (Jan. 23, 2025)
On April 3, 2025, the Office of Management and Budget (OMB) released two AI-focused memorandums, M-25-21 and M-25-22, with implications for HHS, including the FDA, CMS, the National Institutes of Health (NIH) and other subagencies. The memos are also important for developers of AI-enabled software and AI/machine learning tools with healthcare applications, as they provide substantial insight into the Trump Administration's positioning on AI applications broadly. The memorandums repeal and replace memorandums issued under the Biden Administration, though some key continuities remain, such as risk management for high-impact AI, an emphasis on fairness and ethics, and governance during the procurement of AI tools and products. However, M-25-21 and M-25-22 place greater emphasis on transparency, encouraging innovation and America First policies.
Holland & Knight Insight: Trump Administration Issues AI Memoranda and Executive Order with Government Contracts Impacts (April 21, 2025)
On July 23, 2025, the White House released "Winning the Race: America's AI Action Plan," outlining more than 90 federal actions to boost U.S. leadership in AI. The plan envisions a new era of innovation and progress, driven by three pillars: accelerating AI innovation, building national AI infrastructure and leading in global AI diplomacy and security. To implement this strategy, President Trump signed three EOs: one to promote the export of American AI technologies, another to fast-track permitting for data centers and a third to prohibit federal use of AI systems that incorporate diversity or climate-related criteria.
Holland & Knight Insight: America's AI Action Plan: What's In, What's Out, What's Next (July 25, 2025)
NIST
NIST released its AI Risk Management Framework (AI RMF 1.0/NIST AI 100-1) in January 2023. The National Artificial Intelligence Initiative (NAII) Act of 2020, Public Law 116-283, 15 U.S.C. § 9401 (AI Act), defines "artificial intelligence" as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments" and that uses machine- and human-based inputs to perceive virtual and real environments, uses those perceptions to create models through automated analysis, and formulates options for information or action using model inference. The purposes of the NAII created by the legislation include ensuring U.S. leadership in AI research and development, preparing the workforce to integrate AI systems across all sectors of the economy and society, coordinating research and development, and "lead[ing] the world in the development and use of trustworthy artificial intelligence systems in the public and private sectors."
The AI Act, in Section 5301, directs NIST to:
advance collaborative frameworks, standards, guidelines, and associated methods and techniques for AI
support the development of a risk-mitigation framework for deploying AI systems
support the development of technical standards and guidelines that promote trustworthy AI systems
support the development of technical standards and guidelines by which to test for bias in AI training data and applications
NIST has observed in the AI RMF 1.0 framework that, though AI technologies have potential to promote scientific advancements and economic growth, they also have unique risks that differ from traditional software. AI systems are "socio-technical in nature, meaning they are influenced by societal dynamics and human behavior." AI systems may lead to "inequitable or undesirable outcomes for individuals and communities" unless proper controls are in place. The core of NIST's framework for addressing AI risks focuses on four specific functions: Govern, Map, Measure and Manage.
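To make the four functions concrete, the following sketch (in Python, purely illustrative) shows one way a healthcare organization might structure an internal AI risk register around Govern, Map, Measure and Manage. The field names and the sepsis-model example are hypothetical assumptions for illustration, not a schema prescribed by NIST.

from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    # The four core functions of NIST AI RMF 1.0
    GOVERN = "govern"    # policies, accountability, risk culture
    MAP = "map"          # context, intended use, affected parties
    MEASURE = "measure"  # metrics, testing, bias evaluation
    MANAGE = "manage"    # prioritization, response, monitoring

@dataclass
class RiskEntry:
    # One tracked risk for an AI system (illustrative fields, not a NIST schema)
    system: str
    function: RmfFunction
    description: str
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example: a sepsis-prediction model deployed in a hospital
register = [
    RiskEntry(
        system="sepsis-predictor-v2",
        function=RmfFunction.MAP,
        description="Training data underrepresents pediatric patients.",
        mitigations=["Document intended population", "Flag out-of-scope use"],
    ),
    RiskEntry(
        system="sepsis-predictor-v2",
        function=RmfFunction.MEASURE,
        description="False-negative rate differs across demographic groups.",
        mitigations=["Stratified performance testing before each release"],
    ),
]

# Group risks by RMF function for a governance report
for fn in RmfFunction:
    matches = [r for r in register if r.function is fn]
    print(f"{fn.value}: {len(matches)} open risk(s)")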
In July 2024, NIST released NIST AI 600-1, a risk management framework focused on generative artificial intelligence (GenAI). This document is designed to be a "companion resource" to AI RMF 1.0. The framework discusses a number of risks that are unique to GenAI or that GenAI can exacerbate, including, among other things, "emotional entanglement" arising from humans anthropomorphizing GenAI systems. GenAI can also create data privacy risks because of its potential to deanonymize health, location or other sensitive data.
HHS: ASTP/ONC
The Office of the National Coordinator for Health Information Technology (ONC), now referred to as the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC), was first created in 2004 through EO 13335, signed by President George W. Bush, and was later authorized by the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. ASTP/ONC's duties include reviewing whether to endorse standards, implementation specifications and certification criteria for the electronic exchange of health information, and reviewing federal health IT investments to ensure that federal health technology programs meet the objectives of a federally mandated strategic plan.
Under a final rule that became effective on Feb. 8, 2024, healthcare entities using ONC-certified health IT must follow certain transparency requirements for AI algorithms. Users must be given access to consistent information regarding the algorithms so that they can assess these tools for the characteristics referred to by the acronym "FAVES," which stands for "fairness, appropriateness, validity, effectiveness and safety." Decision Support Interventions (DSI), including Predictive DSI, must meet certification criteria that require transparency about how the AI models support decision-making in healthcare and how they are developed. ONC-certified health IT using AI must also ensure that AI tools can effectively exchange electronic health information (EHI) while still maintaining data security and privacy.
Related Links: Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency and Information Sharing (HTI-1) Final Rule
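As a purely hypothetical illustration of what algorithm transparency of this kind can look like in practice, the Python sketch below models a Predictive DSI's disclosed attributes as structured metadata that a certified health IT module could surface to users. The attribute names paraphrase the FAVES themes and are our assumptions, not the rule's certification criteria verbatim.

# Hypothetical "source attribute" record for a Predictive DSI, loosely
# modeled on the FAVES transparency themes; field names are illustrative
# assumptions, not regulatory text from the HTI-1 Final Rule.
predictive_dsi_attributes = {
    "name": "readmission-risk-model",  # hypothetical model
    "developer": "Example Health IT Vendor",
    "intended_use": "30-day readmission risk scoring for adult inpatients",
    "input_features": ["age", "diagnosis_codes", "prior_admissions"],
    "training_data": "De-identified claims data, 2018-2023",
    "fairness_assessment": "Performance stratified by age, sex and race",
    "validity_evidence": "External validation at two health systems",
    "known_limitations": ["Not validated for pediatric patients"],
    "update_cadence": "Quarterly retraining with drift monitoring",
}

def transparency_summary(attrs: dict) -> str:
    # Render a plain-text summary that a clinician-facing tool could display
    return "\n".join(
        f"{key.replace('_', ' ').title()}: {value}" for key, value in attrs.items()
    )

print(transparency_summary(predictive_dsi_attributes))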
CMS
Beginning in 2023, CMS took several regulatory actions addressing the use of AI, most notably in the context of the prior authorization process conducted by Medicare Advantage Organizations (MAOs). A summary of pertinent CMS actions follows:
Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program Final Rule
Date: April 12, 2023
Applies to: Health Plans, Utilization Review (UR) Companies
Requirement: Requires MAOs to ensure that they are making medical necessity determinations based on the circumstances of the specific individual, as outlined at 42 C.F.R. § 422.101(c), "as opposed to using an algorithm or software that doesn't account for an individual's circumstances." CMS also opined that any use of AI in healthcare, including in UR, must adhere to the Health Insurance Portability and Accountability Act (HIPAA) and that any use of AI should ensure fair and equitable decision-making, as well as mechanisms to review and contest AI-generated decisions.
Effective Date: Jan. 1, 2024
Holland & Knight Insight: Regulation of AI in Healthcare Utilization Management and Prior Authorization Increases (Oct. 31, 2024)
Interoperability and Prior Authorization Final Rule
Date: Jan. 17, 2024
Applies to: Health Plans, UR Companies
Requirement: Requires MAOs to utilize healthcare providers to render final prior authorization (PA) decisions; permits the use of AI provided the algorithm accounts for the individual's clinical conditions. CMS clarified permissible uses of AI in a subsequent FAQ issued on Feb. 6, 2024.
Effective Date: Jan. 1, 2025
Holland & Knight Insight: Regulation of AI in Healthcare Utilization Management and Prior Authorization Increases (Oct. 31, 2024)
Citation(s): 42 C.F.R. § 422.138
Section 1557 Regulations
Date: May 6, 2024
Applies to: All entities receiving HHS funding
Requirement: Section 1557 of the Affordable Care Act prohibits discrimination by nearly all healthcare providers in health programs or activities receiving federal financial assistance. On May 6, 2024, HHS issued final rules implementing Section 1557. Per the Final Rule, covered entities cannot discriminate through the use of "patient care decision support tools," which include automated decision systems and AI used to support clinical decision-making in their health programs or activities.
Effective Date: Jan. 1, 2025
Holland & Knight Insight: OCR Shores Up Access to Healthcare with Nondiscrimination Protections (Dec. 18, 2024)
Citation(s): 45 C.F.R. § 92.210
FDA
The FDA does not regulate AI as such. Rather, it regulates software and digital health products that incorporate AI according to their intended use, including as medical devices, general wellness products and tools used in drug development. The FDA appreciates the potential impact AI can have on the healthcare industry and is updating its guidance to reflect changes in technology. For example, the FDA published draft guidance, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products," which discusses its current thinking on the use of AI to produce information or data intended to support regulatory decision-making regarding the safety, effectiveness or quality of drugs. Due to the change in administration and new leadership at HHS and the FDA, it is unclear whether this draft guidance will be finalized in its current form. Holland & Knight is monitoring changes to the FDA's guidance and regulations related to digital healthcare and AI.
As a general matter, under the 21st Century Cures Act, Congress amended the Federal Food, Drug, and Cosmetic Act to exclude certain software functions from the definition of "device." Digital health products unrelated to drug development generally fall into three categories: general wellness products, medical devices subject to enforcement discretion and regulated medical devices.
Medical devices. A medical device is "an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory" that is, among other criteria, "intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease in man." The FDA reviews medical devices through an appropriate premarket pathway, such as a Premarket Notification (510(k)), De Novo Classification Request or Premarket Approval. A number of digital health medical device applications can fall under the FDA's enforcement discretion, including some mobile medical applications. Digital health medical devices must generally comply with the FDA's medical device requirements, including registration and listing, quality systems, adverse event reporting and recalls.
General wellness products. General wellness products are not regulated as medical devices if they 1) are intended only for general wellness use and 2) present a low risk to the safety of users and other persons. Both factors must be met to market a product as a general wellness product. A number of digital health products can fall within the definition and regulatory oversight of a general wellness product.
Drug development. Digital health technologies, such as electronic sensors, computing platforms and IT, provide new opportunities to obtain clinical trial data directly from patients and, therefore, assist drug sponsors in drug development activities. Under the Food and Drug Omnibus Reform Act of 2022, the FDA is required to continue to publish guidance on the modernization of clinical trials through decentralized clinical trials and digital health technologies.
Holland & Knight Insights
Podcast - Changes in FDA, Cannabis Policies and AI Developments (May 13, 2024)
Podcast - A Look Into the FDA and USDA Regulatory Regimes (Nov. 16, 2023)
Podcast - A Post-Election Checkup: FDA Policy and Regulation (Dec. 21, 2022)
FDA's Oversight of Digital Health Products and Medical Software (July 14, 2021)
FTC
The FTC is responsible for enforcing federal laws relating to unfair or deceptive business practices and unfair methods of competition, including violations of antitrust laws. In the healthcare setting, the FTC can, for example, take enforcement action against activities such as misuse of protected health information, anticompetitive conduct and deceptive advertising practices. The FTC has not yet taken any direct action using its enforcement authority under the FTC Act to address the use or misuse of AI in the healthcare setting. The FTC has, however, brought enforcement actions against several non-healthcare companies over the improper marketing of AI-powered products and services. Specifically, as part of Operation AI Comply, the FTC on Sept. 25, 2024, announced cases against five companies for engaging in deceptive advertising regarding their AI products. The FTC has separately ordered companies to delete algorithms developed with improperly obtained data and prohibited a company from making false and unsubstantiated representations regarding the accuracy or efficacy of its AI-powered facial recognition technology.
The key takeaways from these actions are that the FTC can take enforcement action against marketers of AI tools who 1) make unsubstantiated claims about their product capabilities, 2) provide an AI product that enables users to engage in deceptive advertising or otherwise engage in prohibited trade practices (such as algorithmic discrimination), and/or 3) train their AI tools utilizing data that is illicitly or illegally obtained. Organizations should follow the FTC's future enforcement approach and actions to stay ahead of risk and maintain best practices.
Moreover, though previous enforcement actions are informative, it is unclear how the FTC will approach these issues under the Trump Administration. The FTC's chair is designated by the sitting president, and no more than three of its five commissioners may be from the same political party. FTC Chair Andrew Ferguson, in his previous role as a commissioner, stressed the importance of "strik[ing] a careful and prudent balance" in the FTC's regulation of AI, indicating that he may take a lighter approach to using the FTC's enforcement authority against AI technologies.
Holland & Knight Insights
The FTC Is Regulating AI: A Comprehensive Analysis (July 25, 2023)
Podcast - An FTC Official Speaks About the Regulation of AI Technology (May 1, 2024)
Podcast - Part 2: An FTC Official Speaks About the Regulation of AI Technology (June 12, 2024)
DISCLAIMER
The information presented herein is updated as of May 1, 2025.
This site does not constitute legal advice or establish an attorney-client relationship between you and Holland & Knight. You should consult a licensed attorney to assess and evaluate the contents of this website to verify their accuracy and their applicability to you or your affiliates. By accessing this site, you accept that this tool is not a replacement for legal counsel.