What Does the U.S. Government Actually Say About AI?

Executive Order 14110 directs federal agencies on AI governance and requires government contractors to meet safety standards. It does not directly regulate private sector AI use.

On October 30, 2023, the Biden Administration issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This isn't a law passed by Congress; it's a directive to the executive branch. It tells federal agencies how to govern AI systems and requires companies with government contracts to meet safety and security standards.

The order created binding reporting requirements for companies building frontier AI models, invoking Defense Production Act authority, and led NIST to stand up the U.S. AI Safety Institute. Here's what gets missed: the order doesn't regulate private AI use directly. It creates federal agency requirements that affect contractors and vendors.

The Office of Management and Budget followed up with memoranda requiring federal agencies to assess and manage AI risks in their systems. Starting in 2026, the General Services Administration will enforce AI governance clauses in federal contracts. If you sell services to government agencies, you'll need to demonstrate AI governance.

Why Does the NIST AI Risk Management Framework Matter for Organizations?

NIST AI RMF is non-binding guidance adopted by federal agencies as their standard. Private enterprises use it to demonstrate structured AI risk governance to regulators.

The NIST AI Risk Management Framework was finalized on January 26, 2023. It's not binding. It's not a law. But adoption has been rapid.

The framework has four core functions: GOVERN (establish policies and accountability), MAP (identify context and risks), MEASURE (assess and monitor risks), and MANAGE (prioritize and respond). It applies across all industries and types of AI systems.
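To make the four functions concrete, here is a minimal sketch of an AI risk register organized around them. This is illustrative only: the function names come from the framework, but the field names, one-line summaries, and the example entry are assumptions, not official NIST artifacts.

```python
from dataclasses import dataclass

# Function names are from NIST AI RMF 1.0; the summaries are paraphrases
# for illustration, not the framework's official text.
RMF_FUNCTIONS = {
    "GOVERN": "Establish policies, roles, and accountability for AI risk.",
    "MAP": "Identify each system's context, intended use, and potential risks.",
    "MEASURE": "Assess and track identified risks with defined metrics.",
    "MANAGE": "Prioritize risks and respond: mitigate, transfer, accept, or avoid.",
}

@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register, keyed to an RMF function."""
    function: str         # one of RMF_FUNCTIONS
    system: str           # which AI system the entry covers
    description: str      # the risk or control being tracked
    owner: str            # who is accountable
    status: str = "open"  # open / mitigated / accepted

    def __post_init__(self):
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {self.function}")

# Example: a fairness risk surfaced while mapping a hiring model.
entry = RiskEntry(
    function="MAP",
    system="resume-screening-model-v2",
    description="Training data underrepresents applicants over 55.",
    owner="ML governance lead",
)
print(entry)
```

Even a register this simple forces the discipline the framework is after: every risk gets a named owner, a lifecycle status, and a place in the governance structure.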

Federal agencies picked it as their AI governance standard. Private enterprises in finance, healthcare, and defense use it to show regulators they have structured AI risk management. It has become the common language between compliance and innovation, even though it remains voluntary.

The framework demands transparency and documented processes. If you implement NIST AI RMF, you can show regulators your governance structure when asked. If you skip it, scrutiny gets worse when problems emerge.

How Are Different Industries Regulated on AI?

The U.S. regulates AI through existing agencies: SEC for disclosure, FDA for devices, FTC for consumers, CFPB for lending. No standalone federal AI law exists.

The U.S. doesn't have a single, comprehensive AI law. Enforcement happens through existing agencies and statutes. Here's where regulators are actually active:

Financial Services (SEC, CFPB, Federal Reserve). Focus: disclosure, fair lending, systemic risk. Companies must disclose material AI risks in 10-K/10-Q filings. Banks must explain AI-driven lending decisions to applicants and demonstrate fair-lending compliance under ECOA.

Healthcare & Medical Devices (FDA). Focus: algorithm safety, effectiveness, change protocols. Medical device manufacturers must demonstrate AI/ML algorithm safety and effectiveness, validate algorithm changes, and continuously monitor performance post-market.

Employment & Hiring (EEOC, state labor boards, New York City and New York State). Focus: algorithmic bias, discrimination, transparency. Hiring AI systems must not cause disparate impact. New York City requires independent bias audits for hiring AI tools (Local Law 144; see below). Several other states are watching.

Consumer Protection (FTC). Focus: unfair or deceptive practices, bias. The FTC actively enforces against companies using biased algorithms in lending, housing, and hiring under FTC Act § 5 (unfair or deceptive acts and practices).

Copyright & Content (U.S. Copyright Office). Focus: authorship, training data rights. AI-generated content is not automatically copyrightable; it requires sufficient human authorship. The fair use doctrine is still developing with respect to AI training data.

This works when your AI use case fits one domain. Most don't. A company using AI for customer service might touch lending, employment, and consumer-protection rules all at once. Figuring out which rules apply is messy.

What Are California and New York Actually Requiring?

California enforces AI disclosure in hiring despite rejecting broad 2024 regulation. New York City requires independent bias audits for hiring AI tools.

State-level AI regulation remains uneven, but two states are leading.

California has passed several AI-adjacent laws without a comprehensive AI regulatory statute. The state requires employers to disclose when they're using AI in hiring decisions, and the California Consumer Privacy Act (CCPA) applies to data collected for AI model training. In 2024, California considered SB 1047, which would have required AI developers to conduct safety audits before deploying large models and would have created a "duty to warn" about risks. Governor Newsom vetoed the bill, citing concerns about compliance burden and the risk of chilling innovation. Some provisions are likely to reappear in future legislation.

New York City and New York State have been more aggressive. Local Law 144 (in effect since January 1, 2023, with enforcement beginning July 5, 2023) requires employers using automated employment decision tools (hiring AI systems) to obtain an independent bias audit conducted no more than one year before the tool's use, effectively an annual requirement. The audit must assess whether the tool creates disparate impact for protected classes. Non-compliance carries civil penalties of $500 for a first violation and up to $1,500 for each subsequent one, with each day of non-compliance counted separately.
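The core arithmetic of a bias audit is simpler than the compliance machinery around it. Below is a minimal sketch of the selection-rate and impact-ratio calculation these audits center on; the counts are invented, and a real audit has independence, scope, and publication requirements this ignores.

```python
# (group) -> (number selected, number of applicants); invented numbers.
outcomes = {
    "group_a": (48, 120),
    "group_b": (30, 100),
    "group_c": (9, 60),
}

# Selection rate per group, then impact ratio relative to the
# highest-rate group.
selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in sorted(selection_rates.items()):
    impact_ratio = rate / best_rate
    # EEOC's "four-fifths rule" treats ratios below 0.8 as potential
    # evidence of disparate impact.
    flag = "  <-- review" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")
```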

New York State has also proposed broader AI transparency laws and is considering requirements for AI explainability in high-stakes decisions. The state's approach signals that other cities and states may follow.

The Policy Puzzle: Why No Comprehensive Federal AI Law Yet?

Congress hasn't passed a standalone AI law as of March 2026. The reasons are structural: AI touches every sector, so no single committee owns it. Tech companies are split: startups fear regulation they can't afford, while incumbents can absorb compliance costs. And there's real uncertainty about whether strict rules would kill innovation.

The realistic path: sectoral rules tied to specific domains like banking, healthcare, and employment. The FTC, SEC, and FDA are already using existing laws to enforce against discriminatory AI. This incremental approach will probably continue for two to three years. If something catastrophic happens, such as mass job displacement or an AI-induced financial crisis, Congress will move faster. Until then, we're looking at a patchwork of executive orders, agency guidance, and state experiments.

How Should Organizations Navigate This Landscape?

Audit your AI use case against sector rules. Adopt NIST AI RMF now for governance. Watch California and New York for emerging requirements.

If you're deploying AI in your organization, here's what the current policy environment means for you:

Audit your AI use case against sector-specific rules. Is your AI system touching lending, hiring, healthcare, or government contracts? If so, assume there are regulatory requirements and compliance costs. The rule of thumb: if an AI decision affects someone's financial, employment, or health status, document it and be prepared to explain it to regulators.
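What might "document it" look like in practice? Here is a minimal sketch of an append-only decision log, assuming the rule of thumb above; the field names and file format are illustrative, not drawn from any regulation.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system, domain, decision, inputs_summary, human_reviewed):
    """Record one AI-driven decision that affects a person's financial,
    employment, or health status. All fields are illustrative."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # model name and version
        "domain": domain,                  # lending / hiring / healthcare / gov-contract
        "decision": decision,              # the outcome produced
        "inputs_summary": inputs_summary,  # what the model saw, in auditable form
        "human_reviewed": human_reviewed,  # was a person in the loop?
    }
    # Append-only JSON lines: cheap, greppable, easy to hand to an auditor.
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_ai_decision(
    system="credit-limit-model-v3",
    domain="lending",
    decision="limit_increase_denied",
    inputs_summary={"features_used": 14, "top_factor": "debt_to_income"},
    human_reviewed=False,
)
```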

Adopt NIST AI RMF or a similar framework now, not later. Regulators expect organizations to have governance. An organization without a documented AI risk assessment process looks like it's cutting corners. The NIST framework is freely available and well-understood by regulators. It's your insurance policy against future enforcement action.

Watch New York and California closely. If a requirement becomes standard practice in high-population states, federal regulators notice. Bias audits, for example, went from "New York City experiment" to industry expectation in 18 months. Understanding what early-adopter states require helps you anticipate what federal or other state agencies will eventually mandate.

Expect federal rules to tighten within the next 18 months. The Biden Executive Order is over two years old, and the OMB memoranda enforcing it on federal agencies have been in effect since early 2024. Agencies are still ramping up enforcement. The FTC, SEC, and CFPB are actively investigating companies with AI systems. Private enterprises that wait for "final rules" before acting are taking a bet. Those that implement governance now have a head start.

What's Missing from U.S. AI Policy Right Now?

Major gaps remain: no AI copyright rules, no federal transparency requirements, no antitrust framework for AI, and no workforce transition assistance for displaced workers.

The U.S. policy landscape is reactive rather than proactive. Regulation happens around specific harms (algorithmic discrimination, medical device failures) rather than creating a comprehensive framework upfront. This leaves gaps:

No copyright or AI data ownership rules. The Copyright Office has stated that purely AI-generated content is not copyrightable, but questions about training data remain unsettled: Did companies get permission? Do they owe royalties to creators? This is a major unresolved issue affecting every AI company.

No transparency requirements at the federal level. The EO requires some reporting to the government, but there's no requirement for companies to disclose AI use to customers or users. People often don't know an AI system is making decisions about them.

No antitrust framework specific to AI. The FTC is looking at whether large tech companies are using AI to entrench market dominance, but there are no formal rules yet.

No workforce transition assistance tied to AI. As AI displaces workers in certain roles, there's no federal retraining program or wage insurance. This gap may intensify political pressure for more aggressive AI regulation.

Frequently Asked Questions About U.S. AI Policy

Is there a federal AI law in the United States?

No standalone federal AI law exists as of 2026. AI is regulated through existing sector-specific agencies and laws — SEC, FDA, FTC, and CFPB — plus Executive Order 14110, which covers federal agencies and contractors but not the private sector broadly.

What is the NIST AI Risk Management Framework?

The NIST AI RMF is a voluntary framework published in January 2023 that provides structured guidance on identifying, assessing, and managing AI risks. Federal agencies use it as their governance standard, and private enterprises adopt it to demonstrate responsible AI management to regulators.

Which states have the strictest AI regulations?

New York City has enacted the most specific requirement: Local Law 144 requires employers using AI in hiring to obtain an independent bias audit conducted no more than one year before the tool's use. California has passed several AI-adjacent laws around hiring disclosure and privacy, though broader AI regulation bills like SB 1047 have been vetoed.

Do companies need to disclose when they use AI?

At the federal level, there is no general AI disclosure requirement for consumers. However, SEC-regulated companies must disclose material AI risks in financial filings, and employers in New York City must disclose AI use in hiring. Federal transparency requirements are a known gap in current policy.
