How the U.S. is stitching together laws, standards, and enforcement for an era of machine intelligence
By Granite State Report
Introduction: A regulatory race with no finish line
Artificial intelligence is not a single technology; it’s a stack of chips, models, data, and applications interlaced with people and institutions. That stack keeps shifting under our feet, and America’s rules are trying to keep pace without crushing innovation. In practice, U.S. “AI regulation” isn’t a single statute. It’s a patchwork: an Executive Order that sets federal guardrails and homework for agencies; sector regulators applying existing laws to novel harms; early state statutes experimenting with risk regimes; and voluntary standards that companies increasingly treat like obligations. The near future will be shaped less by one sweeping federal bill and more by enforcement plus standards, with Congress deciding whether to add durable scaffolding.
This report maps that living system—what exists, what’s coming, and what to watch in 2025–2026—so builders, buyers, public officials, and citizens can navigate what’s real and what’s hype.
1) The federal backbone: EO 14110 and the NIST safety infrastructure
In October 2023, the White House issued Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” a dense directive that mobilized agencies, leaned on national security authorities, and deputized NIST (the National Institute of Standards and Technology) to anchor testing, red-teaming, watermarking research, and evaluation norms. The EO doesn’t read like a traditional law; it sets timelines and tasks across government, marshaling existing authorities to push industry toward safer practices. (The White House)
NIST’s response is now the de facto spine for U.S. AI governance. After releasing the AI Risk Management Framework (AI RMF 1.0) in 2023 (a voluntary risk framework for trustworthy AI), NIST stood up the AI Safety Institute and a 200+ member AI Safety Institute Consortium (AISIC) to scale evaluations and standardize tests for generative models. In July 2024 NIST added a Generative AI Profile to the RMF, and has continued issuing companion documents (like synthetic content transparency overviews). That means even before Congress acts, agencies and companies have a shared language for risk, testing, documentation, incident reporting, and continuous improvement. (NIST)
Why it matters: The RMF is voluntary, but it’s quickly becoming table stakes. Procurement officers, auditors, and litigators are already asking: Did you follow NIST? Expect that pressure to intensify as states adopt “recognized risk frameworks” as safe harbors (Colorado explicitly points in this direction), and as federal guidance like OMB’s 2024 memo requires agencies to set up governance, inventories, and safeguards for the AI they deploy. (Colorado General Assembly)
[Figure: NIST AI RMF: the common vocabulary behind U.S. AI governance. (NIST)]
2) Enforcement-first regulation: how agencies apply old laws to new AI
While Congress debates, the federal enforcement machine is already active. Three fronts stand out:
Antitrust and “the AI stack”
The FTC and DOJ are scrutinizing AI partnerships, GPU access, and cloud leverage across the stack—from chips to foundation models to distribution—signaling they’ll use existing competition law rather than wait for bespoke AI statutes. In January 2025, the FTC summarized structural risks around Big Tech–model tie-ups (Microsoft/OpenAI, Amazon/Anthropic, Google/Anthropic), and in 2024 the FTC and DOJ split investigative turf across Nvidia (DOJ) and OpenAI/Microsoft (FTC). The point: control of compute, capital, and data cannot quietly tilt the market. (Federal Trade Commission)
FTC Chair Lina Khan has been explicit: the agency is watching for coercive terms up and down the AI stack and will not reprise a “hands-off” Web 2.0 posture. Read her January 2024 Tech Summit remarks and subsequent statements for the blueprint. (Federal Trade Commission)
Consumer protection and robocalls
In February 2024, the FCC ruled that AI-generated voices are “artificial” under the Telephone Consumer Protection Act, making AI robocalls illegal without consent. The ruling followed a high-profile incident in New Hampshire in which a cloned presidential voice targeted voters. This is how AI “regulation” already bites today: by applying decades-old consumer protection law to new deception vectors. (Federal Communications Commission)
Civil rights and employment
The EEOC, CFPB, and DOJ jointly reaffirmed they’ll enforce discrimination and consumer protection laws against harmful automated systems. The EEOC’s technical assistance on Title VII and automated selection tools (resume screening, chatbots, tests) puts employers on notice: document your impact testing, mitigate adverse impact, and keep humans in the loop where required. In health care, HHS/OCR has clarified that nondiscrimination rules under Section 1557 apply to AI—translation: if your triage model or machine translation creates discriminatory outcomes, expect scrutiny. (Federal Trade Commission)
Related YouTube/Videos:
- NIST: Introduction to the AI RMF (explainer) — helpful overview for compliance teams. (https://www.nist.gov/video/introduction-nist-ai-risk-management-framework-ai-rmf-10-explainer-video) (NIST)
- FTC Tech Summit (Jan 25, 2024) — competition + consumer protection across the AI stack. (Hosted video page) (https://www.ftc.gov/media/ftc-tech-summit-january-25-2024) (Federal Trade Commission)
3) Congress: a roadmap without the road—yet
Congress has not passed a comprehensive AI law. However, there’s a bipartisan Senate AI Working Group led by Sen. Schumer with Sens. Rounds, Young, and Heinrich that released a policy roadmap in May 2024. It calls for major investments (the oft-cited $32B/year for non-defense AI R&D), clarity on content provenance/watermarking, civil rights protections, testing infrastructure at NIST, and committee-by-committee work on sectoral risks. In short: fund innovation, police the harms, and use committees to do the heavy lifting. (Mayer Brown)
On political deepfakes, the FEC moved (slowly) through an interpretive process in 2024 on deceptive AI in campaign ads, while states have rushed ahead (more below). Federal privacy remains the missing piece: the American Privacy Rights Act (and prior ADPPA) offered a baseline privacy scaffold that would indirectly rein in some AI abuses—but it’s not law. Expect privacy to be the bellwether for any comprehensive digital governance deal. (Federal Register)
Related YouTube:
- PBS NewsHour: Schumer remarks after an AI “Insight Forum” (archives point to the YouTube stream). (https://www.wpbstv.org/watch-live-senate-majority-leader-schumer-gives-remarks-after-ai-insight-forum/) (wpbstv.org)
4) The states: laboratories for “high-risk AI” rules and deepfake laws
States are moving fastest on two fronts: risk-based regulation of “high-risk” systems and deepfake/political content controls.
- Colorado’s Artificial Intelligence Act (SB24-205) (effective 2026) is the first comprehensive, risk-based AI law in the U.S. It imposes duties on developers and deployers of high-risk systems to use “reasonable care” to prevent algorithmic discrimination, mandates impact assessments and disclosures, and offers a kind of safe harbor if you follow a recognized AI risk framework (read: NIST). Compliance dates, definitions, and the attorney general’s role make it a template other states are studying. (Colorado General Assembly)
- Connecticut passed a wide-ranging AI bill in 2024 focused on bias and transparency; its Office of Legislative Research summary is a useful primer on how states define “high-risk” and allocate duties across developers vs. deployers, with phased obligations starting July 1, 2025. (Connecticut General Assembly)
- Utah has narrowed and refined disclosure obligations—especially for generative AI in consumer interactions—offering an early model for targeted transparency and chatbot disclaimers, including special duties for mental health chatbots. (Perkins Coie)
- Deepfakes in elections: California signed multiple measures to counter deceptive election deepfakes in 2024; at least two dozen states now regulate AI-manipulated political content in some form, though First Amendment challenges are testing the edges (a federal judge blocked a California provision pending litigation). The trend line is clear: more disclosure requirements, more takedown rules near elections, and more court fights over speech. (Governor of California)
[Figure: State AI laws, source: NCSL. Caption: “State deepfake laws are multiplying, with courts testing the boundaries.”]
5) Copyright, IP, and provenance: drawing the line around “authorship”
The line from “prompting” to “authorship” is a legal live wire. The U.S. Copyright Office made clear in March 2023—and has reiterated since—that purely AI-generated works lack human authorship and therefore aren’t copyrightable, though human creativity layered on top of AI assistance can qualify. Courts are still charting edge cases, and petitioners such as Thaler have sought to push the question up to the Supreme Court. In parallel, provenance and watermarking commitments (voluntary, so far) aim to make synthetic media detectable without legislating authorship. Expect Congress to revisit training data transparency and remuneration, but there’s no consensus yet. (Federal Register)
On the patent side, USPTO guidance in April 2024 warned practitioners to disclose meaningful AI contributions to inventions and to verify AI-drafted materials—an IP hygiene rule that doubles as a regulatory nudge for law and R&D departments to get their AI governance in order. (Reuters)
6) Elections, integrity, and national security: the “year of the synthetic voter”
Election integrity won’t wait for perfect laws. The FCC move against AI-voice robocalls shows how existing law can check abuse fast; states are layering in disclosures and liability around deceptive political media; and the FEC is inching toward a federal rule on deceptive AI campaign ads. Meanwhile, campaigns themselves are experimenting: parties and PACs are using synthetic media with disclaimers—testing the boundary between persuasion and manipulation. The next step is provenance infrastructure that rides along the content itself (C2PA-like approaches), potentially reinforced by federal procurement and platform policies. (Federal Communications Commission)
National security and critical infrastructure angles are also converging with AI governance. The EO tasked Commerce, Energy, DHS, and others with model red-teaming against biological, cyber, and critical infrastructure risks; NIST and the AI Safety Institute are the testing hubs; and the Senate roadmap calls for funding this test and evaluation capacity at scale. Expect 2025–2026 to bring more incident reporting norms and adversarial testing obligations for frontier models—likely via standards first, enforced through procurement and enforcement later. (The White House)
7) The global angle: the EU’s AI Act as a gravitational field
The EU AI Act entered into force on August 1, 2024 with phased obligations through 2026–2027 and beyond. Even without a comparable U.S. statute, the EU’s risk-based regime—bans on “unacceptable risk,” strict duties for high-risk AI, transparency/technical documentation for general-purpose models, and special rules for systemic-risk models—will shape U.S. companies’ compliance playbooks. As with GDPR, many multinationals will build to the strictest common denominator, then use NIST RMF to align U.S. risk management artifacts with EU conformity assessments. (European Commission)
[Figure: EU AI Act implementation timeline, source: artificialintelligenceact.eu]
8) From “voluntary” to “verifiable”: the rise of management-system standards
Another quiet revolution: management-system standards for AI. ISO/IEC 42001 (2023) defines what an AI management system looks like (policies, objectives, controls, continuous improvement). It’s not law, but it gives boards, auditors, and customers a way to ask: Show me your AIMS. As certification markets form (and federal procurement references them), “voluntary” starts to feel mandatory. Combine 42001 with NIST RMF artifacts and sector guidance (HIPAA/OCR, EEOC, etc.), and you have a corporate operating system for AI governance. (iso.org)
IEEE’s ethically aligned design work and the P7000 series continue to fill design-time gaps (values analysis, transparency, data governance). Standards aren’t substitutes for rights and remedies, but in AI’s fast cycles they’re the connective tissue between principle and practice. (IEEE Standards Association)
9) What the next 18–24 months likely look like
1) Enforcement expands before Congress acts.
Expect more FTC/DOJ actions around exclusivity, data sharing, cloud/compute tying, and acquisitions in the model ecosystem; FTC and state AGs will target unfair or deceptive AI claims (including “AI-washing”); EEOC/OCR/CFPB will treat AI harms as ordinary lawbreaking rather than exotic edge cases. Watch for remedies that require deleting models trained on unlawfully obtained data, not just the data—a remedy Khan has flagged. (The Wall Street Journal)
2) States iterate on Colorado/Connecticut patterns.
More states will adopt high-risk AI regimes with duties for developers/deployers, impact assessments, disclosures to consumers, and AG enforcement—often referencing NIST RMF for safe harbor. Political deepfake rules will proliferate, with courts pruning overbroad provisions. (Colorado General Assembly)
3) Federal procurement and OMB governance become leverage points.
OMB’s M-24-10 makes every agency stand up governance, inventory models, and mitigate risk for “safety-impacting” systems. That will cascade into RFPs and vendor requirements—functionally raising the national floor for AI risk management without new legislation. (White House)
4) Content authenticity and provenance harden.
Between FCC enforcement, platform policy, and NIST’s transparency work on synthetic content, expect more durable provenance signals (e.g., C2PA) and more visible disclosures for synthetic media in elections and consumer contexts. The Senate roadmap keeps this in view; the FEC process will grind forward. (Regulations.gov)
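To make the provenance idea concrete, here is a deliberately simplified sketch of what a C2PA-style content credential does: bind a cryptographic hash of the media to a set of assertions (who generated it, whether AI was used) and sign the bundle so tampering is detectable. This is an illustration only, assuming a shared HMAC key as a stand-in for the X.509 certificate chains and in-file manifest embedding that real C2PA uses; none of the names below come from the actual C2PA SDK.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA uses certificate-based signatures.
SIGNING_KEY = b"demo-key"

def make_manifest(media_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a signed provenance record binding assertions to a content hash."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claim_generator": generator,
        "assertions": [{"label": "ai_generated", "value": ai_generated}],
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
    content_ok = claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return sig_ok and content_ok

media = b"synthetic ad frame bytes"
m = make_manifest(media, "ExampleGen/1.0", ai_generated=True)
print(verify_manifest(media, m))            # True: content and manifest intact
print(verify_manifest(b"edited bytes", m))  # False: content no longer matches
```

The design point is the same one the policy debate turns on: provenance rides along with the content, so any edit to the media or to the “this is AI-generated” assertion breaks verification.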
5) Copyright keeps simmering.
USCO guidance stands (no copyright for purely AI-generated works), courts work through hybrid authorship, and Congress flirts with training-data transparency. Don’t bet on a grand bargain in 2025; do bet on more guidance updates and case law. (Federal Register)
6) EU gravitational pull intensifies.
As EU AI Act obligations for GPAI and systemic-risk models kick in through 2025, large U.S. firms will adopt EU-grade documentation/testing—and reuse the machinery stateside. (Reuters)
10) What organizations should do now (practical checklist)
- Adopt NIST AI RMF artifacts (risk registers, evaluation plans, incident logging) and map them to ISO/IEC 42001 controls so you can show both operational discipline and auditability. This prepares you for state AG questions, federal procurement asks, and customer diligence. (NIST)
- Inventory and tier your use cases (e.g., safety-impacting vs. low-risk), require model cards/evals from vendors, and set red-teaming expectations proportionate to risk (security, toxicity, bio/cyber misuse). The EO/NIST guidance provides concrete menus for tests and reporting. (The White House)
- Stand up civil-rights guardrails: document disparate-impact testing for hiring, lending, benefits, and health triage; add human review and accessible appeal channels. Regulators already expect it. (EEOC)
- Mark and log synthetic media used in marketing and political/issue ads; include clear disclosures and retain provenance data. FCC, state AGs, and plaintiffs’ bars are watching. (Federal Communications Commission)
- Prepare for antitrust scrutiny if you control critical inputs (GPU clusters, proprietary datasets, user distribution channels) or if your contracts create effective exclusivity across the stack. Document pro-competitive rationales and avoid coercive tying. (Federal Trade Commission)
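The disparate-impact testing item above can be made concrete with the EEOC’s long-standing “four-fifths rule” screening heuristic: flag a selection procedure when any group’s selection rate falls below 80% of the highest group’s rate. The sketch below is a minimal illustration with made-up group labels and counts; real analyses pair this heuristic with statistical significance tests and larger samples, and the four-fifths rule is a screening threshold, not a legal safe harbor.

```python
# Adverse-impact screen using the EEOC four-fifths heuristic.
# outcomes maps group -> (selected, total_applicants); labels are illustrative.

def selection_rates(outcomes: dict) -> dict:
    """Compute each group's selection rate (selected / total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_screen(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return {group: (impact_ratio, flagged)} vs. the highest-rate group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # Impact ratio = group rate / highest group rate; flag ratios < threshold.
    return {g: (r / top, r / top < threshold) for g, r in rates.items()}

results = four_fifths_screen({
    "group_a": (48, 100),  # 48% selected (highest rate)
    "group_b": (30, 100),  # 30% selected -> impact ratio 0.625, flagged
})
for group, (ratio, flagged) in results.items():
    print(f"{group}: impact ratio {ratio:.3f}, flagged={flagged}")
```

Documenting a screen like this per use case, alongside mitigation steps and human-review checkpoints, is exactly the kind of artifact regulators and state AGs are starting to ask for.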
11) The thesis: America’s AI governance will be “standards-led, enforcement-fed”
If you’re waiting for a single U.S. “AI Act,” you may be waiting a while. The most realistic path is the one we are already on:
- Standards-led (NIST RMF + ISO/IEC 42001 + industry profiles)
- Enforcement-fed (FTC/DOJ/CFPB/EEOC/OCR/FCC using existing law)
- State-piloted (Colorado/Connecticut risk-based regimes + deepfake laws)
- Procurement-driven (OMB mandates that bake governance into agency purchasing)
- EU-influenced (American firms building to EU requirements and porting the practices home)
That mosaic won’t satisfy purists, but it is moving. The gap to watch is federal privacy and civil-rights updates tuned to algorithmic systems. When Congress eventually acts—whether via a comprehensive privacy law with algorithmic provisions, or sector-specific bills on elections, health, employment, and safety-critical AI—it will sit atop an already-maturing infrastructure of tests, disclosures, and management systems.
The future of AI regulation in America will look less like a single statute and more like reliable plumbing: shared definitions, test harnesses, audit trails, and escalation paths that let innovation flow without flooding the basement.
References (selected)
- Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), White House / Federal Register. https://www.whitehouse.gov & https://www.federalregister.gov (Texts and sections on NIST, safety, content provenance). (The White House)
- NIST AI Risk Management Framework (AI RMF 1.0) and Generative AI Profile (July 26, 2024), plus AI Safety Institute/AISIC materials. https://www.nist.gov/itl/ai-risk-management-framework ; https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute-consortium-aisic ; NIST AI 100-4/600-1 docs. (NIST)
- OMB Memorandum M-24-10 (Mar. 28, 2024): Advancing Governance, Innovation, and Risk Management for Agency Use of AI. https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf (White House)
- FTC staff report / press releases on AI partnerships (Jan. 17, 2025); DOJ/FTC split turf on AI antitrust inquiries (June 2024). https://www.ftc.gov ; coverage in The Verge. (Federal Trade Commission)
- FCC Declaratory Ruling: AI-generated voices deemed “artificial” under TCPA (Feb. 8, 2024); associated AP coverage of NH incident. https://www.fcc.gov ; AP News. (Federal Communications Commission)
- EEOC/CFPB/DOJ/FTC Joint Statements on AI and discrimination; EEOC Technical Assistance on algorithmic selection procedures (May 18, 2023). Agency docs and legal analyses. (Federal Trade Commission)
- Colorado SB24-205 (2024) and Connecticut SB 2 (2024) on high-risk AI obligations. State legislative texts and analyses. (Colorado General Assembly)
- Deepfake laws: CA measures (Sept. 17, 2024); NCSL summary (2024); litigation blocking portions (AP). https://www.gov.ca.gov ; NCSL; AP. (Governor of California)
- FEC materials on AI in campaign ads (Sept. 2024 release of draft docs; Federal Register docket). https://www.fec.gov ; https://www.federalregister.gov (FEC.gov)
- USCO guidance on AI-generated works and authorship; ongoing policy notes in 2024–2025 coverage. Copyright Office docs; The Verge summary. (Federal Register)
- ISO/IEC 42001 (2023): AI Management System (AIMS) requirements and certification context. ISO, IEC, NSF. (iso.org)
- EU AI Act: Commission press release (Aug. 1, 2024), implementation timelines/guides (Goodwin), and coverage of systemic-risk guidance (Reuters). (European Commission)
Notes on methodology
This report privileges primary sources (federal registers, agency memoranda, official state legislative pages) and reputable secondary analyses that summarize complex legal texts. Where there is policy movement or litigation, we name dates and cite underlying documents. We avoid speculative claims and identify areas still in flux (e.g., FEC rulemaking; federal privacy).
Bottom line
America’s AI regime is coalescing around standards and enforcement. The specifics will keep evolving, but the operational demands are already clear: risk management, documentation, testing, provenance, and accountability. If you can’t show your work, you don’t have governance. And without governance, you won’t have trust—whether from regulators, customers, or voters.