Tuesday, 3 March 2026

By Granite State Report


Executive Summary

New Hampshire didn’t wait for Congress to sort out artificial intelligence. In 2024 the state enacted three cornerstone laws that, together, criminalize fraudulent deepfakes, demand disclosures for deceptive AI in political advertising, and set the first statewide ground rules for how state agencies can use AI. Two additional bills—one sweeping criminal measure and one civil remedy bill—didn’t make it across the finish line but defined the edges of the debate. The upshot: Granite Staters now have real protections against AI-enabled deception, and state government has to govern its own use of AI with transparency, human review, and civil-rights guardrails. Expect enforcement to arrive through the Attorney General and election officials—and to be amplified by a high-profile FCC ruling that already made AI voice robocalls illegal without consent nationwide. (New Hampshire Government)


Why This Matters Now

In January 2024, a wave of AI-generated robocalls mimicking President Biden targeted New Hampshire voters ahead of the primary. State investigators traced the stunt, and it accelerated bipartisan appetite to legislate against synthetic political deception. Weeks later the Federal Communications Commission clarified that AI-generated voices count as “artificial” under the TCPA, giving federal and state enforcers a fast hook to act against such calls. That episode—and the legal tools it triggered—frame how New Hampshire wrote its laws and how they’ll operate in 2025–26. (doj.nh.gov)

AP News explainer on the FCC’s action against AI voice robocalls (context for NH voters): https://apnews.com/article/a8292b1371b3764916461f60660b93e6. (AP News)


The Core Statutes

1) HB 1432 (Chapter 243, effective Jan. 1, 2025): Criminalizing fraudulent deepfakes + private right of action

What it is. HB 1432 adds a new fraudulent use of deepfakes offense to the criminal code and—critically—creates a private claim so victims can sue. The measure targets synthetic media used to commit fraud or to cause specific harms, and it tightens accountability by barring lobbyist registration for individuals found to have fraudulently used deepfakes. The bill passed in July 2024 and took effect at the start of 2025. (New Hampshire Government)

Why it’s different. Lots of states have deepfake bills. Fewer combine criminal penalties with a civil right of action tied to fraudulent use. That pairing is practical: prosecutors can pursue egregious cases; private plaintiffs can move faster to stop harm or recover damages in less clear-cut but still harmful incidents. Legal analysts flagged New Hampshire’s approach as among the sharper state models of 2024. (Akin Gump)

What counts as a violation? The text centers on knowingly creating or distributing a deepfake of an identifiable person for fraudulent purposes or in specified contexts. Practically, the “knowingly” standard and the fraud element will push enforcers to document intent and impact—facts usually present in deceptive fundraising, impersonation, or commercial scams. (New Hampshire Government)

HB 1432 title block — source: NH General Court PDF. (New Hampshire Government)


2) HB 1596 (Chapter 345, effective Aug. 1, 2024): Disclosures for deceptive AI in political ads

What it is. HB 1596 amends election law (RSA 664) to require clear disclosure when political advertising uses deceptive synthetic media (audio, image, video) within 90 days of an election. The new section (RSA 664:14-c, “Synthetic Media and Deceptive and Fraudulent Deepfakes”) sets label language and creates enforcement mechanisms. Satire and parody are explicitly protected. The governor signed it on Aug. 2, 2024. (New Hampshire Government)

What it means for campaigns and PACs. If you use a voice clone, a generated image/video, or other synthetic content that could mislead a reasonable viewer, you must disclose that fact prominently. Placement matters: the disclosure must be clear to ordinary viewers/listeners. The law rejects a tech arms race in favor of truth-in-advertising basics, tailored to AI. (New Hampshire Government)

Why the 90-day window? That mirrors how other states are tackling deepfakes in election contexts—tightening rules as voters turn their attention and the risk of confusion spikes. New Hampshire’s legislative path and reporting around the bill were explicit that the Biden-voice robocall incident catalyzed this move. (AP News)

NHPR’s reporting on the new disclosure requirements: https://www.nhpr.org/politics/2024-08-06/new-hampshire-law-requires-more-transparency-in-ai-generated-political-ads. (New Hampshire Public Radio)


3) HB 1688 (Chapter 209, effective July 1, 2024): Guardrails for AI used by state agencies

What it is. HB 1688 establishes Chapter 5-D of state law—“Use of Artificial Intelligence by State Agencies.” It defines AI, mandates transparency when agencies use AI to generate content or make recommendations, requires human review for AI-driven decisions, and prohibits certain practices (e.g., real-time biometric identification for surveillance without a warrant; creation of deceptive deepfakes by agencies). It also exempts specified research uses at state-funded higher ed institutions. (New Hampshire Government)

Why it matters. This is where AI regulation gets real for residents’ daily lives. When an agency uses AI for benefits screening, fraud detection, licensing, or public communications, HB 1688 obligates transparency and oversight: people must be able to tell when AI was involved, humans must check consequential outputs, and civil-rights protections remain paramount. It’s a governance bill, not a gadget bill. (LegiScan)

How agencies will comply. The statute’s definitions and obligations dovetail with federal guidance (OMB’s 2024 memo for federal agencies) and national standards (NIST’s AI Risk Management Framework), even though those documents don’t bind state government. Smart agencies will inventory AI uses, classify risk, publish plain-language notices, and log impact assessments—steps already common in federal procurement and increasingly expected of states. (LegiScan)
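The inventory-and-classify step described above can be sketched as a simple record format. This is a hypothetical illustration: the field names, risk tiers, and example system are assumptions for the sketch, not language drawn from HB 1688, the OMB memo, or the NIST framework.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical record format for an agency AI-use inventory.
# Field names and risk tiers are illustrative, not statutory.
@dataclass
class AIUseRecord:
    system_name: str
    purpose: str            # plain-language description for the public notice
    risk_tier: str          # e.g. "minimal", "moderate", "consequential"
    human_review: bool      # True if a human checks outputs before action
    public_notice_url: str  # where the plain-language disclosure is posted

def inventory_to_notice(records: list[AIUseRecord]) -> str:
    """Render the inventory as a JSON document an agency could publish."""
    return json.dumps([asdict(r) for r in records], indent=2)

uses = [
    AIUseRecord(
        system_name="benefits-screening-assistant",
        purpose="Flags incomplete benefit applications for staff follow-up",
        risk_tier="consequential",
        human_review=True,
        public_notice_url="https://example.nh.gov/ai-notices/benefits",
    ),
]
print(inventory_to_notice(uses))
```

Even a flat list like this gives an agency the raw material for the transparency notices and human-review documentation the statute expects.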

Helpful explainer (secondary source): Akin Gump’s legal note distills the core rule set for state agencies (disclosures, human review, civil-rights constraints, surveillance limits). While the NH General Court text controls, the summary captures how counsel is advising clients to operationalize it. (Akin Gump)


Two Measures That Shaped the Debate (But Didn’t Become Law)

HB 1500 (status: introduced 2024; did not pass): “Misleading synthetic media” crimes, including election-related offenses

HB 1500 proposed explicit criminal offenses for the unlawful distribution of misleading synthetic media, with felony escalation for repeat election-related violations. It did not pass in 2024, but its language previewed where criminal law might head if lawmakers decide that HB 1432’s fraud-focused approach needs reinforcement during election windows. Expect this bill—or a revised successor—to reappear in future sessions. (LegiScan)

HB 1710 (status: failed 2024): Civil actions for prohibited synthetic media

HB 1710 would have created a civil cause of action specific to certain prohibited synthetic media uses in election contexts—overlapping in spirit with HB 1432’s private right of action but tuned more narrowly to election harms. It failed in 2024, yet pieces of it could be grafted onto future election bills if lawmakers or courts decide a more targeted civil remedy is warranted. (LegiScan)


How These Laws Interact With Federal Enforcement

FCC’s TCPA ruling (Feb. 8, 2024): The Commission explicitly held that AI-generated voices qualify as “artificial” under the Telephone Consumer Protection Act, making many AI voice robocalls unlawful without prior express consent. That ruling already empowers New Hampshire’s Attorney General, telecom carriers, and private litigants to block, fine, and sue over AI robocalls. The NH robocall case that mimicked Biden’s voice is the archetype; federal and state actions are now proceeding in parallel, including proposed and final fines. (FCC Docs)

New Hampshire’s own enforcement: The AG’s Election Law Unit publicly documented how investigators traced the January 2024 robocalls, identifying the source and signaling ongoing civil and criminal avenues. The League of Women Voters filed suit as well, demonstrating how private litigants and civic groups can complement public enforcement—an approach echoed in HB 1432’s private right of action. (doj.nh.gov)

The full text of the FCC’s declaratory ruling is available here: https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf. (FCC Docs)


What Counts as “Deceptive” or “Misleading” Synthetic Media?

New Hampshire’s laws use common-sense, viewer-centric tests. If a reasonable person could be misled into believing that synthetic audio or video depicts the real words or actions of an identifiable person—especially within 90 days of an election—and you fail to disclose that it’s synthetic, you’re in the danger zone. HB 1596’s disclosure rule is intentionally simple: if you manipulate reality in ways likely to confuse voters, label it so people can evaluate the content properly. HB 1432 then adds teeth when the synthetic media is used to defraud. (New Hampshire Government)

The statutes also carve out satire and parody, recognizing First Amendment boundaries and long-standing political speech traditions. That carve-out is not a blanket shield: if “jokes” are deployed to suppress votes or to commit fraud, expect enforcement to test intent and effect. (New Hampshire Public Radio)


How New Hampshire Compares to Other States

According to the National Conference of State Legislatures, states across the country are experimenting with deepfake laws—especially around elections. New Hampshire now sits in the forward cohort: a criminal deepfake statute with a private right of action (HB 1432), a clear election-window disclosure requirement (HB 1596), and government-use governance rules (HB 1688). Many states have one or two of these; fewer have all three. Watch litigation in high-regulation states like California to see how courts balance speech and deception; those rulings will inevitably inform New Hampshire’s enforcement posture. (NCSL)

NCSL’s legislative tracker/map on deceptive audio/visual media to visualize where NH fits nationally: https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation. (NCSL)


Practical Guide: What Political Campaigns, Lobbyists, Creators, and Agencies Must Do

Campaigns, parties, PACs

  1. Label it: If you use synthetic media in ads near an election, include prominent disclosures (HB 1596). Plan for audio, video, image, SMS, and web placements—your compliance must match your distribution. (New Hampshire Government)
  2. Avoid AI robocalls without consent: The FCC’s TCPA ruling makes AI voice robocalls a legal tripwire. Even outside state law, this is enforceable now. (FCC Docs)
  3. Don’t fake opponents or voters to mislead: HB 1432’s criminal and civil hooks mean deceptive impersonation isn’t just bad optics; it’s a lawsuit waiting to happen. (New Hampshire Government)

Lobbyists and would-be lobbyists

Using deepfakes fraudulently can bar lobbyist registration under HB 1432. If your firm experiments with synthetic media, implement vetting and documented approvals. (Akin Gump)

Creators, consultancies, platform vendors

Build disclosure tooling into your creative pipeline. Maintain provenance logs (who made what, when, with which model). Track consent for voices and likenesses. When in doubt, disclose. (New Hampshire Government)

State agencies

  • Publish plain-language notices when AI helps generate content, make recommendations, or shape decisions.
  • Keep humans in the loop for consequential calls, and document reviews.
  • Do not deploy real-time biometric identification for surveillance without a warrant; do not create deceptive deepfakes.
  • Exemptions exist for university research, but those programs should still maintain transparency and ethics boards. (LegiScan)

Frequently Asked Questions (with Straight Answers)

Does New Hampshire “ban” deepfakes?
No. It criminalizes fraudulent uses, and it requires disclosure for deceptive synthetic media in political ads near elections. Legitimate satire, art, and labeled synthetic content are not banned. (New Hampshire Government)

Are all AI robocalls illegal in New Hampshire?
Under federal law (TCPA as interpreted by the FCC), calls using AI-generated voices to deliver messages without consent are illegal nationwide. New Hampshire’s AG has already pursued the alleged election-suppression robocalls; further cases can be brought under federal and state law. (FCC Docs)

What if an out-of-state group targets New Hampshire voters?
Jurisdiction follows the harm. State enforcers and private litigants can pursue actors who target Granite Staters, and telecommunications rules empower carriers and the FCC to block and fine bad actors. (FCC Docs)

Can a victim sue over a harmful deepfake that isn’t strictly election-related?
Yes—HB 1432’s private right of action applies to fraudulent uses generally, not just political ones. Election-specific remedies were the subject of HB 1710 (failed in 2024) and could return in future sessions. (New Hampshire Government)


Enforcement Outlook: 2025–2026

Criminal cases will be selective and headline-making, focused on clear fraudulent deepfake schemes (fundraising scams, reputational attacks tied to monetary harm, identity theft). Civil suits will likely proliferate faster as individuals and organizations seek injunctions and damages under HB 1432. Election enforcement will blend HB 1596, existing election law (RSA 664), and federal telecom rules—especially inside the 90-day window before primaries and the general election. (New Hampshire Government)

The Attorney General’s Election Law Unit has already shown it can move quickly; after the Biden-voice incident, investigators publicly identified the sources, and national regulators followed with fines and compliance actions against the network that carried the calls. This “state-to-federal relay” is the pattern to expect. (doj.nh.gov)


Policy Gaps and Real-World Challenges

  1. Clarity for “disclosure” placement and format. HB 1596 expects clear, prominent labels. Campaigns should standardize readable on-screen text for video, spoken labels up front in audio, and ALT text/captions online. Expect the Secretary of State or AG to issue best-practice guidance as casework accrues. (New Hampshire Government)
  2. Authentication and provenance. Disclosures help, but they’re easy to omit. Public-private projects to watermark or sign content (C2PA-style provenance) aren’t mandatory under NH law today, but vendors and newsrooms should plan for it. Enforcement works best when evidence has chain-of-custody. (Context: states and Congress are watching content authenticity closely post-2024.) (NCSL)
  3. Cross-border posting. Synthetic content posted outside New Hampshire but targeted at Granite Staters (platform geotargeting, NH-specific messaging) will test jurisdictional lines. Coordination with platforms and carriers matters. The TCPA ruling provides a federal hook for voice calls; social and streaming will be a harder knot. (FCC Docs)
  4. Government adoption risks. HB 1688 is clear about human review and transparency, but agencies will still need practical playbooks: model inventories, vendor questionnaires, and impact assessments for services that could trigger disparate impact (e.g., benefits eligibility). This is where borrowing from federal OMB and NIST guidance helps—even if not mandated—so New Hampshire agencies don’t reinvent the wheel. (LegiScan)
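The chain-of-custody idea in point 2 can be illustrated with a toy hash-and-verify sketch. This is a simplified assumption-laden demo: real deployments would use C2PA manifests or asymmetric signatures rather than the shared-secret HMAC used here.

```python
import hashlib
import hmac

# Demo-only shared secret; a real system would use asymmetric keys (e.g. C2PA).
SIGNING_KEY = b"shared-secret-for-demo-only"

def sign_manifest(content: bytes) -> dict:
    """Produce a signed manifest binding a content hash to a signature."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "hmac": tag}

def verify(content: bytes, manifest: dict) -> bool:
    """Check that a received file matches the hash recorded in the manifest."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(expected, manifest["hmac"])

original = b"original campaign video bytes"
m = sign_manifest(original)
print(verify(original, m))            # unaltered file verifies
print(verify(b"tampered bytes", m))   # altered file fails verification
```

The takeaway for enforcers and newsrooms: when content carries a verifiable hash and signature, proving tampering (or authenticity) stops being a forensic guessing game.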

A Field Guide for Newsrooms and Fact-Checkers

  • Demand disclosures on political content within 90 days of an election; flag missing labels to audiences.
  • Verify voices: when an audio clip goes viral, seek the original file, metadata, and a statement from the purported speaker. The AG’s robocall investigation shows that even crude forensics—timestamps, carrier records—can crack cases quickly. (doj.nh.gov)
  • Use side-by-side comparisons in coverage: show the labeled synthetic vs. the real source, and link the statute section you’re invoking (RSA 664:14-c). (New Hampshire Government)
  • Treat provenance as a beat. Ask platforms and campaigns what signals they use (hashes, signatures, or watermarking). When labels are absent, ask why.

For background on what counts as a political deepfake, see NHPR’s reporting on HB 1596: https://www.nhpr.org/politics/2024-08-06/new-hampshire-law-requires-more-transparency-in-ai-generated-political-ads. (New Hampshire Public Radio)


What To Watch in the Next Session

  • A revival of HB 1500-style offenses for “misleading synthetic media” with election-specific enhancements, possibly harmonized with HB 1596’s disclosure regime. (LegiScan)
  • Targeted civil-remedy bills building on the concept behind HB 1710, tuned to election contexts and clearer damages standards. (LegiScan)
  • Administrative guidance from the AG or Secretary of State on disclosure formatting and record-keeping, especially for digital ads and streaming. (New Hampshire Government)
  • Procurement standards for state agencies under HB 1688—expect RFP language about AI transparency, human review, and data protection, aligned to national best practices. (LegiScan)

Bottom Line

New Hampshire has stitched together a workable, enforceable AI rule set ahead of the 2026 cycle:

  • HB 1432 makes fraudulent deepfakes a crime and lets victims sue.
  • HB 1596 forces disclosure for deceptive AI political ads within 90 days of elections.
  • HB 1688 requires state agencies to be transparent, keep humans in the loop, and respect civil-rights and surveillance limits when they use AI.

This isn’t a ban on AI; it’s a demand for honesty and accountability. If you alter reality to fool people—especially voters—you must label it or face penalties. If you’re the government, you can’t hide behind an algorithm. And if you weaponize synthetic media to defraud, you’re not just playing with new tech—you’re breaking the law. (New Hampshire Government)


Related Videos

  1. FCC Open Meeting / TCPA ruling context — segments explaining the AI voice ruling are available via the FCC; link to the ruling text for accuracy: https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf. (FCC Docs)
  2. NHPR coverage of HB 1596 — short broadcast pieces are available through NHPR’s site and channel; use this article as the anchor explainer: https://www.nhpr.org/politics/2024-08-06/new-hampshire-law-requires-more-transparency-in-ai-generated-political-ads. (New Hampshire Public Radio)
  3. Legal explainers on AI deepfakes in elections — pair any on-page video with NCSL’s legislative tracker for accuracy: https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation. (NCSL)


Methodology & Notes

This report relies on primary legal texts from the New Hampshire General Court and official agency documents, supplemented by reputable journalism and legal analyses to contextualize enforcement and policy trends. Where we summarize statutory obligations, we link directly to controlling text. Where we describe enforcement trends, we cite official announcements and rulings. All links were verified at the time of publication.
