We are living through one of those rare inflection points in history: the rise of artificial intelligence (AI) — not just as a novelty, but as a force reshaping economies, governance, civil rights, and everyday life. Here in New Hampshire, our state legislature has already recognized this, introducing bills to regulate political ads generated by AI, require disclosure of synthetic images, and address how state agencies use AI.
Yet the debate is far from settled. With competing visions of protecting innovation versus defending rights, of centralized versus state-based oversight, and of bold regulation versus moratoria, the stakes could not be higher.
This article examines where New Hampshire and the U.S. stand today, what research shows about AI’s risks and benefits, and where policy needs to land if we are to avoid both overreach and neglect.
What’s at Stake
Innovation vs. Risk
AI promises big advances, from health-care diagnostics to smarter infrastructure to tools that help small businesses grow. But the risks are no less real. Misinformation, discrimination embedded in algorithmic decision-making, opaque “black box” models, threats to privacy, job displacement: these aren’t hypothetical. They are increasingly documented.
A recent Pew Research Center study comparing AI experts with the general public found that while many experts believe AI will benefit them personally, the public is far more wary, with levels of excitement and concern varying by domain.
Local policymakers across the country similarly express both optimism and serious anxiety about AI’s impacts, with surveillance, misinformation, and political polarization among the top worries. Importantly, many feel underprepared to address them.
Current Policy Landscape: Federal, State, and New Hampshire
At the federal level, the U.S. has no comprehensive law exclusively governing AI. Instead, what exists is a patchwork of sectoral regulations (privacy, discrimination, consumer protection) plus proposals in flux.
In New Hampshire, legislators have moved on multiple fronts: adding “synthetic images” to laws governing illegal sexual content; regulating AI in political advertisements (requiring disclosure when ads use manipulated or AI-generated content); and proposing rules for state agencies’ use of AI, limits on decision-making without human oversight, and possibly a registration requirement for the foundation models that underlie AI tools.
These steps are sensible and relatively modest. But modesty might not be enough.
The Big Debate: Moratoriums, Preemption, Transparency, and Co-Governance
Several major controversies have arisen in recent months that bear directly on New Hampshire’s future:
- Moratoriums and Federal Preemption: A proposed federal moratorium to block states from passing any AI regulation for ten years has drawn sharp criticism. Opponents view it as an overreach that strips states of their ability to protect their residents. The Senate overwhelmingly rejected such a provision.
- Industry Pushback vs. Public Concern: Major AI firms have argued that heavy regulation, or conflicting state and federal rules, could undercut innovation, harm competitiveness (particularly relative to global rivals like China), and slow beneficial applications. At the same time, surveys show broad public support for regulation, especially to ensure fairness, accuracy, accountability, and safety.
- Regulatory Capture and Expertise Gaps: Research suggests that AI policy is being shaped heavily by industry, creating clear risks of regulatory capture: rules that favor the interests of large tech companies while offering weaker protections for privacy, fairness, and civil liberties.
- Transparency and Disclosure: One point of common ground across many policy proposals is greater transparency: about how AI systems are trained, tested, and audited; about when AI is used in media, government, or advertising; and about what safeguards are in place. Many experts and public polls emphasize this as a minimal requirement.
Why New Hampshire Can’t Sit on the Sidelines
Here in the Granite State, we have advantages and vulnerabilities.
Advantages:
- A smaller population and government structure can allow more agile policy experimentation.
- Diverse needs (rural, urban, educational, health-care) provide a test bed for different regulatory models.
- High civic engagement: citizens and local officials are already aware of AI’s issues.
Vulnerabilities:
- Without clear state rules, AI misuse (in public services, advertising, law enforcement, etc.) could cause harm before there’s recourse.
- Patchwork regulation risks conflicting requirements for companies or citizens, creating legal uncertainty and inhibiting responsible innovation.
- If New Hampshire does not proactively legislate, default norms will be set by big tech or by federal decision-makers often removed from local contexts.
What a Sensible Policy Agenda Looks Like
To avoid extremes of overregulation or underprotection, here is a policy blueprint that New Hampshire should consider, drawing on research, best practices, and the state’s capacities:
| Policy Goal | Proposed Measures | Trade-offs & Safeguards |
|---|---|---|
| Transparency & Disclosure | Require public disclosure when AI is used in political ads, in public services (e.g. welfare, criminal justice), or in any system that materially affects people. Create a public registry of “foundation models” operating in NH. | Protect proprietary information where legitimate, but require minimal standards (auditability, documentation). Use sunset clauses to adapt with evolving tech. |
| Human Oversight & Accountability | Any automated decision must allow human review. Government agencies using AI must publish oversight protocols. Define and penalize harms (e.g. discrimination, defamation). | Avoid overly burdening small entities; phase implementation; offer financial or technical assistance for compliance. |
| Risk Tiers & High-Risk AI | Classify uses of AI by risk: e.g. low-risk (text generation for entertainment), medium-risk (health recommendations, job screening), high-risk (criminal justice, political influence, critical infrastructure). Apply higher regulatory burdens to higher risk. | Risk classification must be transparent and periodically revisited; the potential for misclassification must be mitigated; include an appeals or review process. |
| Public Participation & Ethics Oversight | Create an independent NH AI Ethics and Oversight Commission with expert, civil-society, business, and public members. Provide a mechanism for public comment in rulemaking. Embrace co-governance. | Safeguard against capture; ensure diversity of voices; give the commission teeth (decision-making or binding influence) rather than a purely advisory role. |
| Coordination with Federal & Other States | Push for federal standards that set floors (not ceilings), so states can add protections. Monitor and harmonize with states that are ahead (e.g. CA, CO) to avoid conflicts. | Avoid preemption that removes state rights; be careful about conflicts of law; interstate compacts or model laws may help. |
Risks of the “Moratorium” Path
A few recent proposals — like a 10-year moratorium on state AI regulation — have been floated and, in many cases, rejected.
Here’s why those moratoria are dangerous:
- They freeze progress on protections. Harmful AI uses are already here; waiting a decade gives trust-eroding events time to occur.
- They shift power to the federal and corporate level, reducing local accountability.
- They reduce flexibility. Different states face different issues; one size will not fit all. Local variation can be a source of innovation (and caution).
- They may backfire for innovation. Companies may prefer regulatory clarity, including state laws, to a vacuum of governance. Clarity sometimes helps more than deregulation.
My View: New Hampshire Should Lead, Not Lurch
Opinions are cheap and plentiful; leadership is rare. New Hampshire has the opportunity to take a middle path: neither stifling innovation with rigid rules nor allowing unprincipled use of AI to erode rights and trust.
Here are my convictions:
- Regulation done right accelerates trust, which is itself a form of competitive advantage. Companies operating in an environment where people trust AI are more likely to see adoption, investment, and cooperation.
- We don’t need perfection, but we do need clarity. Practitioners, public servants, and citizens are already confused. Giving them rules, duties, and oversight will reduce damage.
- Transparency isn’t optional. When AI is used, especially to make decisions with real consequences, it must be disclosed. Hidden automated processes can’t masquerade as human judgment.
- Humans must remain in the loop. Where lives, liberty, or dignity are affected, there should always be human agency, oversight, and accountability.
- Lead by choice, not by reaction. New Hampshire should not merely react to federal preemption, moratoria, or industry pressure. Nor should it wait for perfect models elsewhere. We can begin now, thoughtfully.
Conclusion
The AI era isn’t coming—it’s already here. New Hampshire, like all states, is being asked to make choices: choices about innovation, justice, safety, privacy, accountability. The path we pick now will echo for years.
If we succeed, Granite Staters will be protected and empowered. If we fail, we may pay a high price: loss of trust, erosion of rights, and missed opportunities.
It’s time for our lawmakers, citizens, and civil society not merely to respond, but to shape the future. We deserve nothing less.