
A super PAC called Leading the Future has raised $125 million to oppose state-level candidates who support AI regulation. Its backers include Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, Andreessen Horowitz, and Perplexity. Its first target is Alex Bores, a member of the New York State Assembly and former Palantir engineer now running for Congress, who sponsored a transparency law requiring large AI labs to maintain public safety plans and report catastrophic incidents.
Meta has separately committed $65 million to two super PACs backing industry-friendly state candidates. AI companies and executives directed at least $83 million to federal campaigns in 2025. The average New York Assembly race raises about $100,000 total.
The Leadership Question
The most consequential leadership decisions in AI right now are not product decisions. They are governance decisions. And many of the industry's most prominent figures are making a clear choice: spend to prevent oversight rather than engage with it.
This approach treats regulation as binary: either there is none, or there is too much. Leaders operating from this framework tend to advocate for federal rules while funding opposition to state efforts, producing, in practice, no regulation at all. For any executive who publicly claims to welcome reasonable oversight, the distance between that position and the dollars flowing into these PACs is a credibility problem that boards and investors should be examining closely.
A Modest Trigger, a Disproportionate Response
The law that provoked this spending is the RAISE Act, signed into law in December. It requires AI companies with more than $500 million in revenue to publish a safety plan, follow it, and disclose catastrophic incidents. It does not restrict research or prescribe product design. It asks for transparency.
Pharmaceuticals, aviation, and financial services operate under far more demanding regulatory regimes. That a disclosure requirement this modest provoked a $10 million campaign against a single state legislator tells boards and investors something specific about the risk tolerance and strategic judgment of the leaders behind that spending.
The Internal Fracture
Bores draws grassroots support from employees at the same companies whose executives are funding campaigns against him. When the people building the technology disagree with leadership about how it should be governed, the organization has a culture problem with direct consequences for retention, morale, and long-term coherence.
An Anthropic-backed PAC called Public First Action is spending $450,000 in support of Bores, positioning itself as pro-AI with a focus on safety and public oversight. The industry is splitting along governance lines, and those splits are becoming visible in ways that matter for talent markets and organizational credibility.
Implications for Leaders, Boards, and Investors
Executive Selection
Companies in AI or adjacent to it need leaders who treat regulatory engagement as a strategic competency. Boards evaluating C-suite candidates should be asking how prospective leaders think about transparency, accountability, and the role of public institutions in building long-term enterprise value. The leaders who will build durable companies are the ones who understand that earning public trust is a prerequisite for sustained growth.
Board Composition
Many technology boards lack directors with deep experience in regulated industries. Healthcare, financial services, and energy executives understand how to operate within regulatory frameworks and how to shape them constructively. That perspective is becoming essential for AI companies entering a period of intense public scrutiny. Boards composed entirely of growth-oriented technologists and investors will struggle to see the governance risks building beneath the surface.
Reputational Exposure
Employees, customers, and institutional investors are paying attention to how AI companies engage with public policy. Funding campaigns against transparency legislation is a reputational bet with asymmetric downside. Talent leaves. Consumers lose confidence. Regulators become more adversarial. Leaders who fail to weigh reputational exposure alongside political spending are making a calculation that will age poorly.
The Test Ahead
Bores has proposed a national AI governance blueprint spanning eight policy areas and 43 recommendations, including training data disclosure and metadata standards for tracing synthetic content. Whatever one thinks of the specifics, the willingness to do detailed policy work is the kind of substantive leadership this period requires.
The executives and board members who define the next generation of AI companies will be the ones who recognize that accountability and growth operate in the same direction. Those who treat oversight as the enemy will, over time, create the conditions for interventions far more aggressive than anything on the table today.

