Sam Altman’s OpenAI has asked a key California political finance watchdog to investigate the local resident behind a pair of AI-related ballot measures over what the company described as “serious questions” about his potential motives, The Post has learned.
The complaint to California’s Fair Political Practices Commission, or FPPC, references East Bay native Alexander Oldham, who filed two pending proposals that, if approved, would empower state officials to regulate major AI firms – in part by putting a special focus on policing public benefit corporations. OpenAI recently converted into such an entity.
As The Post exclusively reported earlier this month, Oldham is the stepbrother of Zoe Blumenfeld, a senior employee at OpenAI’s chief rival Anthropic, and he also has ties to tech entrepreneur Guy Ravine, who has waged a bitter legal battle with OpenAI over who came up with the idea for the company.
The Post has not seen any evidence that Ravine was involved in the ballot initiative and he is not mentioned by name in OpenAI’s filing.
Oldham’s measures “appear to be designed to impose complex and unnecessary regulatory burdens on OpenAI,” an OpenAI lawyer writes in the complaint, a copy of which was obtained by The Post.
OpenAI alleged that Oldham may have violated state lobbying rules, including failure to make required disclosures.
“Experts stated and warned that the initiatives’ language is surgically tailored to target OpenAI’s unique public benefit corporation structure and could empower regulators to single out specific companies rather than set industry-wide standards — all while Mr. Oldham maintains ties to a businessman with a long-running dispute against OpenAI. These connections raise serious questions about who is really behind this effort,” the complaint states.
Oldham had “no known background in AI policy or political campaigns” prior to filing the ballot proposals, the complaint adds.
OpenAI’s lawyers allege that Oldham “appears to be a stand-in to obscure two of the measures’ true backers” and ask the watchdog agency to explore whether he has any ties to a nonprofit called Coalition for AI Nonprofit Integrity (CANI).
CANI is publicly backing a separate ballot proposal filed by Poornima Ramarao, the mother of an ex-OpenAI employee-turned-whistleblower who was ruled to have died by suicide, that aims to reverse OpenAI’s restructuring.
OpenAI alleges that the three measures “have unmistakable formatting similarities, suggesting that they were drafted by the same individuals.”
The Post has not seen any evidence that Oldham has a connection to CANI.
OpenAI previously accused CANI of obscuring its funding and violating state lobbying laws requiring public disclosures. The company has also accused CANI of possibly being a front for Elon Musk, who is currently suing OpenAI for abandoning its nonprofit mission.
The FPPC dismissed OpenAI’s initial complaint against CANI last fall, citing a lack of sufficient evidence of campaign finance violations.
Notably, OpenAI’s lawyer does not accuse Anthropic of involvement in the initiative.
In the new complaint, OpenAI’s lawyers draw a parallel between Oldham’s low-profile background and the background of onetime CANI President Jeffrey Mark Gardner — a New York-based LSAT instructor who led the nonprofit despite having no apparent connection to California or the AI industry. Gardner has since stepped down.
“When major political activity moves through opaque entities, it erodes public trust and clouds informed decision-making,” OpenAI’s outside law firm Jenner & Block said in a statement. “We respectfully ask the FPPC to encourage full candor and transparency so the public can evaluate these efforts on their merits.”
Oldham’s ballot measures received a title and summary from the California attorney general’s office earlier this month – meaning he could begin gathering the more than 500,000 signatures required to put them up for a vote this fall.
The FPPC, CANI and Oldham did not immediately return The Post’s requests for comment on OpenAI’s filing.
Earlier this month, Oldham told The Post that he crafted the ballot measures using AI chatbots because he wanted to “create a public document to spark a necessary debate on AI regulation and get the public thinking about these ideas.” He denied that he collaborated with anyone, including lawyers, to craft them.
“Let me make this very clear: Neither Guy Ravine nor Zoe Blumenfeld are involved in this initiative,” Oldham told The Post in a written statement. “I haven’t been in touch with Guy Ravine in nearly a decade and I have not been in touch with Zoe in more than two years. This initiative was filed, created, and funded by me.”
Anthropic also denied any connection, stating it “has had no involvement in, coordination with, or knowledge of any ballot proposals filed by Alexander Oldham, and the company does not support either proposal.”
Ravine vehemently denied that he had colluded with Oldham in any way or had any foreknowledge of the ballot measures, a sentiment echoed by Oldham.
“I have had no involvement in his initiative,” Ravine said. “I have not been in contact with Alex Oldham in approximately 10 years. My only connection to him is that his mother was an investor in a company I was involved with over a decade ago – a tenuous link at best.”
He also noted that he does “not have the financial resources to fund ballot initiatives.”