America Doesn’t Make Oppenheimers Like We Used To

What business did Dario Amodei think he was getting into when Anthropic signed on as a Pentagon contractor? 

Edited by Sam Thielman


RIGHT BEFORE THE PENTAGON gave Anthropic a 5 p.m. deadline to provide it total and uncaveated access to its artificial intelligence agent Claude, President Trump on Friday seemed to settle an impasse that was full of implication for the future of militarized AI: Anthropic is now barred from government contracting—all government contracting, not just with the Pentagon. 

It looks like an eleventh-hour intervention by OpenAI CEO Sam Altman, a fellow recipient of the Pentagon's $200 million-per-company frontier AI contract, to preserve Anthropic's share of the contract was unsuccessful. Anthropic's stated worries over the use of its AI for mass domestic surveillance and autonomous weapons represent "strong-arm[ing] the Department of War" to Trump. The message for every AI company going forward is to do whatever the administration says or lose out on a contracting opportunity that, while small by both Pentagon standards and Wall Street expectations for eventual AI profitability, represents a growing demand for AI. [It’s also one of the few stable revenue streams for AI companies currently struggling with consumer adoption and burning inconceivable sums of cash—Sam.]

Criminal penalties, and the invocation of the Defense Production Act to seize the AI, remain in reserve for the time being. (Trump referred to criminal penalties, but only if Anthropic drags its feet on a six-month phase-out.) It remains to be seen whether Anthropic will attempt to fight a contract de-listing, which is not supposed to occur through presidential blacklist.

Trump framed Anthropic as "Leftwing nut jobs." But hours after we published our piece yesterday, Anthropic CEO Dario Amodei released an extraordinary statement framing his objections—hesitations might be a better word—to the Pentagon. It is anything but left-wing. And it raised an obvious question: What did Anthropic think was going to happen when it contracted with the Pentagon for "frontier" AI capabilities? 

When you take Doctor Doom's money to provide him a lathe to construct components for anthropomorphic robots, do you not understand that he is going to build Doombots? 

Bro, did you not see Oppenheimer? 

With any corporate statement, it's instructive to look at the concerns the statement attempts to preempt. Those provide clues as to what arguments and sentiments the corporation, and often the milieu from which the corporation emerges, takes seriously. Amodei starts out with this: "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." 

From the jump, and animating the entire statement, Amodei declares himself ready, willing and able to provide AI tools to the Pentagon. He feels compelled throughout to say he wants Anthropic to be a Pentagon contractor. He accepts the mission statement that AI is an arms race and the United States should win it. He further accepts the Hulkamania version of geopolitics beloved by the Alexander Karps of the Valley, in which U.S. military power is about promoting "democratic values" and only its enemies are autocratic. Amodei devotes a paragraph to saying that "against the company's short term interest," he rejected cooperation with the Chinese. Google, one of the remaining recipients of the Pentagon's frontier AI contract, sure didn't.

Sean Parnell, the chief Pentagon spokesperson, accused Anthropic of trying to "dictate the terms regarding how we make operational decisions." That criticism, whether made in good faith or not, clearly bothers Amodei. It leads him to this declaration of his good faith as a military contractor: "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." 

Among the particular military operations for which the military used Claude and which Anthropic apparently "never raised objections to" was, reportedly, the kidnapping of Nicolas Maduro. I wonder if we'll learn later that this bright-line violation of Amodei's proclaimed safeguards ("Facilitat[ing]... any act of violence or intimidation targeting individuals" is supposed to be a terms-of-service violation) seeded the bed for Anthropic's present hesitations, particularly now that they're supposedly disfavored from contracting. On the other hand, I have seen many cases—so many that I didn't even realize it at first—of those within the Security State or who identify with it dividing the world into Americans, who at least have some rights claims, and foreigners/noncitizens, who have none. So we'll see.

All of this is to highlight the absence of any line in Amodei's statement that said something like: And that's why Anthropic must walk away from this contract. 

In this clash with the Pentagon over what Amodei is framing as a fundamental objection—something he cannot "in good conscience" go along with—Amodei never says Anthropic will terminate the arrangement. He says regretfully that he'll work to "offboard" Anthropic if he has to, but that's a decision made by Trump. Amodei's "strong preference" is to continue the Doombot contract that he doesn't want to admit to himself is a Doombot contract, even when Doctor Doom is telling him that it is unacceptable to presume to tell Doom how Doom constructs his arsenal. Instead of quitting over principle, Amodei set Anthropic up to be fired. Now they have neither principles nor government contracts. 


NOW: THE TWO use-cases Amodei raises are extremely serious. Speaking to the point my friend made in yesterday's edition, Amodei says that AI is "simply not reliable enough" to power autonomous weapons (though Amodei also says he doesn't object to autonomous weapons in principle!). Parnell says the Pentagon doesn't "want" to develop autonomous weapons "without human involvement"—which is, conspicuously, a red herring, since any autonomous weapons system will have human involvement somewhere in the chain. The relevant considerations that Parnell elides are where, how and how meaningfully in the process that involvement occurs. This is an ominous reminder that the point of AI weaponry is not precision but scale. If you want to kill a lot of people and destabilize their government, AI is indeed reliable enough. 

Amodei's other hesitation concerns something I've covered for decades: mass domestic surveillance. As Sen. Ron Wyden has warned for years, Amodei acknowledges that right now, AI can siphon all the voluminous, hyper-intrusive personal data that data brokers have acquired. Out of that data, AI can assemble an even more powerful panopticon than currently exists. Parnell says the Pentagon has no interest in mass surveillance of Americans—lmao—since that would be illegal. Amodei's statement says that is currently a legal grey area, as "the law has not yet caught up with the rapidly growing capabilities of AI." (It's not a Constitutional grey area, however, but that never stopped the National Security Agency, which, uh, belongs to the Pentagon.) Remember that Pete Hegseth's maxim is "maximum lethality, not tepid legality."

Amodei's desire to remain a military contractor led him to frame Anthropic's hesitations as a narrow band of objections that are not central to Anthropic's military integration. His competitors, it is worth noting, aren't even doing that. (More on that in a second.) All of them accept that they ought to be in business with the military, even during this current moment of maximal bellicosity. Amodei, it is highly conspicuous, doesn't register building a surveillance panopticon of foreigners as a problem. ("We support the use of AI for lawful foreign intelligence and counterintelligence missions.") Exceptionalizing foreseeable atrocities as regrettable abuses despite their emergence from standard operations is a throughline of the Security State during the War on Terror. 

The time to worry about everything ostensibly concerning Amodei was before signing the contract that Amodei didn't wish to abandon. America is in such steep decline that we don't even make Oppenheimers like we used to. 

Speaking of. Earlier this week, New Scientist reported on research coming out of King's College London that found that in war-game scenarios, three leading LLMs, including Anthropic's Claude, race up the escalation ladder: "In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. 'The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,' says [researcher Kenneth] Payne. … OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment."

As always, it is the workers for these companies—the ones without whom the code will be neither written, tested nor debugged—who seek to act responsibly when their employers will not. AI appeals to capitalists in the first place as a response to that reality: to replace as much labor as possible and discipline what remains. 

A coalition of unions representing Amazon, Google and Microsoft workers and their supporters on Friday afternoon released their own statement responding to Hegseth’s ultimatum. They showed the sort of principle that Amodei would not. "We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon," they wrote, on behalf of No Tech for Apartheid, Amazon Employees for Climate Justice, the Amazon Labor Union, No Azure for Apartheid and others. "Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and oppression…

"Executive leadership at Google, Microsoft and Amazon must reject the Pentagon’s advances and provide workers with transparency about contracts with other repressive state agencies including DHS, CBP and ICE. We invite workers to join us in organizing to ensure our leadership does not use our labor for mass surveillance, weaponry and war." 

WALLER VS. WILDSTORM, the superhero spy thriller I co-wrote with my friend Evan Narcisse and which the masterful Jesús Merino illustrated, is available for purchase in a hardcover edition! If you don't have single issues of WVW and you want a four-issue set signed by me, they're going fast at Bulletproof Comics! Bulletproof is also selling signed copies of my IRON MAN run with Julius Ohta, so if you want those, buy them from Flatbush's finest! IRON MAN VOL. 1: THE STARK-ROXXON WAR, the first five issues, is now collected in trade paperback! Signed copies of that are at Bulletproof, too! And IRON MAN VOL. 2: THE INSURGENT IRON MAN is available here!

No one is prouder of WVW than her older sibling, REIGN OF TERROR: HOW THE 9/11 ERA DESTABILIZED AMERICA AND PRODUCED TRUMP, which is available now in hardcover, softcover, audiobook and Kindle edition. And on the way is a new addition to the family: THE TORTURE AND DELIVERANCE OF MAJID KHAN.

And you can pre-order Friend of FOREVER WARS Colin Asher's new book, The Midnight Special: The Secret Prison History of American Music, at this link!