The Pentagon demands unrestricted control of AI systems for war, surveillance, and repression.
Details. The Pentagon is threatening to cancel its contract with Anthropic, and possibly to designate the company a “supply chain risk”, unless it drops certain restrictions on how its Claude AI model can be used by the military. Such a designation would effectively bar subcontractors from using Anthropic's technologies, severely undermining the company's business.
► Anthropic maintains that its restrictions prevent its AI from being used in “dangerous” ways, particularly in fully autonomous weapons without human oversight and in mass surveillance systems. The company argues these limits are necessary to reduce the risk of harm, and insists that any military use must comply with its “responsible AI” framework.
► Despite these stated safeguards, reports indicate that the US military has already used Claude, through a partnership with Palantir, in the operation to capture Nicolás Maduro in Venezuela, an episode that has become one of the epicentres of the dispute. The Pentagon cites such cases as evidence that these AI tools are already being used in real combat and field operations.
Context. The conflict is unfolding against the backdrop of a large-scale Pentagon programme to integrate AI into command systems, intelligence processing, and target designation.
► “Detachment 201” has been established, with technocrats from Palantir, Meta, and OpenAI granted officer ranks in the US Army. Pentagon documents further reveal that AI models are being prepared for information warfare. The state has also solicited $150 billion from private finance firms for military infrastructure, deepening the integration of financial capital directly into the military apparatus.
Important To Know. This is a stark example of deepening state-monopoly capitalism in the lead-up to war. Large military contracts are used as leverage to force corporations to submit to the interests of the dominant monopolies.
► In reality, Anthropic's “safeguards” are likely a bargaining chip in a struggle within the financial oligarchy. The company will either secure a larger share of government contracts and quietly drop its safeguards, or position itself as an “ethical” alternative in the AI market.