
Hello {{first_name|Motivated and Miffed Community}},
This week, AI stopped being a background story. Anthropic sued the federal government over what it can and can't let its models do. A new report quietly mapped which workers are most in the crosshairs. OpenAI crossed $20 billion in annual revenue like it was a Tuesday. And Apple — bless Apple — still needs more time with Siri. (Bloomberg confirmed this week. We'll check back in another quarter.)
✅ TL;DR
⚔️🏛️ Anthropic is suing the Pentagon to keep its guardrails intact.
📊👤 A new report shows AI exposure is real — but the layoffs haven't followed. Yet.
💰🚀 OpenAI crossed $20B ARR and is circling a trillion-dollar valuation.
🛡️💉 A prompt injection attack just hit 4,000 developers. Security can't be an afterthought.
Today’s Sponsor
What do these names have in common?
Arnold Schwarzenegger
Codie Sanchez
Scott Galloway
Colin & Samir
Shaan Puri
Jay Shetty
They all run their businesses on beehiiv. Newsletters, websites, digital products, and more. beehiiv is the only platform you need to take your content business to the next level.
🚨Limited time offer: Get 30% off your first 3 months on beehiiv. Just use code JOIN30 at checkout.
🧠 AI News
1) Anthropic Took the Pentagon to Court — and Meant It

The Department of Defense wanted Anthropic to strip the safety guardrails off Claude for military use. Anthropic said no. Defense Secretary Pete Hegseth gave them a deadline. CEO Dario Amodei still said no. The DOD formally designated Anthropic a "supply chain risk" — a label historically reserved for foreign adversaries — and Anthropic filed suit Monday to block it.
The core dispute was fairly clear: the Pentagon wanted unrestricted Claude access across all "lawful purposes." Anthropic drew the line at fully autonomous weapons and domestic mass surveillance. Amodei put it plainly — removing those limits "is contrary to American values." That's not a PR line. That's a company betting its government contracts on a principle.
The fallout is tangled. Defense contractors using Claude now have to certify they don't — an extraordinary administrative headache with real operational consequences. Fortune reported that Pentagon leaders had a "whoa moment" when they realized how embedded Claude already is in defense operations, including during the Iran conflict. Hundreds of employees at Google and OpenAI signed an open letter backing Anthropic's position. The company isn't alone in this.
Why it matters: When an AI lab goes to court to protect its guardrails, the debate about AI safety stops being theoretical.
2) Anthropic Also Published a Jobs Report This Week — Different Kind of Bomb

A new Anthropic research paper — "Labor Market Impacts of AI: A New Measure and Early Evidence" — introduced something called "observed exposure": a metric comparing what AI is theoretically capable of doing against what workers are actually using it for in practice.
The findings are worth sitting with. Computer programmers sit at 75% task exposure. Customer service reps, medical records specialists, and data entry workers follow close behind. The most-exposed workers skew older, female, more educated, and earn about 47% more than their zero-exposure counterparts. (So the people with the most credentials are also the most exposed. Make of that what you will.)
The tension in the data: actual unemployment impact right now is "small and insignificant." But hiring rates for workers aged 22–25 in high-exposure roles have measurably slowed. The impact isn't showing up through mass layoffs — it's showing up at the front door, quietly. AI isn't replacing workers. It's replacing job openings.
Why it matters: That's a harder story to tell, and a harder one to track — which is exactly why Anthropic measuring it matters.
(Sources: Anthropic Research, Fortune, CBS News)
3) OpenAI Just Crossed $20 Billion in Annual Revenue

For context: OpenAI was at roughly $2 billion ARR in January 2024. They crossed $20 billion by March 2026 — 810 million monthly active users, 1 million enterprise customers, and a funding round that values the company at $730 billion (backers: Amazon, SoftBank, Nvidia, Microsoft). An IPO targeting a $1 trillion valuation is reportedly in planning for late 2026 or 2027.
They also dropped GPT-5.4 this week — a model with a 1-million-token context window and a specialized "Thinking" version for heavier reasoning work. If you've been waiting to run an entire codebase through a model in a single session, your moment is getting closer.
Ten times the revenue. Two years. That's not a trend. That's infrastructure.
Why it matters: The gap between "AI is a tool" and "AI is the platform" is closing fast — and the capital is already priced in.
(Sources: TechCrunch, Yahoo Finance, Sacra)
🤯 Crazy AI News
A Prompt Injection Attack Just Compromised 4,000 Developers

A security incident this week saw a prompt injection attack spread through developer toolchains, hitting roughly 4,000 accounts. The mechanic: malicious instructions embedded in text that AI tools process, effectively tricking the model into executing commands the user never issued. It's the AI equivalent of someone slipping a note into your to-do list that says "also wire me $500."
This is the part of AI adoption nobody puts in the demo. As models get more embedded in code pipelines and agentic workflows, the attack surface gets wider. Security researchers have been flagging this for months. Now there's a number attached to it.
Why it matters: The more you trust the model, the more valuable it becomes to manipulate it. Security has to scale with capability — not catch up to it.
How would you rate this newsletter?
👋 That’s All
This week had a throughline: AI is no longer something companies experiment with — it's something governments negotiate over, workers factor into career decisions, and investors are pricing at a trillion dollars. The infrastructure is already in place. The arguments are just catching up.
Stay MOTIVATED,
Gio


