Cancel ChatGPT. Seriously. Here Is Why.
Sam Altman stood in the White House in January 2025, beaming next to Donald Trump, announcing a $500 billion AI infrastructure deal called Stargate. The same Sam Altman who, in 2016, said there were things OpenAI would “never do with the Department of Defense.”
That was before he donated $1 million to Trump’s inauguration. Before OpenAI submitted a white paper to the Trump administration proposing to rank every country in the world by its loyalty to American AI interests. Before OpenAI stepped in to take the Pentagon contract that Anthropic turned down on ethics grounds.
I cancelled my subscription the week the Pentagon deal made headlines. The $20 was never the issue. The issue is what $20 a month, multiplied by millions of subscribers, is funding.
TL;DR:
- OpenAI deleted its ban on military use in January 2024 and almost nobody noticed
- Sam Altman is now a Trump administration partner, donating to the inauguration and pitching American AI hegemony from the White House
- OpenAI took Pentagon work Anthropic refused specifically on ethical grounds
- The Trump administration wants AI to run propaganda operations that “suppress dissenting arguments”
- Your conversations are training data by default, and now there are ads
- The bait-and-switch on regulation: from “please regulate us” to “regulation would be disastrous”
- What I use instead, and the honest tradeoffs
⚠️ The Weapons Policy They Hoped You Wouldn’t Notice
On January 10, 2024, OpenAI quietly rewrote its usage policy.
Up until that day, the policy explicitly banned use of OpenAI technology for “weapons development” and “military and warfare.” Not buried in the fine print. Explicit. The company had used this language to justify declining Department of Defense overtures for years, pointing to the policy as evidence that the “benefit of humanity” mission meant something concrete.
On January 10, both of those prohibitions disappeared from the document.
OpenAI’s spokesperson told reporters the rewrite was about making the policy “clearer” and “more readable.” The new version says don’t “harm others” and cites “weapons” as an example. Researchers who study military AI policy were not impressed with the distinction. Heidy Khlaaf, an engineering director at Trail of Bits and a co-author of a 2022 paper with OpenAI researchers specifically flagging military use risks, noted that the former policy “clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law.” Her conclusion: “The potential implications for AI safety are significant.”
The timing was not accidental. Within weeks OpenAI announced a cybersecurity collaboration with DARPA. Within months Microsoft, which had paid $10 billion for deep OpenAI integration, was pitching DALL-E as a battlefield command-and-control tool to the Pentagon.
By October 2024, a procurement document obtained by The Intercept confirmed that U.S. Africa Command (AFRICOM) had made the first purchase of OpenAI products by a combatant command “whose mission is one of killing.” AFRICOM stated that OpenAI tools were “essential” to its mission. The document described needing AI for analyzing intelligence to extract “actionable insights” and “decision-making” in “dynamic and evolving threats across the African continent.”
OpenAI’s website still says its mission is to “ensure that artificial general intelligence benefits all of humanity.”
🤝 The Trump/Stargate Alliance
Here is the Sam Altman trajectory, compressed:
2015: Co-founds OpenAI as a nonprofit explicitly to prevent powerful AI from being “captured by any single commercial interest.” Says there are things the company will “never do with the Department of Defense.”
2019: Restructures to “capped-profit” to take investment while claiming guardrails remain.
2023: Testifies before Congress that AI regulation is necessary and important.
Early 2025: Donates $1 million to Trump’s inauguration. Stands in the White House to announce Stargate, a $500 billion AI infrastructure joint venture with the administration.
March 2025: OpenAI submits a white paper directly to the Trump administration. The document proposes that the US create “direct lines” for AI companies to reach “the entire national security community,” develop “custom models for national security,” and increase intelligence sharing between industry and spy agencies.
The framing of that white paper deserves its own paragraph. OpenAI’s global affairs chief Chris Lehane opened it with a quote from Trump’s executive order on AI, then proposed the US should help “countries who prefer to build AI on democratic rails” commit to “deploy AI in line with democratic principles set out by the US government.” The paper argues that any regulation “may hinder our economic competitiveness and undermine our national security.” The Intercept noted that the word “humanity” does not appear anywhere in the document. The company whose entire brand rests on benefiting humanity submitted a policy document to the Trump White House that never mentioned humanity once.
This is not a company that went slightly off course. This is a company that looked at the political winds, decided where the money was, and pivoted completely. The founding mission is now marketing copy.
🪖 The Pentagon Deal Anthropic Wouldn’t Touch
In February 2026, the Guardian reported that OpenAI had agreed to work with the Pentagon after the Trump administration dropped Anthropic over that company’s ethics concerns.
Read that again slowly.
Anthropic, a company founded by former OpenAI safety researchers who LEFT OpenAI because they were worried about its direction, refused Pentagon work on ethical grounds. The Trump administration responded by cutting Anthropic off and going to OpenAI instead. And OpenAI said yes.
This is the company you’re subscribing to. The one that fills the gap when even Anthropic draws a line.
📢 Your ChatGPT Subscription Funds the Propaganda Machine
The Pentagon’s AI ambitions are not modest. A 2025 document from U.S. Special Operations Command (SOCOM), reviewed by The Intercept, described wanting AI that could “increase the scale of influence operations,” specifically to “control narratives and influence audiences in real time” and “suppress dissenting arguments” among foreign populations. SOCOM explicitly described the information environment as moving “too fast for military members to adequately engage and influence an audience on the internet,” and said AI would solve that problem.
“Suppress dissenting arguments.” In a real-time automated propaganda system. Built with tools like ChatGPT.
You could argue this is about foreign audiences only. The US military is technically prohibited from running domestic propaganda. But as SOCOM itself acknowledged in the document, “the porous nature of the internet makes that difficult to ensure.” The same AI that suppresses dissent in a foreign country gets deployed in an environment where that country’s content circulates on the same platforms as yours.
🕵️ Your Conversations Are the Product
None of the above is hypothetical or paranoid. All of it is documented. And it’s the backdrop against which the more mundane privacy complaint takes on a different weight.
By default, OpenAI uses your conversations with ChatGPT to improve and train its models. The opt-out exists, but it’s buried in the Data Controls settings most subscribers never visit. That means the company with direct intelligence-sharing relationships with US spy agencies is also holding a corpus of your most candid, sensitive, specific conversations as training data.
Your medical questions. Your financial anxieties. Your work documents. Your relationship problems. Your passwords accidentally included in code snippets. All of it is sitting in OpenAI’s systems, subject to their data retention policies, their security infrastructure (which has had publicly acknowledged breaches), and whatever future policy changes they decide to make.
In 2023 a bug temporarily exposed user conversation histories and payment information between accounts. OpenAI patched it and moved on. But the question isn’t whether any single breach is catastrophic. It’s whether you should be building a detailed record of your private thoughts inside a company that has now integrated itself with the national security apparatus.
Oh, and one more thing. ChatGPT started running ads in the US in late 2025. The $20/month you’re paying for “premium” now includes ads. The company valued at over $800 billion decided it needed more revenue from the people already paying $240 a year.
📜 The Stolen Content Nobody Is Talking About
ChatGPT was trained on internet content scraped without consent from publishers, journalists, and creators. Multiple lawsuits are ongoing. The New York Times. The Intercept. Dozens of other outlets.
OpenAI’s position, simplified: we needed this content to build the product, the product benefits society, and anyway the law is unclear.
The legal question is genuinely unsettled. The ethical one is not. OpenAI built a product worth hundreds of billions of dollars on other people’s work, without payment, permission, or credit. Then it turned that content into a system that competes directly with the publications whose journalism it was trained on.
Every ChatGPT subscription is subsidizing a company actively fighting in court for the right to keep doing this.
📉 The Regulatory Flip
In May 2023, Sam Altman testified before Congress and made headlines by calling for AI regulation. Consumer groups and policymakers were cautiously optimistic. Maybe the industry would actually engage with oversight.
Two years later, Altman told a Senate hearing it would be “disastrous” for the US to adopt regulations like Europe’s. The White House’s AI action plan, shaped heavily by OpenAI’s input, included a 10-year moratorium on state-level AI regulation. The moratorium passed the House as part of a budget bill that simultaneously cut Medicare and funneled money to autonomous weapons development.
Forty state attorneys general called it “irresponsible.” Consumer Reports opposed it. A bipartisan coalition of legal experts argued it would strip protections against AI-generated political deepfakes, AI-powered health insurance claim denials, and AI revenge porn.
OpenAI’s position on all of this: any regulation might “hinder economic competitiveness.”
The flip from “please regulate us” to “regulation is disastrous” took exactly as long as it took OpenAI to get big enough that regulation would slow them down.
💸 And the Product Isn’t Even That Good Anymore
After all of the above, here’s the final insult: the product no longer justifies the price on its own merits.
The current competitive landscape:
- Google Gemini Advanced ($20/month, bundled with the 2TB Google One storage many people already pay for, which makes the AI effectively free)
- Claude free tier regularly outperforms ChatGPT Plus on reasoning and nuanced writing in independent benchmarks
- Microsoft Copilot free tier runs GPT-4 class models (Microsoft paid $13 billion for that access)
- Meta AI free, built into apps most people already use
- Ollama with local models completely free, no data leaves your machine, no company gets your conversations
ChatGPT’s advantage in 2026 is almost entirely brand recognition and interface familiarity. The underlying models are no longer uniquely superior. OpenAI rolls out new model access to subscribers and throttles it within weeks when usage spikes. The “unlimited” access has had asterisks on it since basically the day they announced it.
You are paying $240 a year for a product that shows you ads, trains on your data, is deeply entangled with the Trump administration and the Pentagon, lobbied against your state’s ability to protect you from AI harms, and is no longer the best option on the market.
🔄 What I Actually Use Now
For writing and analysis: Claude free tier. Genuinely better than ChatGPT for nuanced reasoning and writing. The context window is large. Anthropic has its own problems, but it did refuse the Pentagon contract.
For code: Locally hosted models via Ollama. Nothing leaves your machine. The gap between local models and frontier models has narrowed to the point where most coding tasks are covered.
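If “nothing leaves your machine” sounds abstract, here is roughly what it looks like in practice: a minimal sketch against Ollama’s local REST API, which listens on localhost:11434 by default. The model name is just an example; use whatever you’ve pulled with ollama pull.

```python
# Minimal fully-local query via Ollama's REST API (default port 11434).
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # illustrative; any locally pulled model works
    "prompt": "Write a Python function that reverses a linked list.",
    "stream": False,     # ask for one complete response instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# The request never leaves localhost; no third party sees the prompt.
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same endpoint works from an editor plugin or a shell script. The point isn’t the snippet; it’s that the prompt never crosses your network boundary.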
For research: Perplexity AI free tier or Google Gemini. Perplexity’s citation behavior is better for factual research.
For image generation: Local Stable Diffusion. Lower quality ceiling than DALL-E 3 for some styles, but your creative prompts stay on your hardware.
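Image generation is the same story. A minimal sketch using Hugging Face’s diffusers library (my toolchain assumption; any Stable Diffusion runner works, and the checkpoint name is just an example):

```python
# Minimal local Stable Diffusion sketch using the diffusers library.
# The checkpoint is illustrative; after the first download it's cached locally.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon

# Prompt, generation, and output all stay on your hardware.
image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("lighthouse.png")
```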
None of these options are perfect. The voice interface quality gap is real. If you’re a professional who has deeply integrated ChatGPT into production workflows, there are genuine switching costs. But for general daily use? The alternatives are good enough that the decision is now purely about whether you want to keep funding OpenAI’s current direction.
I don’t.
Questions to Consider
- At what point does “the product is good” stop being enough of a reason to fund a company’s military contracts, government lobbying, and anti-regulatory campaigns?
- Have you read OpenAI’s actual data retention and training policies recently? Not the summary. The actual policy.
- Anthropic drew an ethical line with the Pentagon and got dropped for it. OpenAI stepped in. Does that distinction matter to you?
- Your $20/month is a vote. What are you voting for?
The cancellation took forty-five seconds. The clarity about who I was funding took longer.
