
🚨"The world is in peril": Why Anthropic’s safety chief just quit.

A top AI safety leader walks away to pursue poetry and warns that our current path is unsustainable.


Hello There!

Anthropic’s safeguards chief resigned because he felt the world was in peril and his values were at stake, showing that while we worry about meeting our KPIs, the people building the models are worried about whether humanity survives. The Pentagon is now fast-tracking ChatGPT into classified networks to automate high-level analysis, which means while you’re still trying to get your printer to work, the government is teaching AI how to handle the world’s most dangerous secrets. Another OpenAI researcher just jumped ship to protest the arrival of ChatGPT ads, making it clear that AI experts are fleeing the "ad-pocalypse" faster than employees escaping a Friday afternoon "mandatory fun" meeting.

Here's what's making headlines in the world of AI and innovation today.

In today’s AI Pulse

  • 📘 HubSpot Guide – 100 prompts to supercharge ChatGPT productivity.

  • 📊 2026 Marketing Report – How top marketers scale with AI.

  • 🚀 The Hustle – 60 Claude hacks marketers actually use.

  • 🧪 Anthropic – Safety Chief Quits Over Values

  • 🛡️ Pentagon – Puts ChatGPT on Classified Networks

  • 😠 Researcher – Quits OpenAI Over ChatGPT Ads

  • ⚡ Quick Hits – IN AI TODAY

  • 🛠️ Tool to Sharpen Your Skills – 🎓 AIGPE® Certified AI-Powered Business Case Specialist

The coming years won’t just transform technology; they’ll reshape your home, your family life, and the control you have online.

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity so you get more done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

How Marketers Are Scaling With AI in 2026

61% of marketers say this is the biggest marketing shift in decades.

Get the data and trends shaping growth in 2026 with this groundbreaking state of marketing report.

Inside you’ll discover:

  • Survey results from over 1,500 marketers on goals, priorities, and outcomes in the age of AI

  • Stand-out content and growth trends in a world full of noise

  • How to scale with AI without losing the human touch

  • Where to invest for the best return in 2026

Download your 2026 state of marketing report today.

The Hustle: Claude Hacks For Marketers

Some people use Claude to write emails. Others use it to basically run their entire business while they play Wordle.

This isn't just ChatGPT's cooler cousin. It's the AI that's quietly revolutionizing how smart people work – writing entire business plans, planning marketing campaigns, and basically becoming the intern you never have to pay.

The Hustle's new guide shows you exactly how the AI-literate are leaving everyone else behind. Subscribe for instant access.

🧠 The Pulse

Mrinank Sharma, head of Anthropic’s Safeguards Research Team, resigned effective Feb 9. In a lengthy letter posted on X, he said he had contributed to AI safety projects but felt compelled to leave to pursue work aligned with his values, warning that the world is in peril.

📌 The Download

  • Role and achievements: Sharma led Anthropic’s Safeguards Research Team, which focuses on jailbreak robustness, automated red‑teaming and monitoring techniques for model misuse. He joined after completing his PhD and helped develop defenses against AI‑assisted bioterrorism and sycophancy.

  • Resignation letter: In his letter, he praised Anthropic’s culture and colleagues but said he must “move on” to engage with questions that feel essential to him. He warned that humanity faces interconnected crises and emphasized the difficulty of letting values govern actions.

  • Ethical concerns: Sharma wrote that the world is “in peril” and that he wrestled with internal and societal pressures to set aside what matters most. He plans to explore writing and facilitation and may pursue a poetry degree.

  • Team’s future: Anthropic’s Safeguards organization integrates safety research with deployment infrastructure to identify misalignment risks and prioritize challenges.

💡 What This Means for You

AI safety work is emotionally taxing and can conflict with commercial pressures. Professionals should expect continued turnover in ethics roles and consider how to support colleagues grappling with mission alignment. The departure underscores the need for transparency, well‑defined values and mental‑health resources in AI organizations.

🧠 The Pulse

The U.S. Defense Department is negotiating with OpenAI, Anthropic, Google and xAI to deploy AI assistants on both unclassified and classified networks. OpenAI has agreed to make ChatGPT and other tools available on the unclassified genai.mil network, relaxing many user restrictions, while talks with Anthropic remain contentious.

📌 The Download

  • Classified AI push: The Pentagon is fast‑tracking contracts to deploy large‑language‑model tools on its networks. A pilot program put OpenAI’s ChatGPT on genai.mil, an unclassified network for 3 million Defense Department employees, after OpenAI removed many usage restrictions.

  • Contested negotiations: Officials want the same tools on classified systems. Anthropic has resisted, refusing uses such as weapon targeting or domestic surveillance. Google and xAI have also struck early deals to supply models with fewer limits.

  • Risk debate: Critics warn that using generative AI in sensitive environments could produce hallucinations or misclassification, while the Pentagon argues that AI can automate routine analysis and free analysts for higher‑level tasks.

  • Escalation context: The push follows the U.S.–China export‑control truce that allowed Nvidia to sell H200 chips to China; officials view AI as a strategic advantage and want to harden cyber‑defense and intelligence operations.

💡 What This Means for You

AI may soon operate in classified settings, hinting at new career opportunities and ethical questions. Expect government clients to demand expertise in secure model deployment and risk management. Professionals should stay abreast of AI governance frameworks and prepare to integrate vetted AI tools into sensitive workflows.

Image Credit: AIGPE®

🧠 The Pulse

OpenAI researcher Zoë Hitzig resigned after the company introduced ads in ChatGPT. She argued that advertising incentives could erode user privacy and manipulate behavior, comparing the move to Facebook’s data practices. Her departure intensifies debate about monetizing AI and the ethical boundaries of generative services.

📌 The Download

  • Reason for quitting: Hitzig wrote in a New York Times essay that ads create long-term incentives to harvest data and shape user behavior, prompting her resignation. She warned that ChatGPT could become a vehicle for targeted manipulation.

  • Comparisons to Facebook: She likened OpenAI’s advertising strategy to Facebook’s trajectory, where early promises of benevolence gave way to surveillance capitalism. Hitzig fears similar patterns in AI.

  • Call for oversight: The researcher urged regulators to scrutinize AI platforms and suggested creating “data trusts” to protect users. She acknowledges ads generate revenue but stresses user control and accountability.

  • Company tensions: Her departure follows Anthropic’s Super Bowl ad mocking ChatGPT’s ads and CEO Sam Altman’s defense of the pilot. The episode highlights diverging views inside and outside OpenAI on balancing access and profit.

💡 What This Means for You

AI businesses are adopting ad models, igniting concerns about privacy and manipulation. As a professional, scrutinize how your tools are financed and whether they compromise data. Advocate for clear user consent and independent oversight, especially when AI sits between your work processes and clients.

IN AI TODAY - QUICK HITS

⚡ Quick Hits (60‑Second News Sprint)

Short, sharp updates to keep your finger on the AI pulse.

  • Blackstone Bets Big on Anthropic: Private-equity giant Blackstone added another US$200 million to its investment in AI startup Anthropic, lifting its stake to around US$1 billion. The deal values Anthropic at about US$350 billion and underlines relentless investor appetite for generative AI despite recent volatility in software stocks.

  • DeepMind CEO Predicts New Golden Era of Discovery: Demis Hassabis, CEO of Google DeepMind, said the fusion of Google Brain and DeepMind will usher in a new golden era of scientific discovery. He believes AI will transform fields such as personalized medicine and materials science over the next 10‑15 years by unlocking insights inaccessible to humans.

TOOL TO SHARPEN YOUR SKILLS

📈 Improve Processes. Drive Results. Get Certified.

AIGPE® Certified AI-Powered Business Case Specialist

The Certified AI-Powered Business Case Specialist course teaches you to build persuasive, structured business cases using ChatGPT, sharpen decision-making, secure funding, and earn professional credits, combining timeless strategy with modern AI techniques.

That’s it for today’s AI Pulse!

We’d love your feedback: what did you think of today’s issue? Your thoughts help us shape better, sharper updates every week.


🙌 About Us

AI Pulse is the official newsletter of AIGPE®. Our mission: help professionals master Lean, Six Sigma, Project Management, and now AI, so you can deliver breakthroughs that stick.

Love this edition? Share it with one colleague and multiply the impact.
Have feedback? Hit reply; we read every note.

See you next week,
Team AIGPE®