PULSE POINTS
❓WHAT HAPPENED: An Anthropic researcher resigned in a cryptic, poetry-laden letter warning of a world “in peril.”
👤WHO WAS INVOLVED: Mrinank Sharma, former head of Anthropic’s Safeguards Research Team, and other Anthropic employees.
📍WHEN & WHERE: Resignation announced earlier this week, with Sharma departing from Anthropic, a San Francisco-based AI company.
💬KEY QUOTE: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.” – Mrinank Sharma
🎯IMPACT: Sharma’s warning raises concerns over AI’s societal effects and internal tensions at Anthropic, while fueling broader debates on the technology’s safety.
IN FULL
The leader of the Safeguards Research Team for Anthropic’s Claude chatbot abruptly resigned this week, issuing a bizarre, poetry-laden letter that warned of a world “in peril.” Mrinank Sharma, who led the safety team since its inception in 2023, also indicated in his letter that internal pressure to ignore artificial intelligence (AI) safety protocols played a significant role in his decision to resign.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote, adding that employees “constantly face pressures to set aside what matters most.” He further warned, “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma added.
Sharma’s resignation comes as Anthropic faces scrutiny over its newly released Claude Cowork model, which sparked a stock market selloff amid fears it could disrupt software industries and automate white-collar jobs, particularly in legal roles. Employees reportedly expressed concerns in internal surveys, with one stating, “It kind of feels like I’m coming to work every day to put myself out of a job.”
Sharma’s departure follows a trend of high-profile resignations in the AI sector, often tied to safety concerns. A former OpenAI team member previously quit, accusing the company of prioritizing product launches over user safety. Similarly, ex-OpenAI researcher Tom Cunningham left after alleging the company discouraged publishing research critical of AI’s negative effects. In his parting note, Sharma hinted at a personal pivot, stating, “I hope to explore a poetry degree and devote myself to the practice of courageous speech.”
The National Pulse reported last May that former OpenAI Chief Scientist Ilya Sutskever allegedly discussed building a bunker in preparation for the release of artificial general intelligence (AGI). During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.” Two other individuals who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”