A respected researcher at AI giant Anthropic has quit his job, leaving a dire warning that the world is in danger from the misuse of advanced artificial intelligence.
Mrinank Sharma, an AI safety researcher at Anthropic, posted his resignation letter on social media Monday, claiming the 'world is in peril' due to AI advances and related risks such as bioterrorism.
Anthropic builds advanced AI systems like chatbots and tools that can generate text or ideas, including the popular program Claude.
However, Sharma claimed in his letter that he and the AI firm had been pressured to set aside their values in order to prioritize the growth of artificial intelligence.
His job at Anthropic, estimated to come with a salary of more than $200,000, was to lead a team focused on 'AI safety,' which means figuring out ways to make sure AI doesn't cause harm to the people using it.
For example, Sharma noted he had helped create defenses so that AI couldn't be used by bad actors to make dangerous substances such as biological weapons.
He also studied problems like 'AI sycophancy,' where AI chatbots might overly flatter or agree with users in ways that could manipulate them and distort people's sense of reality.
'We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,' Sharma wrote in his letter.
Mrinank Sharma resigned from his role as an AI safety scientist at Anthropic, warning that unchecked AI research was putting the world in danger.
Sharma resigned effective immediately, stepping away from his high-profile role at Anthropic after nearly three years.
The California resident had studied at both the University of Oxford and the University of Cambridge, earning a master's degree in engineering and machine learning.
However, the AI safety expert said a mix of interconnected global problems, including wars, pandemics, climate change and AI's unchecked growth, influenced his decision to quit.
Sharma expressed fears that powerful AI programs were making it easier for scientists to formulate bioweapons which could spread disease around the globe.
Without proper regulations on AI's usage, these advanced tools can quickly answer tough biology questions and even suggest genetic changes to make viruses more contagious or deadly.
Because large language models like ChatGPT are trained on millions of scientific papers, AI could potentially provide step-by-step instructions for creating new bioweapons or help bypass safety checks at DNA-synthesis services.
Sharma also pointed to AI's ability to mess with people's minds, giving the public answers so tailored to each person's views that they warp decisions and undermine independent thought.
'I continuously find myself reckoning with our situation. The world is in peril. And not just from AI,' the former Anthropic scientist declared in his letter shared on X.
Sharma's post on X has been viewed more than 14 million times as of Thursday. The self-described poet said his next career move would be work where he could contribute in a way that 'feels fully in my integrity.'
Anthropic is an AI company founded in 2021 by seven former employees from OpenAI, the firm that created ChatGPT.
That group included siblings Dario Amodei, Anthropic's CEO, and Daniela Amodei, its president, who said they left over concerns about OpenAI's lack of focus on safety and wanted to create reliable, interpretable AI systems that prioritize human well-being.
The company's main products are the Claude family of AI models, which include chatbot assistants for coding and other personal and professional tasks.
Anthropic reportedly holds roughly 40 percent of the market for AI assistants, with its annual revenue estimated at $9 billion.
However, Dario Amodei has publicly advocated for imposing stronger regulations on all AI systems, testifying before the US Senate in 2023 on the principles of oversight for this new technology.
Amodei recently pushed for thoughtful federal standards to replace wide-ranging state laws that regulate the use of AI in the US.