Bombshell twist in death of AI whistleblower found dead by 'suicide'

Source: Daily Mail Online

The mother of the AI whistleblower found dead of an apparent suicide just weeks after speaking out has revealed that her son 'felt AI is a harm to humanity.'

Suchir Balaji, 26, was found dead in his San Francisco home on November 26, three months after he accused his former employer OpenAI of violating copyright laws in its development of ChatGPT.

Now, Balaji's mother is demanding that police reopen their investigation into his death, saying it 'doesn't look like a normal situation.'

'We want to leave the question open,' his mother, Poornima Ramarao, told Business Insider in an interview to discuss the troubled engineer's final months.

'He felt AI is a harm to humanity,' Ramarao said.

The young tech genius had joined OpenAI believing in its potential to benefit society, particularly drawn to its open-source philosophy.

But his mother revealed his perspective dramatically shifted as the company became more commercially focused following ChatGPT's launch.

She described the horrifying moment she realized her son was dead when she saw a stretcher arrive at his San Francisco apartment.

'I was waiting to see medical help or nurses or someone coming out of the van,' she said.

His death came just months after he resigned from OpenAI over ethical concerns and weeks after being named in The New York Times' copyright lawsuit against the company.

In August, he left OpenAI because he 'no longer wanted to contribute to technologies that he believed would bring society more harm than benefit,' the Times reported.

'If you believe what I believe, you have to just leave,' Balaji had told the Times in one of his final interviews.

Police initially ruled the death a suicide and told Ramarao that CCTV footage showed Balaji had been alone.

However, his parents arranged for a private autopsy - completed in early December - which they say produced concerning results.

The family is working with an attorney to urge San Francisco police to reopen the case for a 'proper investigation.'

Over the past two years, companies like OpenAI have been sued by various individuals and businesses claiming infringement of their copyrighted material.

Balaji's role and knowledge were considered 'crucial' to legal proceedings against the company.

The New York Times filed its own lawsuit against OpenAI and its primary partner, Microsoft, alleging that they used millions of its published articles to train their AI, which now competes with the outlet as a result; both companies deny the claims.

On November 18, the outlet filed a letter in federal court that named Balaji as a person with 'unique and relevant documents' that would be used in its litigation against OpenAI.

Their suit said: 'Microsoft and OpenAI simply take the work product of reporters, journalists, editorial writers, editors and others who contribute to the work of local newspapers - all without any regard for the efforts, much less the legal rights, of those who create and publish the news on which local communities rely.'

While other researchers have warned of potential future risks of the technology, Balaji told the Times that he believed the risk was far more 'immediate' than feared.

'I thought that AI was a thing that could be used to solve unsolvable problems, like curing diseases and stopping aging,' he said.

'I thought we could invent some kind of scientist that could help solve them.'

Balaji said he believed chatbots such as ChatGPT were destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train such systems.

'This is not a sustainable model for the internet ecosystem as a whole,' he said.

While OpenAI, Microsoft and other companies have claimed their use of internet data to train the technology falls under 'fair use', Balaji did not believe the criteria had been met.