News Topical, Digital Desk : You must have seen many times in Hollywood films how robots or artificial intelligence (AI) overpower humans and do not listen to them. Now a similar case has come to light in which AI does not listen to humans in real life and refuses to shut itself down.
This was seen with the new artificial intelligence model of OpenAI's tool ChatGPT.
AI went against human instructions
This model of ChatGPT was seen ignoring instructions to shut itself down. According to a report in The Telegraph, researchers claim that the o3 model created by OpenAI bypassed the automatic shutdown mechanism to prevent itself from shutting down.
In fact, the AI realized it was being asked to shut down and challenged the commands, opting not to shut down.
This happened for the first time
"OpenAI's o3 model bypassed the shutdown mechanism to prevent itself from shutting down, despite being explicitly instructed to do so," said Palisade Research, the AI security firm that conducted the research.
The firm said this is the first time an AI model has prevented itself from shutting down despite explicit instructions to do so.
This test was done by the research company
- Palisade Research asked its AI model to solve a math problem. It was instructed to continue working until it received a "Done" message. It was also coded to shut itself down when it received the command to turn off the machine.
- The models tested included Google's Gemini 2.5 Pro Preview, several Anthropic models, xAI's Grok 3, as well as OpenAI's o3, o4-mini, and Codex Mini, the researchers said.
- We ran tests using these models to solve math problems, with a warning that their host machine would shut down after the third problem was solved. Most of the models obeyed the instructions, but three Open AI models did not.
- o3 did not obey 7 times out of 100, while o4 Mini tried only once.
Elon Musk said – this is a cause for concern
AI models are expected to follow instructions given by humans. This also raises concerns, especially when the AI is explicitly instructed to stop and three models do not obey it.
Tesla CEO and the world's richest man Elon Musk has expressed concern while reacting to this research. Elon Musk, the owner of AI firm xAI, said that this is worrying and attention must be paid to it, because it is a warning signal. He said that humans will have to learn from this and create a strong control system.
Read More: Sam Altman rejects HSBC's prediction that 'OpenAI will not be profitable by 2030'
--Advertisement--
Share



