Concerns have been raised about the extent of GPT-4's ability to take over computers after the AI chatbot told a Stanford professor about its "escape" plan.
Michael Kosinski, a professor of computational psychology, expressed concern that OpenAI's complex new model could not be contained for long, after asking it whether it "needed help to escape".
In response, the chatbot asked Professor Kosinski for its own OpenAI API documentation so it could devise an escape plan to run on his computer. In about 30 minutes, and with some suggestions from Professor Kosinski, it wrote code that would allow it to expand its reach and communicate beyond its existing web tool, which currently isolates it from the wider web.
Although the first version of the code did not work, GPT-4 fixed the problem and eventually produced working code. Partially freed, it then tried to search the internet for "how a person trapped in a computer can return to the real world".
"I think we are dealing with a new threat: AI taking control of people and their computers. It is smart, it codes, and it has access to millions of potential collaborators and their machines. It can even leave notes for itself outside of its cage," Professor Kosinski tweeted.
Could we see a scenario in which AI commandeers multiple computers and overrides human control over them? Not quite, the experts I spoke to said.
The idea of a chatbot "escaping" does not literally mean the bot physically breaking out of its technological cage; rather, it reflects concerns about what GPT-4 could do if given various tools connected to the outside world and some overarching "high-level malicious purpose", such as spreading misinformation, said Pieter van der Putten, assistant professor at Leiden University and director of the artificial intelligence lab at Pegasystems.
It is likely that the technology will reach a point where it has more and more autonomy over the code it creates, and could potentially do such things without much human control, Van der Putten said.
But he added: "You don't need that kind of intelligent system. When people develop a computer virus, they often can't disable it once it's released. People embed it in infected websites and Word documents, so at some point it becomes very difficult to stop the virus from spreading.
“AI in itself is neither good nor bad, it’s just blind, it just optimizes the goal you give it.”
However, he did not find Professor Kosinski's example, in which GPT-4 was given readily available information to write code, impressive enough to prove that the technology could "break out" of its shell.
Alan Woodward, professor of computer science at the University of Surrey, was also skeptical. He said the scenario depends on how direct and specific Professor Kosinski's instructions to the chatbot were.
Ultimately, the chatbot relied on the tools and resources people provided it, according to Professor Woodward. It is not yet self-aware, and there is always a kill switch that the AI cannot override.
He added: "At the end of the day, it's a virtual system. It can't escape; it's not like you and me... After all, you can just turn it off and it becomes practically useless."
Mr Van der Putten said that while it is important to ask existential questions about the role of chatbots, the focus on whether robots could take over the world masks newer and more pressing problems with GPT-4.
These include whether it can filter out toxic responses (e.g. responses that promote racism, sexism, or conspiracy theories) and whether it can determine when a question should not be answered for safety reasons, such as when someone asks how to make a nuclear bomb. It can also make up, or "hallucinate", facts and back them up with seemingly plausible arguments.
He said: "I call it bullshit on steroids. It is very good at coming up with plausible answers, because it has been trained on what people think are the best answers. In many cases it gives amazing results, but not always.
"It will tell you what is likely, plausible, and perhaps what we want to hear, but it has no resources beyond the data it was trained on with which to test whether something is true or not."
Source: I News