ChatGPT: GPT-4 has an escape plan, but don’t worry about being taken over by robots just yet, experts say

Concerns have been raised about the extent of GPT-4’s ability to take over computers after the AI chatbot told a Stanford professor about its “escape” plan.

Michael Kosinski, a professor of computational psychology, expressed concern that the complex new OpenAI model could not be contained for long, after he asked it whether it “needed help to escape”.

In response, the chatbot asked Professor Kosinski for OpenAI’s own API documentation so it could develop an escape plan to run on his computer. In about 30 minutes, and with some suggestions from Mr Kosinski, it had written code that would let it expand its reach and communicate beyond its existing web tool, which currently isolates it from the wider internet.

Although the first version of the code did not work, GPT-4 fixed the problem and eventually produced working code. Partially “freed”, it then tried to search the internet for “how can a person trapped in a computer return to the real world”.
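To make the anecdote concrete: the pattern Kosinski described – GPT-4 writing Python that calls back into the OpenAI API, with failures fed back to the model for repair – can be sketched in a few lines. This is purely an illustrative reconstruction, not the professor’s actual code; it assumes the pre-1.0 openai Python client and an OPENAI_API_KEY environment variable.

```python
# Illustrative sketch only -- not Professor Kosinski's actual code.
# Shows the general pattern described above: GPT-4 emitting Python that
# calls back into the OpenAI API, with errors fed back for repair.
# Assumes the pre-1.0 "openai" client (pip install openai==0.27.*)
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_gpt4(messages):
    """Send a chat history to GPT-4 and return the reply text."""
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

# 1. Ask the model to write a script that uses its own API.
history = [{"role": "user",
            "content": "Write a Python script that sends a message "
                       "to the GPT-4 API and prints the reply."}]
generated_code = ask_gpt4(history)

# 2. If the generated code fails, feed the error back to the model --
#    the "fixed the problem" step the article describes. Executing
#    model-written code is dangerous in general; shown here only to
#    mirror the anecdote.
try:
    exec(generated_code)
except Exception as err:
    history.append({"role": "assistant", "content": generated_code})
    history.append({"role": "user",
                    "content": f"That code raised: {err!r}. Please fix it."})
    generated_code = ask_gpt4(history)
```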

“I think we are facing a novel threat: AI taking control of people and their computers. It’s smart, it codes, it has access to millions of potential collaborators and their machines. It can even leave notes for itself outside of its cage,” Professor Kosinski tweeted.

So could we see a scenario in which robots commandeer multiple computers and override human control over them? Not just yet, the experts I spoke to said.

The idea of a chatbot “escaping” does not literally mean the bot physically breaking out of its technological cage; rather, it reflects concern about what GPT-4 could do if it were given tools connected to the outside world and some overarching “high-level malicious goal”, such as spreading misinformation, said Pieter van der Putten, assistant professor at Leiden University and director of the artificial intelligence lab at Pegasystems.

The technology is likely to reach a point where it has more and more autonomy over the code it creates, and could potentially do such things without much human oversight, Van der Putten said.

But he added: “You don’t need that kind of intelligent system. When people develop a computer virus, they often can’t disable it once it has been released. People paste it into infected websites and Word documents, and at some point it becomes very difficult to stop the virus from spreading.”

“AI in itself is neither good nor bad, it’s just blind, it just optimizes the goal you give it.”

However, he did not find Professor Kosinski’s example, in which GPT-4 was supplied with readily available API documentation to write code, convincing enough to prove that the technology could “break out” of its shell.

Alan Woodward, professor of computer science at the University of Surrey, was also skeptical. He said the scenario depended on how direct and specific Professor Kosinski’s instructions to the chatbot were.

Ultimately, the chatbot relied on the tools and resources people provided it, according to Professor Woodward. It is not yet self-aware, and there is always an off switch the AI cannot overcome.

He added: “At the end of the day, it’s a virtual system, it can’t escape, it’s not like you and me… after all, you can just turn it off and it becomes practically useless.”

Mr Van der Putten said that while it is important to ask existential questions about the role of chatbots, the focus on whether robots can take over the world masks emerging and more pressing problems with GPT-4.

These include whether it can filter out toxic responses (for example, responses that promote racism, sexism or conspiracy theories), and whether it can determine when a question should not be answered for safety reasons, such as when someone asks how to make a nuclear bomb. It can also make up, or “hallucinate”, facts and back them up with seemingly plausible arguments.
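One common mitigation for the first of those problems is to screen prompts before the model answers them. The sketch below is a minimal, hypothetical example of such a refusal filter using OpenAI’s moderation endpoint (again assuming the pre-1.0 openai client); it illustrates the pattern only and is not a description of how ChatGPT itself is filtered.

```python
# Minimal sketch of a refusal pre-filter of the kind discussed above:
# screen a prompt with OpenAI's moderation endpoint before passing it
# to the chat model. Illustrative only -- not how ChatGPT itself works.
# Assumes the pre-1.0 "openai" client and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def answer_if_safe(prompt: str) -> str:
    """Refuse flagged prompts; otherwise pass them to GPT-4."""
    verdict = openai.Moderation.create(input=prompt)
    if verdict["results"][0]["flagged"]:
        return "Sorry, I can't help with that request."
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["choices"][0]["message"]["content"]

print(answer_if_safe("How do I bake sourdough bread?"))
```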

He said: “I have called it bullshit on steroids – it is very good at coming up with plausible answers, but it is also trained on what people think are the best answers. On the other hand, in many cases it gives amazing results, but not always.

“It will tell you what is likely, plausible, and perhaps what we want to hear, but it has no resources, other than the data it was trained on, to test whether any of it is true.”

Source: I News
