The big tech news this week is that Geoffrey Hinton, a computer scientist and cognitive psychologist and one of the most famous experts in the field of artificial intelligence, has decided to quit his job at Google to protect the world from…
…what, exactly? If you read the news reports on the subject, you’ll get a pretty consistent story. Hinton is concerned about the “danger of misinformation”, wrote The Guardian. The Financial Times said Hinton’s story was about “misinformation flooding the public” and “AI taking jobs”. Time focused on Hinton’s concerns about AI taking over jobs and “spreading misinformation”. Sky News noted that Hinton is concerned about AI eroding privacy and jobs. CNN likewise told us that he is wary of “the tendency of chatbots to spread misinformation and crowd out jobs”. The Irish Times mentioned only that misinformation bothered him.
All of these stories miss, or downplay, the much bigger alarm that Hinton is raising. Yes, Hinton acknowledges that AI will spread disinformation and endanger people’s jobs, and that this will cause serious problems in the near future. But that’s not his main point.
His main argument is that AI could destroy humanity as we know it.
Hinton is concerned that if AI becomes much smarter than humans, which now seems very likely, if not inevitable, we will be in a position similar to that of far less intelligent species, like chimpanzees, relative to us: they cannot begin to understand our motives, and we could easily wipe them out if we wanted to.
Our motives are currently “aligned” with chimpanzee survival, but that doesn’t mean they always will be. Just ask the dodo whether humanity’s motives were tied to its survival. Oh wait: you can’t. And that’s the point.
Think about one possible future. As we hand more and more responsibility to AI, giving it access to transportation, power plants, financial transactions, even weapons systems, we will see all sorts of benefits, but we will also face a growing risk of ending up in the chimpanzee’s position. That is, we risk putting ourselves at the mercy of a very capable mind that we don’t really understand and that may not see our survival as a particular priority. It might even conclude that we are a net negative for the world, and act on that conclusion.
It’s a scary thought, straight out of sci-fi films like The Terminator. And now a genuinely pioneering AI expert says he’s seriously concerned that it could actually happen. How is it possible that so many media outlets missed such an important message?
Part of the problem is Hinton’s own hesitant, jargon-filled communication style. Take, for example, the interview he gave to the BBC on Tuesday. He begins by noting that others have discussed the (very real) problems of misinformation and disinformation, problems “associated with the ability to automatically generate large amounts of text… [and] lett[ing] authoritarian leaders manipulate their electorates”.
But, he said, “I’m not really talking about that. There’s something specific I want to talk about.”
Great! Perhaps he’s about to spell out the big problem? He started well, citing “the existential risk of these things becoming smarter than us”. But then: “The big difference [from human intelligence] is that with digital systems you have many copies of the same set of weights, of the same model of the world, and all these copies can learn separately but share their knowledge instantly…”
The same set of… the what now? Forgive the digression, but the average person won’t know what “weights” are. (Briefly: they are part of the process by which an AI learns. Underneath, an AI is just an enormous set of statistical equations whose terms are “weighted” according to the strength of the relationships between things, such as words or facts, in the data it was given.)
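To make “weights” slightly more concrete, here is a toy sketch in Python. The numbers are invented purely for illustration; a real model has billions of weights, but the basic arithmetic is the same:

```python
# Toy illustration of "weights": a model's output is just its inputs
# multiplied by learned numbers (the weights) and added together.
inputs = [1.0, 0.5, 2.0]     # features derived from the data
weights = [0.2, -0.4, 0.9]   # numbers the model has "learned"

# Weighted sum: each input is scaled by how strongly it matters.
output = sum(x * w for x, w in zip(inputs, weights))
print(output)  # 1.0*0.2 + 0.5*(-0.4) + 2.0*0.9 = 1.8
```

Training an AI means nudging those weights, over and over, until the outputs match the patterns in the data; the point Hinton is making is that thousands of digital copies can share one such set of weights instantly, which no group of humans can do with their brains.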
By moving so quickly into jargon, Hinton ensured that the vast majority of the public would tune out the big message he was trying to convey. In any event, the BBC broadcast only a short extract of his interview, so any fuller account of his real fears is now much harder to find.
The idea of existential risk from AI is an entirely new one for many people. While a small group of researchers has tried to warn of the danger for decades, it has only come to the forefront of discussion with the recent, startling advances in the technology.
Not only that: in recent years “misinformation and disinformation” has become an obsession in some parts of the media (the BBC, for example, now has several “disinformation reporters”). With that in mind, it’s understandable why so much of the coverage focused on these aspects of AI risk rather than on Hinton’s far more terrifying long-term worry.
This is not to say that everyone in the AI world agrees with Hinton: some argue that the whole issue is overblown and that we shouldn’t worry so much about what remains a hypothetical risk. However, others have recently updated their views towards greater concern about the existential threat posed by AI. Hinton is one of the latter: he recently concluded that AI could easily overtake humans in intelligence.
While researchers disagree on how likely an AI apocalypse really is, many agree that scientists should slow or even pause AI development (as an open letter signed by Elon Musk and others in March 2023 urged), so that more effort can go into working out how to align AI’s goals with those of humanity. But with startlingly fast progress across every area of AI happening each week, that seems a somewhat forlorn hope.
It is a very strange feeling to watch a calm, sober expert discuss the serious possibility of an existential threat to humanity, and to see that part of his message buried by most of the media. Whatever you think about the development of AI, better communication between humans, rather than between chatbots, is urgently needed.
Not everyone got it so wrong. Although The Telegraph’s headline mentioned only the danger of misinformation, its story’s first paragraph mentioned “killer robots”. The Evening Standard spoke of “the end of mankind”. But The Sun was closest to the mark: its headline referred explicitly to Hinton’s fears about “killer robots” and his concerns about “technology that could destroy us”. Headlines that sound like panic and hype are, for once, appropriate.
Source: I News

With a background in journalism and a passion for technology, I am an experienced writer and editor. As an author at 24 News Reporter, I specialize in writing about the latest news and developments within the tech industry. My work has been featured on various publications including Wired Magazine and Engadget.