
Experts say ChatGPT and other artificial intelligence bots will harm democracy worldwide if not regulated by governments

Leading artificial intelligence (AI) experts have called for AI technologies such as the Microsoft-backed ChatGPT to be regulated so that they do not pose the risk of “serious consequences” for human rights.

The experts warn that, without international agreements to govern the technology, even politicians in democracies such as the UK and the US could use rapidly advancing AI to sow division in society, while authoritarian states such as Russia and China could use it to exercise still greater control over their citizens.

Professor Michael Osborne, Dyson Professor of Machine Learning at Oxford University, said: “AI poses a threat to democracy because much of the information we consume today is transmitted digitally through social media.

“Generative models and large language models allow for endless streams of misleading, politically motivated text that is easily targeted at sub-populations.

“This could have serious implications for democracy. These models can reinforce polarization and destabilize the current social order.”

Professor Osborne said regulators around the world need to act now.

“ChatGPT was launched at the end of November, and it is already estimated to have about 670 million users worldwide,” he said.

“By some estimates, it is the fastest-growing consumer app of all time, and it is happening without government oversight. And, of course, it has drawn all the other big technology players into the race.

“Regulators seem to be asleep at the wheel, if you ask me, because these models that are just coming out have all these potential flaws.”

Haydn Belfield, academic project manager at the Centre for the Study of Existential Risk at the University of Cambridge, added that authoritarian leaders around the world, including Russian President Vladimir Putin and Chinese President Xi Jinping, could use AI technology to tighten control over their citizens and to meddle in elections in democratic states.

“Authoritarians around the world are already abusing AI to control their citizens,” Belfield said. “As surveillance, smart policing and the use of things like drones become more prevalent, they may have even more ways to control their populations.”

Professor Osborne added: “Using AI, a totalitarian regime can introduce ever more comprehensive and powerful surveillance.

“Combine that with the kind of manipulation these AI models enable: the more data you gather about your population, the better you can target your propaganda at individuals.

“So it gives you a tool to watch what your population is doing and to identify behavior that might not be in the interests of the regime.”

Prof. Osborne and Mr. Belfield’s warnings followed a blog post by Microsoft co-founder Bill Gates on Tuesday, in which he also urged governments to reduce the risks associated with AI.

Google has ceded first-mover advantage in AI language modeling to the Microsoft-backed ChatGPT (Photo: Andrew Kelly/Reuters)

“There is a threat of people armed with AI,” wrote Gates, who remains an adviser at Microsoft. “Like most inventions, artificial intelligence can be used for good or for evil. Governments must work with the private sector to find ways to limit the risks.”

Mr. Gates, co-chair of the Bill and Melinda Gates Foundation, added: “Then there’s the chance that AI could run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

On Tuesday, Google began rolling out its AI chatbot, which it calls Bard, to counter Microsoft’s early advances in the latest technology battlefield.

Google’s parent company Alphabet on Tuesday began allowing more people to interact with Bard by opening a waiting list to use an artificial intelligence tool that is similar to the ChatGPT technology that Microsoft announced last month on its Bing search engine. Previously, Bard was only available to a small group of “trusted testers” selected by Google.

Source: I News

