The UK is lagging behind in artificial intelligence (AI) regulation, according to two of the country’s top tech experts.
Haydn Belfield, a project leader at the Centre for the Study of Existential Risk at the University of Cambridge, says the UK is not moving fast enough to protect people and businesses from the consequences of rapid improvements in the software.
Mr Belfield said: “To avoid a patchwork of different rules across the EU, the EU should adopt its law on artificial intelligence as soon as possible – and make sure it includes ‘general-purpose AI systems’ like ChatGPT, GPT-4, Bard and Bing.
“For us in the UK, we need to move much faster in AI regulation – we are behind our closest allies: the US and the EU.”
His comments come after Italy’s decision on Friday to ban the ChatGPT AI bot. The UK government has no plans to follow Italy’s lead and ban the bot.
Professor Michael Osborne, Dyson Professor of Machine Learning at the University of Oxford, is also concerned about the lack of government intervention.
“Regulators seem to be asleep at the wheel, if you ask me, because these models have all these potential downsides that exist right now,” he said.
Microsoft founder Bill Gates has also called for regulation of AI as the technology permeates more and more areas of people’s lives and work.
“There is a threat posed by people armed with AI,” said Gates, who remains an adviser to Microsoft. “Like most inventions, artificial intelligence can be used for good or for evil. Governments must work with the private sector to find ways to limit the risks.”
Italy has imposed an effective ban on the AI chatbot ChatGPT after accusing its developer, OpenAI, of the “illegal collection of personal data”.
The country’s national data protection regulator has ordered OpenAI, which is backed by tech giant Microsoft, to immediately stop collecting data from Italian users until it changes its data collection practices.
Last week, the UK’s newly created Department for Science, Innovation and Technology laid out five principles that regulators should consider “in order to best promote the safe and innovative use of AI in the sectors they oversee.”
The principles are intended to ensure that AI applications operate in a safe, secure and trustworthy manner with careful risk management; that organisations developing and deploying AI do so openly and transparently; and that AI complies with applicable UK law, including the Equality Act and/or the UK General Data Protection Regulation.
The principles also aim to ensure that there is adequate oversight of how AI is used and that users have a clear means of challenging harmful AI-generated results or decisions.
Source: iNews
