Categories: Technology

The 9 Biggest AI Threats Government Experts Name: From Disinformation to Cyberattacks

The government has published a series of reports on the risks posed by artificial intelligence (AI), from new sophisticated cyber attacks to people losing control of the technology.

Rishi Sunak will host next week’s AI Safety Summit at Bletchley Park, where he will join other world leaders for talks on a regulatory framework to control the new technology.

He is expected to praise “the opportunities for a better future that AI can bring” and promise the public that the government will “give you confidence that we will keep you safe.”

Here are some of the key warnings from the government’s official AI risk assessment:

AI models already often act in ‘unexpected and potentially dangerous’ ways

Because AI is still in its early stages of development, there is widespread concern that systems may function in ways beyond their intended purpose, which could have negative consequences.

“Although we can limit an AI’s behavioral repertoire (for example, restricting text generation to a limited vocabulary), this limits performance and is therefore not competitive; and AI systems often use their behavioral repertoire in unexpected ways, producing unexpected and potentially dangerous results,” the report warns.

It also says that preventing AI systems from pursuing unintended goals, the so-called specification problem, is an “unsolved research problem.”

“It is typically impossible to fully express complex behavior, concepts, or goals directly in code. An AI must therefore learn indirectly, and only approximately, which behavior is desirable or undesirable; this potentially leads to specification and security gaps,” the report states.

There is still much work to be done to ensure the safety and proper regulation of AI.

In the report, the researchers note that many in the artificial intelligence industry argue the technology should be treated as a general-purpose technology like electricity, and regulated in the same way as industries such as aviation, healthcare and nuclear science.

There are already concerns that safety standards are “still at an early stage” and that greater global coordination is needed as models are often developed in one country and then applied in another.

“Security testing and evaluation of advanced artificial intelligence is ad hoc, without established standards, scientific evidence, or technical best practices,” the report said.

Additionally, the report states that given the fierce competition in the industry, AI developers have no incentive to reduce risk when developing new systems because “they do not bear the full cost” of any problems.

“In such scenarios, it may be difficult for even AI developers to unilaterally commit to strict security standards without putting themselves at a competitive disadvantage,” the report said.

People may become “irrevocably dependent” on AI

The summary of a government report on artificial intelligence warns that in the future, people may become overly dependent on technology in a way that cannot be reversed.

It states: “As AI capabilities increase, humans grant AI more control over critical systems and end up irrevocably dependent on systems they do not fully understand, and in which errors and unexpected results cannot be controlled.”

Experts also highlight the dangers of relying on AI, as there are serious consequences if “the systems are not properly aligned.”

“AI systems may increasingly push society in directions that are contrary to its long-term interests, even if the AI developer has no intention of doing so. Even if many people understand that this is happening, it will be difficult to stop,” the report says.

AI could be used for massive fake news campaigns to influence elections

One of the biggest risks associated with AI is disinformation—false information deliberately spread by bad actors—as fake images, videos and claims can be produced more cheaply and quickly.

“Deepfakes created using artificial intelligence are becoming extremely lifelike, meaning they often cannot be detected by individuals, or even by institutions with advanced detection technologies,” the report said.

“Even if AI-generated content does not fool everyone, its strategic use can cause disruption, confusion and a loss of trust.”

There are concerns that such campaigns could be used to influence general elections, persuade people on political issues and incite unrest.

There also remains the risk of unintentional bias in AI-generated content, since what the models produce reflects the data they were trained on and may reproduce existing biases or stereotypes in society.

“These often subtle and deeply embedded biases threaten the fair and ethical use of AI systems and undermine AI’s potential to make decision-making fairer,” the report said.

Artificial intelligence is already radicalizing people or encouraging them to do harmful things

Misinformation, by contrast, involves reproducing falsehoods without malicious intent; AI is prone to “hallucinations,” in which it unintentionally fabricates information.

In the report, experts warn that this could reduce “overall trust in real information” online, as well as lead to “misguided decisions” by individuals and companies who rely on incorrect information.

Another problem with bias and misinformation from AI is that it can radicalize some people or encourage them to do harmful things.

“There have been examples of AI hallucinating harmful information, unintentionally radicalizing people, and encouraging users to take harmful actions as an unintended consequence of the model’s design,” the report said.

“The long-term implications, particularly as breakthrough artificial intelligence becomes increasingly integrated into mainstream applications and made more accessible to children and vulnerable people, are highly uncertain.”

AI-generated images and videos can be used by scammers and child abusers

AI’s ability to create lifelike images, often indistinguishable from photographs, has broader implications.

In its summary of findings, the government concluded that criminals’ use of the technology was “very likely to increase the frequency and sophistication” of crimes such as fraud and the creation of child sexual abuse images.

It also says criminals are already using AI to generate voices and clone faces with the intention of “violating privacy and human rights.”

One scam that has already been identified involves using artificial intelligence to “simulate trusted voices” to pressure targets into complying.

AI could help hackers and increase the threat of cyber attacks

In addition to the increased risk of fraud, there are concerns that AI could be used to carry out more sophisticated cyber attacks.

“Potentially, anyone could use artificial intelligence systems to carry out faster, more efficient, and larger-scale cyber intrusions through tailored phishing techniques or malware replication,” the report warns.

There have also been examples of AI being used to create computer viruses that “morph over time to evade detection,” a capability that would otherwise require significant human expertise.

“Modern tactics often require human effort, which could be replaced by more advanced artificial intelligence systems, making powerful cyberattacks far more scalable,” the report said.

The report also warns that advances in AI could lead to systems launching cyberattacks of their own accord, and that work is still underway to determine whether this is possible.

However, on the bright side, AI can also be used for cyber defense as it can help detect anomalies in systems and scan security controls.

AI could be used to help attackers plan terrorist attacks

One of the report’s biggest warnings is that by 2025, AI could be used to “improve terrorists’ capabilities in propaganda, radicalization, recruitment, funding streams, weapons development and attack planning.”

It also explicitly warns that AI “could be used for malicious purposes, such as developing biological or chemical weapons,” but acknowledges that the extent of this risk is disputed by experts.

The report explains that AI poses this risk because it has proven effective at generating instructions for laboratory work that “could potentially be used for malicious purposes,” and has in some cases even provided guidance on using biological or chemical materials.

Humans may one day lose control of artificial intelligence systems

Another major concern is that humans will increasingly leave decision-making to AI and that at some point we may lose control over it completely.

“Some experts fear that future advanced artificial intelligence systems will seek to increase their influence and limit human control, with potentially disastrous consequences,” the report says.

The report acknowledges that “the likelihood of these risks remains controversial” and that whether AI could wrest control from humans is hotly contested.

“However, many experts fear that loss of control over advanced general-purpose artificial intelligence systems is a real possibility and that the loss of control could be permanent and catastrophic,” the report said.

Source: I News

Published by
Christine
