Amazon and other companies are restricting ChatGPT as employees enter sensitive data into an AI chatbot.

Thousands of employees are inserting sensitive data into ChatGPT, prompting companies to ban or restrict access to the software, warning that material sent to powerful internet chatbots could potentially become public.

The numbers show that more than one in 20 people who use ChatGPT at work have sent company data to the Microsoft-backed artificial intelligence software.

According to cybersecurity firm Cyberhaven, the percentage of employees posting internal data on ChatGPT more than doubled in less than a month, from 3.1 percent to 6.5 percent, with material sent including regulated health information and personal information.

Organizations are increasingly concerned about the surge in the use of chatbots, as well as the commercial and security implications of potentially sensitive information being regularly “leaked” to remote databases.

Amazon has already warned employees against inserting sensitive data into ChatGPT, while banking giant JPMorgan and US cell phone network Verizon have banned employees from using the software altogether.

Samsung, the world’s largest smartphone maker, became the latest conglomerate this week to worry about how its workforce is using ChatGPT, after Korean media reported that workers at the company’s main semiconductor plants were entering sensitive information, including highly sensitive source code, to fix programming errors.

Source code, the foundation of any operating system or software, is one of a technology company’s best-kept secrets. Samsung did not respond to a request for comment but reportedly restricted employee access to ChatGPT and is now developing its own AI chatbot for internal use.

Millions of people have been using ChatGPT since its public launch last November. In addition to answering questions and transforming datasets into usable material in natural, human-like language, it can review and generate computer code at phenomenal speed and interrogate images.

Legal experts warn that employers urgently need to understand how employees are using this next generation of AI-based software, such as ChatGPT, developed by San Francisco-based firm OpenAI, and competitors such as Google Bard.

There are specific concerns, shared by bodies such as Britain’s GCHQ intelligence agency, that information fed into AI systems could eventually re-enter the public domain, for example through the chatbots “learning” from the material submitted to them.

OpenAI admits it uses data entered into ChatGPT to “improve our models”. However, the company insists it has taken precautions, including removing information that could identify a person.

OpenAI said in an online statement: “We remove all personal information from the data we plan to use to improve model performance. We also use only a small sample of data per client to improve model performance. We are committed to using appropriate technical and technological controls to protect your data.”
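For illustration only: the kind of precaution OpenAI describes, removing information that could identify a person, can be pictured as a scrubbing step applied to text before it is used. The sketch below is a deliberately crude, hypothetical example using regular expressions; it is not OpenAI’s method, and real de-identification pipelines are far more sophisticated.

```python
import re

# Hypothetical illustration only: a crude, regex-based scrub of obvious
# personal identifiers (emails, phone numbers) from free text. It does not
# catch names or other identifiers; real pipelines go much further.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> "Contact Jane at [email removed] or [phone removed]."
```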

Experts argue that the sudden surge in the use of these chatbots, known as generative AI, could mean that companies and other organizations are violating regulations such as GDPR privacy rules, and could be held responsible if the information later resurfaces in search results or is exposed by a hacking operation of criminal or state-sponsored groups.

Richard Forrest, legal director of Hayes Connor, a law firm that specializes in data breaches, said employees should assume that anything they enter “[into AI chatbots] may later become public.”

Describing AI software regulation as “uncharted territory,” Mr. Forrest said: “Companies that use chatbots like ChatGPT without proper preparation and care can unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and possible legal action.”

There is growing concern about the ability to regulate and shape the use of tools such as ChatGPT. Last week, Italy became the first Western country to block ChatGPT after data protection regulators raised privacy concerns.

OpenAI, which last month released GPT-4, a much more powerful version of its existing chatbot, is pushing for privacy rules.

Cyberhaven, a US company that provides data security services to companies with valuable intellectual property, including consulting giants and pharmaceutical manufacturers, said that, based on an analysis of ChatGPT use by 1.6 million employees, it estimates sensitive data is sent to the chatbot about 200 times a week at a typical company with 100,000 or more employees.

Cyberhaven’s analysis says health and personal information are among six categories of sensitive data known to be sent to the chatbot, and account for most of the leak, or “egress”, events.

The analysis says: “The number is still relatively low, but each of these outbound events that we have identified could be the cause of critical business data exposure.”
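To make the idea of an “outbound event” concrete: data-loss-prevention tools of the kind Cyberhaven sells typically match outgoing text against patterns associated with regulated or secret data. The toy sketch below is not Cyberhaven’s method; it shows one hypothetical way a prompt bound for an external chatbot could be flagged before it leaves the company.

```python
import re

# Toy illustration (not Cyberhaven's product): scan text headed for an
# external chatbot and report which rules it trips. The patterns are
# simplistic stand-ins for real detection rules.
RULES = {
    "possible US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def flag_outbound(text: str) -> list[str]:
    """Return the names of rules that match the outgoing text."""
    return [name for name, rule in RULES.items() if rule.search(text)]

prompt = "Please summarise patient 123-45-6789's chart for me."
hits = flag_outbound(prompt)
if hits:
    print("Blocked outbound prompt, matched:", ", ".join(hits))
```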

Source: I News
