Thousands of employees are entering sensitive data into ChatGPT, prompting companies to ban or restrict access to the software amid warnings that material sent to powerful internet chatbots could become public.
The figures suggest that more than one in 20 people who use ChatGPT at work have submitted company data to the Microsoft-backed artificial intelligence software.
According to cybersecurity firm Cyberhaven, the percentage of employees posting internal data to ChatGPT has more than doubled in less than a month, from 3.1 percent to 6.5 percent, with the submitted content including regulated health information and personal data.
Organizations are increasingly concerned about the surge in the use of chatbots, as well as the commercial and security implications of potentially sensitive information being routinely “leaked” to remote databases.
Amazon has already warned employees against entering sensitive data into ChatGPT, while banking giant JPMorgan and US cell phone network Verizon have banned employees from using the software altogether.
Samsung, the world’s largest smartphone maker, this week became the latest conglomerate to raise concerns over how its workforce is using ChatGPT, after Korean media reported that workers at the company’s main semiconductor plants had been entering sensitive information, including highly confidential source code, into the chatbot to fix programming errors.
Source code, the foundation of any operating system or software, is one of a technology company’s best-kept secrets. Samsung did not respond to a request for comment but reportedly restricted employee access to ChatGPT and is now developing its own AI chatbot for internal use.
Millions of people have been using ChatGPT since its public launch last November. In addition to answering questions and transforming datasets into usable material in natural, human-like language, it can also generate and debug computer code at phenomenal speed and analyze images.
Legal experts warn that employers urgently need to understand how employees are using this next generation of AI-based software, such as ChatGPT, developed by San Francisco-based firm OpenAI, and competitors such as Google Bard.
There are specific concerns, shared by bodies such as Britain’s GCHQ intelligence agency, that information fed into AI systems could eventually become publicly accessible again, either through the training of future chatbots or as a result of hacks.
OpenAI admits it uses data entered into ChatGPT to “improve our models”. However, the company insists it has taken precautions, including removing information that could identify a person.
OpenAI said in an online statement: “We remove all personal information from the data we plan to use to improve model performance. We also use only a small sample of data per client to improve model performance. We are committed to using appropriate technical and technological controls to protect your data.”
Experts argue that the sudden surge in the use of such chatbots, known as generative AI, could mean that companies and other organizations are breaching regulations such as GDPR privacy rules, and could be held responsible if the information later surfaces in search results or is exposed through hacking by criminal or state-sponsored groups.
Richard Forrest, legal director of Hayes Connor, a law firm that specializes in data breaches, said employees should “assume that anything entered [into AI chatbots] may later become public.”
Describing AI software regulation as “uncharted territory,” Mr. Forrest said: “Companies that use chatbots like ChatGPT without proper preparation and care can unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and possible legal action.”
There is growing concern about the ability to regulate and shape the use of tools such as ChatGPT. Last week, Italy became the first Western country to block ChatGPT after data protection regulators raised privacy concerns.
OpenAI, which last month released GPT-4, a much more powerful version of its existing chatbot, insists it complies with privacy rules.
Cyberhaven, a US company that provides data security services to businesses with valuable intellectual property, including consulting giants and pharmaceutical manufacturers, said that based on an analysis of ChatGPT use by 1.6 million employees, it estimates sensitive data is sent to the chatbot about 200 times a week at a typical company with 100,000 or more employees.
Cyberhaven’s analysis says regulated health and personal information are among six categories of sensitive data known to be sent to the chatbot in these leaks, or outbound events.
The analysis says: “The numbers are still relatively small, but each of these outbound events we have identified could expose critical business data.”
Source: iNews

