ChatGPT Confirms Data Breach, Raising Security Concerns


By Syed Shaharyar Raza Razvi · Published about a year ago · 3 min read

The popularity of ChatGPT and similar chatbots has brought cybersecurity concerns along with it. When the technology was first released, the worry was that it could be used to launch cyberattacks, and threat actors soon figured out how to bypass its safety checks and get ChatGPT to write malicious code. Now the tables have turned: the technology itself has become a target of cyberattacks.

OpenAI, the company that developed ChatGPT, recently confirmed a data breach caused by a vulnerability in an open-source library the service relies on. The flaw allowed some users to see titles from other active users' chat histories. OpenAI patched the bug within days of discovery, but the incident highlights the risks associated with the use of chatbots.

Open-source libraries supply readily accessible, frequently used routines and resources—classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, type specifications, and values—that developers build their applications on. OpenAI uses Redis to cache user information for faster recall and access. Because thousands of contributors develop and maintain open-source code, vulnerabilities can easily go unnoticed, and threat actors know it: attacks on open-source libraries have increased by 742% since 2019.
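To make that caching layer concrete, here is a minimal, hypothetical sketch of storing per-user data in Redis with Python's redis client. It is not OpenAI's code; the actual bug was reported to sit deeper, in the Redis client library's connection handling, where a cancelled request could leave a shared connection returning data meant for a different user. The sketch only shows the basic pattern of keying cached entries per user.

    # Hypothetical sketch only -- not OpenAI's actual implementation.
    # It shows the basic pattern of caching per-user data in Redis,
    # with every cache key scoped to the requesting user.
    import json

    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def cache_chat_titles(user_id: str, titles: list[str], ttl_seconds: int = 300) -> None:
        # Scope the key to the user so one user's cached data can never
        # answer another user's lookup.
        r.setex(f"chat_titles:{user_id}", ttl_seconds, json.dumps(titles))

    def get_chat_titles(user_id: str) -> list[str] | None:
        raw = r.get(f"chat_titles:{user_id}")
        return json.loads(raw) if raw is not None else None

    cache_chat_titles("user-123", ["Trip planning", "Quarterly budget"])
    print(get_chat_titles("user-123"))  # ['Trip planning', 'Quarterly budget']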

The ChatGPT exploit itself was minor and quickly contained, but the incident could be a harbinger of the risks chatbots and their users will face in the future. There are already privacy concerns around how chatbots are used: ChatGPT and similar services store vast amounts of conversation data and draw on that information to generate responses to questions and prompts, so anything handed to the chatbot sits on the provider's systems—and, as the breach showed, can end up visible to others when something goes wrong.

Chatbots can record a single user's notes on any topic and then summarize that information or search for more detail. But if those notes include sensitive data—an organization's intellectual property or customer information, for instance—it enters the provider's systems, and the user no longer controls what happens to it.
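One practical safeguard is to strip obviously sensitive values from text before it is ever sent to a third-party chatbot. The sketch below is a minimal, hypothetical example using a few regular expressions; real deployments would lean on dedicated data-loss-prevention tooling rather than hand-written patterns, but the idea is the same.

    # Hypothetical sketch: redact obvious sensitive values before sending
    # text to an external chatbot API. The patterns are illustrative only;
    # production systems use dedicated data-loss-prevention tools.
    import re

    REDACTIONS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
        (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-like digit runs
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    ]

    def redact(text: str) -> str:
        """Replace obviously sensitive substrings before the text leaves the org."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    note = "Contact jane.doe@example.com, card 4111 1111 1111 1111, SSN 123-45-6789"
    print(redact(note))
    # Contact [EMAIL], card [CARD], SSN [SSN]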

Because of these privacy concerns, some businesses—and at least one country—are clamping down. JPMorgan Chase, for example, has restricted employees' use of ChatGPT under the company's existing controls on third-party software and applications. Italy temporarily blocked the application nationwide, with officials citing the data privacy of its citizens and compliance with the EU's General Data Protection Regulation (GDPR).

Experts also expect threat actors to use ChatGPT to create sophisticated and realistic phishing emails. Chatbots can mimic native speakers with targeted messages, and ChatGPT is capable of seamless language translation, which will be a game-changer for foreign adversaries.

A similarly dangerous tactic is using AI to run disinformation and conspiracy campaigns. When researchers used ChatGPT to write an op-ed, the result was hard to distinguish from the material found on InfoWars and other well-known websites that peddle conspiracy theories.

Each new generation of chatbots will create new cyber threats, whether through more sophisticated language abilities or simply through greater popularity, making the technology both a prime target and a convenient attack vector. To that end, OpenAI is taking steps to prevent future data breaches: it now offers a bug bounty of up to $20,000 to anyone who reports previously unknown vulnerabilities in its services.

However, the program does not cover model safety or hallucination issues—cases where the chatbot is coaxed into generating malicious code or other faulty output—so OpenAI still appears to be playing catch-up when it comes to securing ChatGPT and similar chatbots against potential attacks.

In conclusion, while chatbots have brought significant benefits to the table, the cybersecurity concerns surrounding them cannot be ignored. Robust security measures are needed to keep threat actors from exploiting vulnerabilities in the technology, and OpenAI and other developers will have to treat security as an ongoing priority rather than an afterthought.
