
Google Is Now Warning Its Employees About AI Chatbots!!

With all of the AI hype going around the world right now, artificial intelligence seems to be making everyone's lives easier. But can that really be the case, now that Google is warning its own employees about using these chatbots?

By Darron Koss · Published about a year ago · 3 min read

Alphabet, the parent company of Google, has been all in on artificial intelligence: buying (and later selling) Boston Dynamics, making scientific advances through DeepMind, and most recently making AI the focus of this year's Google I/O conference in the wake of the release of its chatbot, Google Bard. Now the company is cautioning its own staff to be careful when talking to AI chatbots, including Bard itself.

According to a Reuters article, Alphabet told its employees not to share confidential information with AI chatbots, because the companies behind the technology store that data.

Coming straight from the source, this is sound advice no matter who says it. As a general rule, revealing private or personal information anywhere online is not a smart idea.

Because ChatGPT, Google Bard, and Bing Chat are built on large language models (LLMs) that are continually being trained, anything you say to one of these AI chatbots can be used to train it. The companies that created these chatbots also store the data, which is accessible to their staff.

Of Bard, Google's AI chatbot, the company explains in its FAQs:

"When you interact with Bard, Google collects your conversations, your location, your feedback, and usage information. That data helps us provide, improve and develop Google products, services, and machine-learning technologies, as explained in the Google Privacy Policy."

Google further urges users not to "include information that can be used to identify you or others in your Bard conversations," noting that it selects a subset of conversations as samples to be examined by expert reviewers and retained for up to three years.

According to OpenAI, AI trainers also review ChatGPT conversations to help improve the company's systems. The company states on its website: "We review conversations to improve our systems and to ensure the content complies with our policies and safety requirements."


About the Creator

Darron Koss

Hello, I am just a teen who enjoys spreading news! I hope everyone enjoys.
