A.I. Is Polluting Our Culture

By Tawhid Hossen Sumon · Published 2 months ago · 3 min read
Photo by Steve Johnson on Unsplash

A.I. companies appear to resist incorporating any identifiable patterns into their outputs that would make A.I.-detection reasonably reliable, perhaps out of concern that enforcing such patterns would constrain the models' outputs and hurt their performance, although there is currently no evidence of that risk. Despite their public commitments to develop more advanced watermarking techniques, it is increasingly clear that these companies are dragging their feet: detectable products work against the A.I. industry's bottom line.

To address this corporate reluctance, we need a regulatory measure akin to the Clean Air Act, but for the internet: a Clean Internet Act. The simplest version might mandate advanced watermarking that is intrinsic to generated outputs, embedded as patterns that cannot easily be removed. Just as the 20th century required extensive interventions to safeguard the shared environment, the 21st century will require extensive interventions to protect a different but equally vital common resource, one we had not noticed until now because it was never under threat: our shared human culture. It is imperative that we act.
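
To make "watermarking intrinsic to generated outputs" concrete, here is a toy sketch in the spirit of one published approach, the "green list" statistical watermark of Kirchenbauer et al. (2023). Everything in it is illustrative rather than any company's actual scheme: a real implementation biases the logits of an actual language model at each decoding step, whereas this sketch uses a uniform toy sampler over a stand-in vocabulary.

```python
# Toy sketch of a "green list" statistical watermark (after Kirchenbauer
# et al., 2023). Illustrative only: VOCAB, GAMMA, and DELTA are made up.
import hashlib
import math
import random

VOCAB = [f"w{i}" for i in range(1000)]  # stand-in vocabulary
GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step
DELTA = 4.0  # how strongly generation is biased toward green tokens

def green_list(prev_token):
    # Pseudorandomly partition the vocabulary, seeded by the previous token,
    # so a detector can recompute the same partition without the model.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(GAMMA * len(VOCAB))))

def generate(n_tokens, watermark):
    # Sample tokens; with the watermark on, green tokens are ~e**DELTA
    # times more likely, leaving an invisible statistical fingerprint.
    out = ["<s>"]
    for _ in range(n_tokens):
        green = green_list(out[-1])
        weights = [math.exp(DELTA) if w in green else 1.0 for w in VOCAB] \
            if watermark else None
        out.append(random.choices(VOCAB, weights=weights)[0])
    return out[1:]

def z_score(tokens):
    # Detection: is the fraction of green tokens significantly above GAMMA?
    hits = sum(t in green_list(p) for p, t in zip(["<s>"] + tokens, tokens))
    n = len(tokens)
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

print("watermarked z:", round(z_score(generate(200, True)), 1))   # far above 2
print("plain text z: ", round(z_score(generate(200, False)), 1))  # near 0
```

Note the design property that matters for regulation: the detector needs no access to the model itself, only the shared hashing scheme, which is what would make a mandated watermarking standard auditable by third parties.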

Before implementing any specific policy solution, it is important to recognize that environmental pollution was curbed only through external legislation. The intellectual groundwork was laid in 1968, when the ecologist Garrett Hardin argued that pollution results from individuals acting in their own self-interest, producing a "tragedy of the commons" in which we unknowingly harm our collective well-being. That framing proved pivotal for the environmental movement, which relied on government regulation to address problems that companies alone were unwilling or unable to tackle.

Once again, we find ourselves facing a tragedy of the commons. Short-term economic self-interest drives the use of cheap A.I. content to maximize clicks and views, polluting our cultural landscape. Increasingly, our feeds and searches are inundated with synthetic A.I.-generated output. The impact extends far beyond our screens, permeating our entire culture and infiltrating our most vital institutions.

Let's consider the realm of science. Following the highly anticipated release of GPT-4, an advanced artificial intelligence model developed by OpenAI, the language used in scientific research began to undergo a transformation. This was particularly evident within the field of A.I. itself.

A recent study, published this month, delved into scientists' peer reviews at several prestigious scientific conferences focused on A.I. The findings were intriguing. At one of these conferences, the frequency of the word "meticulous" in peer reviews increased by more than 34 times compared to the previous year. Similarly, the usage of "commendable" was approximately 10 times more frequent, and "intricate" appeared 11 times more often. Similar trends were observed at other major conferences.
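
The study's headline numbers come from a simple kind of corpus arithmetic: count how often each suspect word appears per million tokens in one year's reviews, then divide by the previous year's rate. A minimal sketch of that comparison follows; the corpora here are placeholders and the +1 smoothing is illustrative, whereas the actual study analyzed thousands of conference reviews with more careful statistics.

```python
# Minimal sketch of a year-over-year buzzword frequency comparison.
# Placeholder corpora; illustrative smoothing.
from collections import Counter
import re

BUZZWORDS = ["meticulous", "commendable", "intricate"]

def per_million(corpus):
    # Occurrences of each word per million tokens across the corpus.
    words = [w for text in corpus for w in re.findall(r"[a-z]+", text.lower())]
    counts = Counter(words)
    return {w: 1e6 * c / len(words) for w, c in counts.items()}

def buzzword_ratios(reviews_now, reviews_before):
    # Year-over-year frequency ratio for each suspected LLM buzzword.
    now, before = per_million(reviews_now), per_million(reviews_before)
    return {w: (now.get(w, 0) + 1) / (before.get(w, 0) + 1) for w in BUZZWORDS}

before = ["The method is sound, though the evaluation section feels thin."]
now = ["This meticulous and commendable study offers an intricate analysis.",
       "A meticulous evaluation of an intricate problem."]
print(buzzword_ratios(now, before))  # ratios far above 1 flag LLM-style prose
```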

These words, of course, are among the favored buzzwords of modern large language models like ChatGPT. In essence, a significant number of researchers at A.I. conferences were found to be relying on A.I. assistance when writing their peer reviews, or outsourcing the task entirely to A.I. systems. And the closer the reviews were submitted to the deadline, the greater the reliance on A.I.

If this situation makes you uneasy, particularly considering the current unreliability of A.I., or if you believe that scientific reviews should be conducted solely by human scientists rather than A.I., these sentiments underscore the inherent paradox of this technology. The ethical boundaries surrounding A.I. remain unclear, leaving us to ponder where the line should be drawn.

