The Power of AI

Artificial intelligence is taking over

By Clenias Dube · Published 10 months ago · 3 min read
AI has become an integral part of our lives, reshaping our culture and daily routines. Thanks to advances in training AI models on large amounts of data, these systems can now rival customer service agents and make rapid decisions, such as granting significant discounts. The ability to create digital interpretations of human creativity is another fascinating development, letting us commission artwork in the style of any artist in history. AI has even shown glimpses of its potential to decipher our thoughts, provided it has access to patterns of activity within our brains.

AI advocates, like former Google CEO Eric Schmidt, highlight the tremendous potential of AI in areas such as material advancements, climate change solutions, and improved management of energy systems. Cincinnati Children's Hospital exemplifies this potential by using AI to proactively address child suicide. By analyzing electronic medical records, doctor visits, and even suicide notes, their AI system assists pediatricians in identifying children at high risk and providing immediate intervention.

However, such exceptional projects are exceptions rather than the norm. In most cases, the AI you encounter is built by a few profit-driven companies, utilizing data that you may not have realized was being collected. Increasingly, these AI systems make critical decisions that significantly impact your life, even without your awareness. For instance, automated decision-making systems determine access to government support for housing, food, medical care, and more.

Michelle Gilman, a University of Baltimore law professor, and her students frequently encounter cases in which algorithms purchased from secretive companies wrongly deny people benefits to which they are entitled. The lack of transparency surrounding these algorithms makes it difficult for individuals to understand the factors and weightings behind the decisions. Paradoxically, judges often grant undue deference to automated systems, assuming they are infallible because of their mathematical nature.

These automated systems wield immense power, and their flaws can have far-reaching consequences. The assembly of these invisible power structures, whether inadvertent or intentional, has the potential to shape not only individual lives but also the future of humanity. Meredith Whittaker, a former AI researcher at Google and now the president of the encrypted messaging app Signal, warns about the dangers of power concentrated in the hands of a few companies that have amassed vast amounts of data over the past two decades. She emphasizes that these companies are now leveraging this data to build AI systems without adequate regulation.

Whittaker argues against the notion that AI will automatically bring about social good or be accessible to all in an equitable manner. She cautions that AI systems designed to replace human judgment and driven by profit motives are not inherently socially positive. Instead, she believes they serve the economic interests of a select few, amplifying existing power imbalances.

Despite the concerns raised by critics, many AI leaders, including Sam Altman, CEO of OpenAI (the creator of ChatGPT), express tremendous optimism about the potential of AI. Altman envisions a future with artificial general intelligence (AGI), in which AI surpasses human intelligence and benefits all of humanity. While some within the AI community share this optimism, they differ on timelines and acknowledge that only a small number of companies currently possess the resources and expertise to build such systems.

Schmidt suggests that rather than imposing external regulations, it is preferable to establish industry-crafted boundaries and guardrails. He argues that the industry's expertise is necessary to navigate the complexities of AI, since the government lacks the requisite knowledge. Once the industry establishes reasonable boundaries, the government can then implement a regulatory structure to ensure responsible use.

However, critics contend that relying solely on the industry's self-regulation may lead to a lack of accountability and exacerbate existing power dynamics. They emphasize the need for meaningful public input, international agreements, and comprehensive regulations to safeguard against potential harm and ensure AI systems are developed and deployed ethically.
