5 TED Talks on AI Ethics You Should Watch Today

Some of the most impactful “food for thought” TED Talks about AI ethics can really get you thinking about the subject today.

By Jair Ribeiro · Published 3 years ago · 7 min read

AI has a significant impact on our daily lives, and that impact demands a better understanding of this technology’s positive and negative effects.

The ethical challenges that artificial intelligence poses in our lives today are becoming well known, and it’s time to better understand how the ethical aspects of this technology can be systematized in a realistic and enforceable way.

Today AI has an impact on our jobs, safety, shopping, justice, and several other activities. In many cases, all of this is happening without a shared and well-defined ethical and legal structure to ensure that the underlying technology is transparent, accountable, and responsible.

We are now in 2020, and I firmly believe this is when the change should start: we should begin to pay attention and treat this issue as an emergency on the same level as climate change.

I’ve been dedicating a considerable part of my studies, my conference talks, and my networking to raising awareness and calling for action on AI ethics, bringing up the topic on every possible occasion when technology frameworks are discussed.

And starting today, you will find more and more articles on my Medium that treat AI ethics in a practical and impactful way.

To start, I would like to share some of my favorite “food for thought” TED Talks about AI ethics, talks that can really get you thinking about it.

I’ve selected five thought leaders who share what they consider necessary to govern AI and to ensure a comprehensive ethical framework that complements human intelligence.

How to Keep Human Bias Out of AI

AI algorithms make important decisions about you all the time — like how much you should pay for car insurance or whether you get that job interview.

But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. In 2018, she also launched rAInbow, a digital companion for women facing domestic violence in South Africa. The service reached nearly 200,000 conversations within its first 100 days, helping break down the stigma of gender-based violence. In 2019, she collaborated with the Population Foundation of India to launch Dr. Sneha, an AI-powered digital character that engages with young people about sexual health, an issue still considered taboo in India.

Sharma was recently named to the Forbes “30 Under 30” list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and the UK Government’s Centre for Data Ethics and Innovation.

Can We Protect AI from Our Biases?

As humans, we’re inherently biased. Sometimes it’s explicit, and other times it’s unconscious, but as we move forward with technology, how do we keep our biases out of the algorithms we create? Documentary filmmaker Robin Hauser argues that we need to have a conversation about how AI should be governed and ask who is responsible for overseeing the ethical standards of these supercomputers. “We need to figure this out now,” she says. “Because once skewed data gets into deep learning machines, it’s challenging to take it out.”

Robin is the director and producer of cause-based documentary films at Finish Line Features, Inc. and Unleashed Productions, Inc. As a businesswoman, long-time professional photographer, and social entrepreneur, Robin brings her leadership skills, creative eye, and passion to her documentary film projects. Her artistic vision and experience in the business world afford her a unique perspective on what it takes to motivate an audience. Her most recent award-winning film, CODE: Debugging the Gender Gap, premiered at the 2015 Tribeca Film Festival and has caught the eye of the international tech industry and of policymakers and educators in Washington, DC, and abroad. Robin is currently directing and producing bias, a documentary about unconscious bias and how it affects our lives socially and in the workplace.

The Era of Blind Faith in Big Data Must End

Algorithms decide who gets a loan, who gets a job interview, who gets insurance, and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important, and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.

In 2008, as a hedge-fund quant, mathematician Cathy O’Neil saw firsthand how really, really bad math could lead to financial disaster. Disillusioned, O’Neil became a data scientist and eventually joined Occupy Wall Street’s Alternative Banking Group.

With her popular blog, mathbabe.org, O’Neil emerged as an investigative journalist. Her acclaimed book Weapons of Math Destruction details how opaque, black-box algorithms rely on biased historical data to do everything from sentence defendants to hire workers. In 2017, O’Neil founded consulting firm ORCAA to audit algorithms for racial, gender, and economic inequality.

Machine Intelligence Makes Human Morals More Important

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns, and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

We’ve entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired, or promoted; what news should be shown to whom; which of your friends you see updates from; which convict should be paroled. With the increasing use of machine learning in these systems, we often don’t even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics, and personal life.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard’s Berkman Klein Center for Internet and Society.

Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge, and nudge us.

How to Get Empowered, Not Overpowered, by AI

Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we’re restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best, rather than the worst, thing ever to happen to humanity.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”

It’s Time for Action

I know we have some genuinely complicated questions to answer when it comes to an ethical framework for AI, and often there are no simple answers.

After all, ethical dilemmas, by definition, do not have clearly defined answers, like everything related to morality; that’s why we must discuss them and reach consensus with the right urgency and awareness.

AI is transforming our society, and it can’t just be the privilege of a few to decide how that will happen or frame what that world looks like.

Transparency is fundamental where there is bias, even when that bias is unintended. It is the responsibility of technology leaders to open up the black box of AI and ensure as much transparency as possible.

And we each have our part to play. Shall we start here?

Also, I’ve just published my books on Amazon; click the link to take a look.

This article was originally published on Medium.com.

About the Creator

Jair Ribeiro

A passionate Artificial Intelligence evangelist who writes about people’s experiences with technology and innovation.
