
DEEP FAKES: The Most Serious Threats in AI

Deep Fakes and the Threat They Pose

By Vishnu Aravindhan · Published 3 years ago · 5 min read
Photo by Senor Sosa on Unsplash

A deep fake is a type of synthetic media created artificially using the self-learning algorithms of artificial intelligence. A deep fake could be an audio clip, video footage or an image that has been produced synthetically via artificial intelligence. Through this technology, you can swap faces in a video or an image, carry out lip syncing, and modulate the voice in an audio clip. In a world driven by cloud computing, artificial intelligence and vast volumes of data, this technology of creating deep fakes, or synthetic media, represents tremendous opportunities as well as challenges. The ability to create synthetic media using AI creates numerous opportunities because it will transform the way we listen, speak and communicate on social media.
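To make the mechanism a little more concrete, below is a minimal sketch (in PyTorch) of the shared-encoder, two-decoder autoencoder idea behind many face-swapping deep fakes. Everything here, the layer sizes, the 64x64 face crops and the variable names, is an illustrative assumption chosen for brevity, not a description of any particular real tool: one encoder learns a common representation of faces, while a separate decoder per person learns to reconstruct that person, so routing person A's encoding through person B's decoder produces the swap.

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# All sizes and names are illustrative assumptions, not a production pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# One encoder is trained on faces of BOTH people; each person gets their own decoder.
encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A
decoder_b = Decoder()  # learns to reconstruct person B

# After training, feeding person A's face through decoder_b yields B's face
# with A's pose and expression -- the core of the swap.
face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_of_a))  # "A's expression, B's identity"
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

In real systems the training alternates batches of both people's faces and uses losses beyond plain reconstruction, but the swap itself comes from exactly this decoder substitution.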

However, it poses numerous challenges as well, because deep fakes can easily be misused to promote fake news and to propagate disinformation and propaganda. Such deep fake audio clips, videos and images can be misused to manipulate individuals as well as institutions, because through such deep fakes one's reputation could easily be destroyed by synthetically making the individual appear to say unethical or controversial things.

Through deep fakes, extremist propaganda can be carried out on the Internet, which could be used to polarize society and to sow social discord that could lead to communal riots as well. These fake videos could easily be used to damage one's reputation, especially that of popular individuals such as celebrities and politicians. Through such disinformation and manipulation, violence can easily be incited in society, and hence the possible misuse of deep fakes has emerged as a major security challenge.

Now, imagine a deep fake video of a popular religious or political leader who appears to be making controversial statements against other communities. Such manipulative videos can easily instigate riots and violence, thereby threatening security and stability in society. Deep fakes also pose a specific threat to women, because synthetic pornography can be generated to malign and destroy a woman's reputation. This could also become a tool for blackmail, through which individuals could be exploited and harassed further. More dangerously, deep fakes can be used to disrupt democratic elections, to instigate revolt against governments, and to sow discord and confusion in the minds of voters, thereby disrupting the electoral process. Another alarming possibility is that deep fakes could be used to counter truth and facts and replace them with fake news and misinformation.

The real threat is that deep fakes could easily be weaponized by a nation state for geopolitical reasons. Through a well-designed campaign, intelligence agencies could wage a psychological war by promoting a negative campaign against a particular government, disrupting elections in a target country, or trying to swing voter preferences in favor of a particular candidate or party, thereby compromising the national security of the target country.

In a world driven by the Internet and social media, fake news tends to travel faster and gain more acceptance than the truth. That being the case, the potential misuse of deep fakes generated through artificial intelligence could further enable the spread of fake news and propaganda, thereby threatening a country's national security and societal order, as well as the reputation of individuals and organizations.

Conclusion

I conclude that this has to be the most serious threat to have emerged from artificial intelligence, and hence there is an urgent need to come up with solutions to counter deep fakes. We need a multi-stakeholder, multi-modal approach through which all the stakeholders can be brought together: through a collaborative exercise, governments, tech firms, civil society, the general public and, more importantly, the media could design solutions to counter deep fakes.

It would be the responsibility of the government and the legislature to come out with appropriate legislation that can not only regulate the creation and spread of deep fakes but also prohibit them and provide for suitable action against the perpetrators. Appropriate policies will have to be brought in by the government, and the tech firms, especially the social media platforms, need to come up with policies to flag deep fakes and take suitable action against them.

The tech firms also have a responsibility to come out with suitable technological interventions through which deep fakes can be detected, thus helping to prevent their spread.
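As an illustration of what such an intervention can look like, here is a minimal sketch (in PyTorch) of the common detection approach: a binary classifier trained to label face crops as real or synthetic. The tiny network, the random tensors standing in for a labelled dataset, and the name DeepfakeDetector are assumptions made for illustration only; production detectors are far larger models trained on curated datasets.

```python
# Minimal sketch of deep fake detection as binary classification.
# The tiny CNN and the random "dataset" below are illustrative assumptions only.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Tiny CNN that outputs a logit for whether a 64x64 face crop is synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 1),  # single logit: fake vs. real
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Stand-in batch: 8 face crops with labels (1 = fake, 0 = real).
images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

model = DeepfakeDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One illustrative training step on the stand-in batch.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

# At inference time, a video would be scored frame by frame
# and the per-frame fake probabilities aggregated into one decision.
frame_scores = torch.sigmoid(model(images)).squeeze(1)
print(f"mean fake probability across frames: {frame_scores.mean():.2f}")
```

The design choice worth noting is that detection is framed frame by frame: a whole video gets flagged by aggregating many per-frame scores rather than by any single judgment.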

More importantly, we need to focus on media literacy, wherein consumers as well as journalists and media organizations are made aware of the impact of deep fakes and are trained in how such deep fake videos, audio clips and images can be identified.

There has to be greater awareness among the general public as well as media organizations so that they can immediately dismiss a deep fake, thereby curbing its spread and impact. Considering the emergence of this threat, what we need is a critical consumer, especially when consuming vast volumes of media on the Internet. Every time we watch a video, look at an image or listen to an audio clip, we need to pause and ask ourselves whether what we are watching is authentic and genuine. By following a basic set of precautions, one should be able to identify such deep fakes and ensure that they do not gain legitimacy.
