The ‘Blueprint’ for an AI Bill of Rights: What It Means for Defense

By Chuck Muizers

As the government deploys artificial intelligence (AI) in more applications, it has issued guidelines on how the technology should be developed and used. But are those guidelines enough to ensure that agencies achieve ethical AI? As AI services such as Lensa and ChatGPT make their way into the mainstream, some are questioning the ethical implications of their outputs.

Lensa AI has been criticized for renderings that users said were biased along gender and racial lines, and ChatGPT has drawn similar complaints about the limitations and biases of its responses. Meta, the parent company of Facebook, ended the public demo of its Galactica language model in late November after only a few days, following heavy criticism that the model produced inaccurate and biased outputs. As government agencies expand their use of AI, the ethical implications of using the technology in decision-making are becoming more prominent.

In response, the Biden Administration released its Blueprint for an AI Bill of Rights last October. The following month, NIST released a framework designed to protect individuals and society from the risks associated with AI. These documents are necessary steps toward ensuring the technology is used ethically.

Even with these measures in place, there is still much that agencies can do to ensure their use of artificial intelligence is ethical.

AI Tech Today

The government already uses AI in applications ranging from processing tax returns to planning delivery routes; the US Postal Service and the Department of Energy are among the agencies using it today. In these applications, it is essential that agencies follow ethical guidelines and that their machine learning (ML) models and outputs do not exhibit bias.

The White House’s Blueprint for an AI Bill of Rights provides guidelines for designing and using AI. Its principles include protecting individuals from algorithmic discrimination and ensuring that people can opt out of automated systems.

Bias is not the only issue agencies should weigh when using AI. For instance, the energy required to run the ChatGPT language model has been associated with greenhouse gas emissions, so the technology’s cost and environmental impact should be considered alongside its potential advantages.

Other factors matter as well, including human rights, fairness, and sustainability. In September, the UN issued a framework designed to address these issues.

Ethical and Responsible AI

Here are six ethical and responsible AI principles to help government agencies make informed decisions when using the technology.

Human Rights

AI should not be used to infringe on fundamental human rights such as freedom, autonomy, dignity, and fairness. Sustainability should also be taken into consideration.

Human Oversight

AI should be used with meaningful human oversight that accounts for human-centric factors. Its inputs and outputs should undergo regular human review to confirm they are not biased, and people should be allowed to opt out of automated systems.
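
As a concrete illustration, here is a minimal Python sketch of one common oversight pattern, in which outputs the model is not confident about are routed to a human reviewer rather than acted on automatically. The Prediction fields and the 0.9 confidence threshold are illustrative assumptions, not requirements drawn from any of the guidelines above.

    # A minimal sketch of human-in-the-loop oversight: predictions the model
    # is unsure about go to a person instead of being acted on automatically.
    # The Prediction fields and the 0.9 threshold are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        record_id: str
        label: str
        confidence: float  # model's probability for the predicted label

    def triage(predictions: list[Prediction], threshold: float = 0.9):
        """Split predictions into auto-approved and human-review buckets."""
        auto, review = [], []
        for p in predictions:
            (auto if p.confidence >= threshold else review).append(p)
        return auto, review

    preds = [
        Prediction("A-101", "approve", 0.97),
        Prediction("A-102", "deny", 0.62),  # low confidence -> human review
    ]
    auto, review = triage(preds)
    print(f"{len(auto)} auto-processed, {len(review)} routed to human review")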

Explainable Use of AI

The use of AI should be explained to the public. Organizations should avoid a “black box” approach in which data inputs and ML models are kept secret; even when ML algorithms are highly complex, how they reach their outputs should be communicated transparently.
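
One way to move beyond the black box is to measure how much each input actually drives a model’s decisions. The minimal sketch below uses scikit-learn’s permutation importance on synthetic data; the feature names are hypothetical stand-ins, and a real deployment would pair a measure like this with plain-language documentation for the public.

    # A minimal sketch of explaining a model with permutation importance:
    # shuffle one input at a time and measure how much accuracy drops.
    # The data is synthetic and the feature names are hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["income", "age", "zip_density", "tenure"]  # hypothetical

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Report, in plain language, which inputs drive the model's decisions.
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, drop in ranked:
        print(f"{name}: accuracy falls by {drop:.3f} when shuffled")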

Security, Safety, and Reliability

Systems that use AI should be reliable, secure, and safe. They should not cause unsafe conditions that could put people at risk. To identify potential issues, systems should be built with input from experts in various fields.

Privacy

AI systems should have safeguards to prevent unauthorized access and use of people’s personal information. Individuals should have a say in how their data is collected and used.
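
As a rough illustration of such a safeguard, the sketch below replaces a direct identifier with a keyed hash before the record enters a training pipeline. The field names and the in-code key are assumptions for the example only; a production system would manage the key in a secrets store and layer on access controls and retention limits.

    # A minimal sketch of one privacy safeguard: replace a direct identifier
    # with a keyed hash (HMAC) before the record reaches a training pipeline.
    # The field names and in-code key are illustrative assumptions; a real
    # system would load the key from a secrets manager.
    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-managed-secret"

    def pseudonymize(value: str) -> str:
        """Return a stable, non-reversible token for an identifier."""
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    record = {"ssn": "123-45-6789", "income": 54000}
    safe_record = {**record, "ssn": pseudonymize(record["ssn"])}
    print(safe_record)  # the training pipeline never sees the raw SSN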

Equity and Inclusion

ML models should be retrained regularly on up-to-date data sets so they do not entrench bias, and they should be deployed in ways that maximize the equity of their outputs. The design and implementation of AI systems should involve diverse stakeholder groups alongside data scientists; that breadth of input helps ensure the technology is used fairly and effectively.
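
One simple equity check that such a review might include is demographic parity, which compares the rate of favorable outcomes across groups. The sketch below assumes hypothetical group labels and decisions; real audits rely on several fairness metrics rather than this one alone.

    # A minimal sketch of one equity check, demographic parity: compare the
    # rate of favorable outcomes across groups. The group labels and decisions
    # are illustrative; real audits use several metrics, not just this one.
    from collections import defaultdict

    def favorable_rates(outcomes):
        """outcomes: iterable of (group, got_favorable_outcome) pairs."""
        totals, favorable = defaultdict(int), defaultdict(int)
        for group, ok in outcomes:
            totals[group] += 1
            favorable[group] += int(ok)
        return {g: favorable[g] / totals[g] for g in totals}

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = favorable_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"parity ratio: {ratio:.2f}")  # ratios far below 1.0 warrant review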

What the Future Holds

Among the most serious risks of irresponsible AI use is the erosion of public trust in government. Accountability for how agencies use the technology is still lacking, and adopting the six principles of responsible AI outlined above would help ensure that federal use of AI is both ethical and practical.

All federal agencies must ensure that their AI projects follow ethical guidelines so that their use of the technology remains reliable and accountable.
