
Can You Trust a Computer Algorithm?

Artificial intelligence can mimic human decisions — but also drastically amplify hidden biases.

By Wilson da Silva

ARTIFICIAL INTELLIGENCE is increasingly creeping into everyday life, from Google searches and dating-site matchmaking to Siri recommendations and credit card fraud detection. But how much can we trust the computer algorithms that drive it?

“People fear AI and machine learning because they think it’s about a shift of power from the human to machine,” Dr. Suelette Dreyfus, a lecturer in the University of Melbourne’s School of Computing and Information Systems, said in a panel discussion [see video] on artificial intelligence (AI) between academic and industry experts in November 2020.

“But actually, it’s also a shift in power between the individual human and the organisation. And that becomes very important, because you have to think about how we will make the organisation accountable, what transparency requirements are there, and what does that mean for the workers of the future?”

She gave the example of AI-driven keyboard behavioural analytics programs: they were originally developed as a cyber security measure, recognising the patterns in how an individual types to create a unique biometric signature that allows a network to distinguish between authorised and unauthorised users. But these same programs are now also being used to track a worker’s hours and determine how diligently they work.

Not only that, if algorithms make faulty decisions, they can be difficult to challenge. “In the past, you might have been able to pick up the phone, call someone in the organisation and ask that to be fixed,” she added. “Now, that is much harder … that decision is being made by an executive team in head office in California, about people who live from Manila to Melbourne.”

Prof. Jeannie Paterson, co-director of the University of Melbourne’s Centre for AI and Digital Ethics [University of Melbourne]

Prof. Jeannie Paterson of the Melbourne Law School and co-director of the university’s Centre for AI and Digital Ethics, said that AI was highlighting asymmetrical relationships, “where one person has all the power, and one doesn’t.”

In law, consumers and organisations have ‘relational contracts’, she explained, “built on a whole lot of understandings and conventions that mean the parties have confidence in their ability to work together, and that consumers have reasonable expectations about companies to be fulfilled.”

“What’s happening now is that more and more of our relationship is mediated by algorithms. That can be useful, because it can lead to efficiency gains, but the difficulty is that consumers, quite frankly, don’t know this is happening,” Paterson added. “Consumers don’t have visibility of how their interactions are decided by algorithms, and are not aware of how they can contest decisions made by algorithms. [So] there’s an increasing lack of transparency, and an increasing lack of contestability.”

High expectations

AI is extremely useful, countered Prof. James Bailey, the university’s program lead for artificial intelligence, but too much can be expected of the technology. “Algorithms are completely trustworthy — they will do exactly what you tell them to do — [they] are flying our planes for us, we’ve got algorithms in our phones and they’re running our Zoom meetings. They are mostly doing a great job.”

However, algorithms rely on large data sets of past behaviour to make decisions about what’s likely to occur in the future. Problems occur when the future is not the same as the past, or there’s some shift in behaviour or the environment, he added. “Simple algorithms can be okay. Where there is a well-defined, fairly narrow problem, very clear performance criteria about what the algorithm needs to do, and where there are low consequences — they’re good.”

But with more complex algorithms, created with deep learning or neural networks, it can be difficult to understand how a decision is made. Hence, where scenarios are more unpredictable, where conditions can change, or where bias against certain people may be embedded in past data, “you’re going to want to apply a lot more scrutiny. And we are [still] learning how to do that. We’re certainly not there yet, in terms of robust certification of how to validate and verify our machine learning algorithms,” he said.

Antony Ugoni, chief data officer of Australian health insurer Bupa [Statistical Society of Australia]

Antony Ugoni, chief data officer at Australian health insurance company Bupa, agreed, adding that while an algorithm may be “doing exactly what we wanted it to do, the question is, is the ecosystem around it set up to do the right thing by the algorithm?” Consequently, most companies take a cautious approach to how much decision-making is given over to algorithms.

As algorithms grow in use, documenting how each and every decision is made will be important — and a good model may be how the Australian Defence Force operates, said Dr. Kate Devitt, chief scientist of the Trusted Autonomous Systems Defence Cooperative Research Centre in Brisbane, Australia. “Humans have to keep track of the decision making, they have to keep paper trails — what happened, why things happened, and who made each decision,” she said. Having systems in place to track decision-making, she added, produces accountability.

“There are real challenges,” said Vanessa Toholka, head of digital transformation skills at PwC Australia. But there is good news — businesses are aware of the issue: a recent global survey found that 84% of chief executives believe AI-based decisions need to be explainable in order to be trusted.

“It’s really comforting to know that we are hearing the same sort of concerns around transparency and trust within organisations,” she said. Organisations need to be judicious about which problems they trust AI to help solve, she added, “because so many organisations are still learning in this space, it is still a piecemeal process.”

Algorithmic bias

A major problem is that when an AI system makes a wrong or biased decision, it does so consistently.

Humans are very different: we know our decisions are fallible, that we can have lapses of attention, misinterpret information, or fail to distinguish what is important from what is trivial. Looking at the same data again, we are also capable of recognising those errors and correcting them.

But for an AI system, it’s different: when an AI makes an error, that error can be repeated again and again, no matter how many times it looks at the same data under the same circumstances.

One of the reasons is that an AI system, created by machine learning, relies on large data sets of past behaviour to make predictions about future outcomes. Without an oversight mechanism to detect errors, it will slavishly apply what it has learnt, even if the future is not the same as the past, or there’s been some shift in behaviour or environment.

In addition, if the data sets the system was trained on conceal inherent biases in human decision-making, then that bias will also be codified and amplified.

This systematic behaviour of an AI system in treating one group worse than another without justification is often called ‘algorithmic bias’. It may arise due to differences in the quantity or quality of the data coming from different groups of people, or from poor design of the AI system.

An AI system that helps a bank decide whether to grant loans is typically trained on a large data set of previous loan decisions, as well as any other data to which the bank has access. This can help the bank establish the risk of loan default by reviewing an applicant’s financial and employment history, as well as demographic information. In this way, the AI system can identify ‘feature values’ associated with people who turn out to be profitable for the bank, and ‘feature values’ for people who turn out to be unprofitable.

Algorithmic bias arises due to differences in the quantity or quality of the data coming from different groups, or in an inappropriate design or configuration of the AI system [Pixabay]

When the bank considers new loan applicants, the AI system identifies the ‘feature values’ for each applicant and tries to predict whether they would be likely to pay back a loan reliably. But the mortgage application data on which the algorithm is based may have insidious biases already baked in, perhaps due to decisions made by humans with their own prejudices. In the real world, only some loan managers may make biased decisions while many others will not; nevertheless, the sum total of those decisions becomes the framework for the algorithm, and can tip it toward a biased approval or rejection of a loan.
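To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a lender might train a model on historical approval decisions and then score new applicants. The feature names, figures and choice of library (scikit-learn) are assumptions made purely for illustration, not any real bank's system; the point is simply that whatever pattern sits in the historical labels is what the model reproduces.

```python
# Hypothetical sketch only: a loan-approval model trained on past human decisions.
# Feature names, data values and the use of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical applications: [income_in_thousands, years_employed, group]
# 'group' stands in for a demographic attribute; labels record past human approvals.
X_history = np.array([
    [80, 10, 0], [75, 8, 0], [40, 3, 0], [90, 12, 0],
    [85, 9, 1], [78, 8, 1], [42, 4, 1], [88, 11, 1],
])
y_history = np.array([1, 1, 0, 1,   # group 0: approved when finances were strong
                      0, 0, 0, 0])  # group 1: rejected even with strong finances

model = LogisticRegression().fit(X_history, y_history)

# Two new applicants with identical finances, differing only in 'group'.
applicants = np.array([[82, 9, 0], [82, 9, 1]])
print(model.predict(applicants))  # the learned pattern may treat them differently
```

In this toy setting, two applicants with identical finances can receive different predictions purely because of the group attribute absorbed from past decisions, which is the dynamic described above.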

Take, for example, a bank that relies on an AI system to evaluate mortgages. If the historical data show mortgage approvals were lower for people of colour, single female applicants, the disabled, blue-collar workers, or young applicants, then the algorithm may be swayed by these decisions — by how much would depend on how much bias was already in the historical data. This creates two risks for the bank: missing out on a large pool of potential customers who might actually have been good clients; and exposing the bank to complaints under anti-discrimination laws.

The first risk for the bank is that it can undermine profit growth, as victims will be driven to its competitors. The second risk may be even greater, because if the AI system proceeds to apply that inherent bias in all future decisions, it will be easier for government or consumer groups to identify a systematic pattern of bias, exposing the bank to fines and costly penalties.

Scouring for bias

Australia’s Gradient Institute, an independent non-profit research group that designs and creates ethical AI systems, has developed an approach to identify algorithmic bias in AI systems, as well as take steps to mitigate the problem.

A December 2020 paper by the Australian Human Rights Commission, produced with the Gradient Institute and others, "Addressing the problem of algorithmic bias", shows how decision-making systems can result in unfairness, and offers practical ways to ensure that when AI systems are used, their decisions are fair and accurate, and comply with human rights and existing legislation.

“Algorithmic bias can cause real harm, leading to a person being unfairly treated or suffering unlawful discrimination, on the basis of characteristics such as race, age, sex or disability,” said Dr. Tiberio Caetano, chief scientist at the Gradient Institute.

Gradient Institute chief scientist, Tiberio Caetano [Gradient Institute]

In the paper — a collaboration between the institute and the commission, along with the consumer think tank Consumer Policy Research Centre, the Australian consumer organisation CHOICE, and the Data61 division of Australia’s national science agency, the CSIRO — simulations were run to test how algorithmic bias arises.

To demonstrate an everyday AI decision-making scenario in a business, they created a hypothetical electricity retailer that uses an AI system to decide how to offer products to customers, and on what terms. The AI system used data and machine learning algorithms to create mathematical models for prediction and decision making, training them on previous decisions and outcomes, which supply the ‘labels’, matched with supporting data about each customer, the ‘features’. The system then searched for patterns within the data set, identifying common ‘feature values’, or ‘indicia’, that led to successful sales.
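The paper itself is the authority on how the simulation was built; the sketch below is only a rough, invented illustration of how such ‘features’ and ‘labels’ fit together, with made-up field names, a made-up label rule and no connection to the Gradient Institute's actual code.

```python
# Illustrative sketch only, not the Gradient Institute's simulation.
# Customer fields, the label rule and all numbers are invented for this example.
import random

random.seed(0)

def fictional_customer():
    """One fictional customer record: the 'features'."""
    return {
        "usage_kwh": random.randint(100, 900),
        "late_payments": random.randint(0, 5),
        "years_as_customer": random.randint(0, 20),
    }

def past_sales_outcome(customer):
    """Stand-in for a previous decision or outcome: the 'label' (1 = successful sale)."""
    return 1 if customer["late_payments"] <= 1 and customer["usage_kwh"] > 300 else 0

customers = [fictional_customer() for _ in range(1000)]
labelled = [(c, past_sales_outcome(c)) for c in customers]

# Search the labelled data for common 'feature values' among successful sales.
successes = [c for c, label in labelled if label == 1]
avg_usage = sum(c["usage_kwh"] for c in successes) / len(successes)
print(f"{len(successes)} successful sales; average usage among them: {avg_usage:.0f} kWh")
```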

Using this simulated data, made up of many fictional individuals, the paper identified five approaches to correcting algorithmic bias in an AI system — a set of approaches that can be seen as a potential ‘toolkit’ of mitigation strategies. They were:

  • Acquire more appropriate data: Biases can be addressed by obtaining additional data points, or new types of information, on individuals who are under-represented or inaccurately represented in the data set.
  • Pre-process the data: This strategy consists of editing the dataset to mask or delete information for a protected attribute — like race or gender — before training an AI system. This may prevent people from being treated differently on the basis of otherwise irrelevant attributes, lowering the risk of algorithmic bias (a minimal sketch of this strategy appears after this list).
  • Increase model complexity: A simple model can be easier to test, monitor and interrogate, but it can also be less accurate and lead to generalisations that favour the majority over minorities.
  • Modify the AI system: An AI system can also be designed from the outset, or later modified, to correct for existing societal inequalities, as well as other inaccuracies (or known prejudice and favouritism) in data sets that cause algorithmic bias.
  • Change the target: Finding fairer measures to use as the target variable can also help address algorithmic bias.
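As a concrete illustration of the second strategy, the sketch below masks protected attributes before a model ever sees them. The attribute names and record layout are invented for illustration, and masking on its own is no guarantee of fairness: other features, such as postcode, can act as proxies for the masked attribute.

```python
# Minimal sketch of the 'pre-process the data' strategy: remove protected
# attributes before training. Attribute names are invented for illustration;
# note that proxy features (e.g. postcode) can still leak the masked attribute.
PROTECTED = {"gender", "race"}

def preprocess(record: dict) -> dict:
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

raw = {"income": 62000, "postcode": "3000", "gender": "F", "race": "X"}
print(preprocess(raw))  # {'income': 62000, 'postcode': '3000'}
```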

Based on the simulation created by Gradient Institute, the project collaborators put forward a number of recommendations to business and government seeking to employ AI systems in decision-making. The first is that human rights and general principles of fairness should be considered whenever new technology is used to make important decisions, to ensure that the decision-making process is both fair and lawful.

Importantly, this needs to be done before the AI system is used in a live scenario. They also recommended that an AI system should be rigorously designed and tested to ensure it does not produce answers that are affected by algorithmic bias. Once the AI system is operating, it should be closely monitored throughout its lifecycle for evidence of algorithmic bias. And lastly, they recommended that using an AI system responsibly and ethically should extend beyond simply complying with the narrow letter of the law.

AI systems should be designed and tested for algorithmic bias before being launched [Pixabay]

Unparalleled opportunity

With companies increasingly using AI for decision making in everything from pricing to recruitment, the principles and recommendations, when applied to the development of AI decision-making systems, would enhance the quality of decisions and reduce the likelihood that algorithmic bias would lead to unfairness or further entrench disadvantage, the paper argued.

By identifying and removing from AI systems biases in historical data — whether created by an accumulation of personal prejudices or merely inherent in existing systems, such as the traditional credit reporting and scoring systems — not only can new markets be found, but inequity and inequality can be reduced, said Caetano.

“AI systems actually present unparalleled opportunity to remove the biases that have always plagued human decision-making,” he added. “Once an AI system is trained to behave ‘ethically’, it will do so at scale, no matter how many times it looks at the same data under the same circumstances.”
