Human or algorithm: which makes the better decision?

Source: Management Team (04/11/2019); Author: Karlien Vanderheyden

You might expect that we would trust Google Maps blindly: after all, it has been a household name for years. And yet we regularly try out shortcuts of our own because we think our route will be quicker. It is an example of what researchers call ‘algorithm aversion’. Although algorithms often score better than human judgement, we still tend to follow our gut feeling. And once an algorithm has made a mistake, our confidence in it drops even further. Making mistakes is part of being human, so we find it easier to accept mistakes from people than from algorithms.

Human versus algorithm

In many prediction tasks, algorithms beat humans. People who decide on the early release of prisoners perform worse than simple formulas. Doctors are less accurate at diagnosing breast cancer than image-analysis software. Purchasers are worse than a simple algorithm at predicting which suppliers will perform best. According to Berkeley Dietvorst, Professor of Marketing at the University of Chicago Booth School of Business, the literature shows that algorithms are 10 to 15% better at predicting human behaviour than individuals are. But that doesn't mean they're perfect, of course.

Aversion versus appreciation

Our aversion to algorithms is often greater than our appreciation. There are various reasons for this:

  • We feel that algorithms don’t help us to achieve our challenging goals (even though they often perform better than we could ourselves).
  • We regard making mistakes as the exclusive right of humans.
  • The less well-versed people are in numbers and maths, the less confidence they have in algorithms.
  • Experts have more confidence in their own expertise than in the advice of algorithms.
  • Algorithms are often a mystery to us (a kind of ‘black box’). If we do not understand exactly what an algorithm does and what biases it may contain, we are handing the decision over to a group of programmers who bear no responsibility for the final outcome. For example, people with a certain background are often treated unfairly when applying for loans. At LinkedIn, an algorithm meant that advertisements for higher-paid jobs were shown more often to men than to women.
  • The data used by algorithms is not always up-to-date. In her book 'Weapons of Math Destruction', Cathy O'Neil conducts the following thought experiment. Suppose that Fox News were to use a machine learning algorithm to find new anchors for the channel. In the process, they define a successful anchor as someone who stays with Fox for five years and gets promoted twice. Historically, we know that women have been systematically excluded from this channel. If the algorithm were to use this data from the past, we would have good reason to suspect that women would be filtered out.
  • Only very rarely do people outperform algorithms: in so-called ‘broken leg cases’, where they have new information that the algorithm has not yet taken into account. Psychologist Paul Meehl gives the following example. If you have just learned that someone has broken their leg, you should not rely on a statistical model’s prediction of whether they will visit the cinema. The model uses demographic and other variables to predict the likelihood of a cinema visit, but has no access to very recent or unexpected information.
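O’Neil’s thought experiment above can be sketched in a few lines of code. Everything below is invented purely for illustration (the records, the numbers, the group labels); the point is only that a model trained on historically biased ‘success’ labels will reproduce the exclusion it was trained on.

```python
# Hypothetical historical records: (gender, stayed_five_years, promotions).
# Because women were systematically excluded in the past, they barely
# appear among the "successes" -- the bias is baked into the data itself.
history = [
    ("M", True, 2), ("M", True, 3), ("M", False, 1), ("M", True, 2),
    ("F", False, 0), ("F", False, 1),
]

def successful(record):
    """The channel's definition of success: stayed five years, promoted twice."""
    _, stayed, promotions = record
    return stayed and promotions >= 2

def success_rate(gender):
    """Fraction of past hires from this group who met the success criterion."""
    group = [r for r in history if r[0] == gender]
    return sum(successful(r) for r in group) / len(group)

# A naive screening model that scores new candidates by their group's
# historical success rate will rank every man above every woman.
print(success_rate("M"))  # 0.75
print(success_rate("F"))  # 0.0
```

The model never asks why the historical success rates differ; it simply projects the past forward, which is exactly the failure mode the thought experiment describes.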

Encouraging algorithms

As a leader, what can you do to ensure that your people use algorithms to support their decisions?

  • People are more likely to trust algorithms if they have a certain amount of control over the output. As a leader, you need to give them some freedom to make changes. For example, if an algorithm places certain candidates for a job in the top 10%, people will find this easier to accept if they can adjust the result to the top 15%.
  • You can also help your staff to ask the right question. Very often, people ask themselves: “Will the algorithm help me to achieve my goal?” The answer is often no, especially when the goal is challenging. A better question is: “Is the algorithm better than me?” That question increases the likelihood of people choosing the algorithm.
  • Make sure your employees understand what lies behind the algorithm and give them insight into the variables it uses. When selecting candidates for a job, an algorithm may treat distance from the workplace as an important criterion, yet this is a variable that can discriminate and is often used wrongly.
  • Hire people who regularly audit your algorithms against criteria such as fairness, legitimacy and discrimination.
  • Make sure your people have the skills to recognise when new information comes into play (‘broken leg cases’) and the algorithm therefore no longer has all the relevant information.
