
What if we used behavioral science to power algorithms?

Algorithms are not exactly a new invention. They’ve been around for a very long time, helping us learn languages, do math and even cook food! Yes, following a recipe is basically an algorithm – it’s a series of steps that need to be taken in a specific order to achieve a goal (=yummy food). But if it feels like you’re hearing about algorithms more and more, you’re not wrong. Thanks to all kinds of technological developments (hello machine learning), the scope of what algorithms can do for us has grown, and they are now prevalent in all kinds of areas of our lives where they weren’t before.
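To make the recipe analogy concrete, here’s a minimal sketch in Python (the recipe and quantities are made up purely for illustration) of what “a series of steps in a specific order” looks like once you write it down for a computer:

```python
# A recipe really is an algorithm: ordered steps that turn
# inputs (ingredients) into a goal state (yummy food).
# This recipe is invented purely for illustration.

def bake_bread(flour_g: int, water_ml: int, yeast_g: int) -> str:
    steps = [
        f"mix {flour_g}g flour, {water_ml}ml water and {yeast_g}g yeast into dough",
        "knead for 10 minutes",
        "let rise for 60 minutes",
        "bake at 220C for 35 minutes",
    ]
    for step in steps:  # order matters: baking before rising would fail
        print("->", step)
    return "bread"

bake_bread(flour_g=500, water_ml=350, yeast_g=7)
```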
Want some examples? Sure. Algorithms help us find our way in a new city (Google Maps), choose the right playlist for our mood (Spotify), determine what information we see about our friends (Facebook newsfeed) and even pick our date for Saturday night (Tinder). Not that these aren’t important (they are), but algorithms are also involved in decisions that are even more crucial. Algorithms help detect tumors, recommend medical treatments, determine credit scores and even decide who sits in jail awaiting trial and who walks free on bail. Pretty heavy stuff.

 

Algorithm bias is a thing. So is human bias.

So how’s all that going for us? Not great. Mostly we have been hearing a lot of reports of cases where algorithms have messed up. Amazon had to scrap its hiring algorithm upon discovering that it was discriminating against women. A social media storm erupted when a husband complained that Apple’s credit card algorithm gave his wife a credit limit 20 times lower than his, despite the couple having had joint finances for decades. And algorithms are repeatedly called out for being biased against the most vulnerable populations.
But before you lose faith in algorithms forever, consider the alternative. As in humans. You know what? We’re not perfect either. Humans demonstrate all kinds of biases ALL THE TIME. There’s a whole field studying this, and that’s kind of what we at Impactually like to nerd out on, on a regular basis. Basically, we’ve known for decades that despite people’s best intentions, sometimes there’s just too much going on and we end up taking shortcuts in our thinking that can lead us to decisions that are less than optimal.

 

Algorithms are great. Too bad we don’t trust them.

So neither algorithms nor humans are perfect, got it. But how do they compare to each other? Well, I’m afraid I have some bad news for all the humans in the crowd. When it comes to decision-making, algorithms mostly kick our butt. It’s a harsh reality to accept, I know. Consider the evidence: algorithms are better than doctors at predicting heart attacks, disease outbreaks and patient survival rates. They are better at predicting which businesses will go bankrupt, and which offenders will violate their parole. They also beat us at games we previously thought only humans could win.
But despite sooooo much evidence that algorithms are great at making decisions, we still don’t trust them. Sure, that job candidate may have impressed the algorithm, but we need to meet them and make sure, because there is simply no way the algorithm has our intuition, our gut feel, our expertise. Except, well, it does. And it’s better. When we see a human make a mistake, like hiring a candidate who ends up performing poorly and leaving the company, we chalk it up to an error in judgement, but we’ll most likely still give that hiring manager another chance. But when we get proof that an algorithm has made a mistake, that’s it for us. We conclude that it is a useless algorithm and we resolve to never rely on it again. Even if, on average, that algorithm makes much better predictions than the manager.
This mistrust we have in algorithms can have devastating results. An AI algorithm was designed to guide excavation work in Flint, Michigan: locate the lead pipes that were contaminating the water and replace them with copper ones. Since the goal was to find and replace as many lead pipes as possible, the algorithm was designed to predict which houses were most likely to have lead pipes, so that they could be dug up first. In 2017, the algorithm reached an accuracy of close to 80%, which is great. But the citizens of Flint complained that the digging work was not fair – the workers seemed to be randomly skipping houses, leaving people worried and confused. Why did they skip my house? Does that mean I still have lead pipes and they’re just too lazy to take care of it? It wasn’t that, of course. Houses were skipped because the algorithm had predicted they didn’t have lead pipes, so the risk was judged as low. But people didn’t get that, an uproar ensued, and the algorithm was quickly abandoned. In 2018, without the algorithm, the work’s accuracy in detecting lead dropped to 15%. Mistrust in the algorithm meant that thousands of people still had lead pipes that could have been replaced. It was just a continuation of Flint’s tragedy.
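To see how such a prioritization algorithm might work in principle, here is a simplified, hypothetical sketch. The houses, features and scoring rule below are invented for illustration (the actual Flint model was far more sophisticated); the idea is just to score each house by its predicted risk of lead pipes and dig in descending order of risk:

```python
# Hypothetical sketch of risk-based excavation ordering, loosely
# inspired by the Flint case. All data and weights below are made up.

houses = [
    {"address": "12 Oak St",  "built": 1921, "lead_found_nearby": True},
    {"address": "48 Elm St",  "built": 1988, "lead_found_nearby": False},
    {"address": "7 Pine Ave", "built": 1935, "lead_found_nearby": True},
    {"address": "3 Birch Rd", "built": 1972, "lead_found_nearby": False},
]

def lead_risk(house: dict) -> float:
    """Toy scoring rule: older houses and nearby lead finds raise the risk."""
    risk = 0.0
    if house["built"] < 1950:
        risk += 0.5
    if house["lead_found_nearby"]:
        risk += 0.3
    return risk

# Dig up the highest-risk houses first. Low-risk houses get skipped for
# now, which, to residents, can look like workers skipping houses at random.
for house in sorted(houses, key=lead_risk, reverse=True):
    print(f"{house['address']:12} risk={lead_risk(house):.1f}")
```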

 

Why don’t we trust algorithms?

Several behavioral factors are at play. The first is that we simply do not trust algorithms’ expertise – we believe that we can do better! Behavioral science has long documented the effect of overconfidence – also known as the mother of all biases. Generally, people tend to overestimate their skills and expertise, and when it comes to algorithms, we also overestimate our intuition and gut feel, which is why we think we will outperform them.
Another potential explanation has to do with “the black box”. Algorithms analyze data and produce a result without fully explaining how they got there. We don’t know what happens “under the hood”, and not knowing makes it hard for us to let go of the feeling of control. The lack of transparency makes us mistrust the algorithm.
Finally, sometimes algorithms are simply not that helpful with our own biases and faults. For example, Netflix boasts that its algorithm is designed to help us find a show or movie to enjoy “with minimal effort”. Well, if you’ve ever spent an entire evening trying to decide what to watch on Netflix, you know that even this super sophisticated algorithm has its shortcomings. Faced with a seemingly infinite number of choices, all relevant and tailored to our preferences, we actually have a harder time making a decision. This is called choice overload. We might even defer choice altogether – we get paralyzed, we don’t want to make any decision, we fear that a better option is hiding around the next corner, or on the next screen, if only we scroll a little more.

 

What if we could infuse behavioral science into the design of algorithms?

So we don’t trust algorithms because we think that we can do better (overconfidence), we worry about what’s inside their black box (transparency) and we are disappointed when they don’t deliver what they promise, such as handling choice overload.
What if we could incorporate behavioral science when we design algorithms, to mitigate these factors? We can design algorithms that demonstrate their expertise, that inform us of how they make decisions, and that are tweaked to handle our biases. If we do, we might just improve people’s trust in them.
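As a toy illustration of what “informing us of how they make decisions” could look like, here is a minimal sketch of a prediction that ships with its own explanation. The candidate features and weights are invented for illustration; the point is the shape of the output, not the model:

```python
# Hypothetical sketch: a scoring algorithm that explains itself.
# Features and weights are made up for illustration.

WEIGHTS = {"years_experience": 0.4, "test_score": 0.5, "referral": 0.1}

def score_candidate(features: dict) -> tuple[float, dict]:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    # Returning per-feature contributions alongside the score opens the
    # "black box": users can see *why* the algorithm decided as it did.
    return total, contributions

score, reasons = score_candidate(
    {"years_experience": 0.6, "test_score": 0.9, "referral": 1.0}
)
print(f"score = {score:.2f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{contribution:.2f}")
```

Even a simple breakdown like this speaks to the transparency factor: instead of a bare verdict, the user sees which inputs drove the decision and by how much.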
Algorithms are becoming a bigger part of our daily lives, whether we like it or not. And overall, that’s a good thing! They really can be very helpful. Let’s use behavioral science to design algorithms that people actually want to use and rely on. It might help us handle cases like the Flint lead search better in the future. Or, at the very least, we won’t have to spend another evening trying to find something to watch on Netflix.

 

Nurit Nobel

Nurit.nobel@impactually.se

+46 76 191 71 34

 

I want to learn more!

We hear you. If you’ve already read our posts about what behavioral economics is, how it can be used in practice, and how to apply it in organizations, then here’s a list of books, TED talks and other online resources. You are also welcome to sign up to our newsletter, where we give you relevant news and links on behavioral science. Want to learn even more about how to use behavioral science to build more effective organizations? Check out our online course “Designing nudges”, or our complementary guide for applied behavioral science.

 

I’m convinced that behavioral science can do wonders for me. Now what?

Contact us and let’s talk about how we can help you get going.

We are a management consultancy applying behavioral insights to create business and societal impact. We use our expertise in behavioral economics and social psychology to design evidence-based solutions to critical challenges. We leverage scientific methods to identify interventions that will have long lasting, measurable effects.
