Author: Eben Harrell 

When Leslie John, an associate professor at Harvard Business School, arrived at work on the morning of the U.S. presidential election between Donald Trump and Hillary Clinton, she was worried. John is an expert in behavioral decision research and studies the various innate flaws and biases that impede human reasoning. As a supporter of Clinton, she wondered whether the same cognitive traps that she studies in a laboratory could be leading to overconfidence about the likelihood of a Clinton victory. “Everyone I spoke with pointed me to the same sources: Democratic and Republican pollsters, financial and prediction markets. Essentially every forecaster in the public record was predicting a Clinton win,” John says. “Yet here we are.”

The morning after the election, I spoke with John to understand how insights from the behavioral sciences can help explain one of the greatest upsets in the history of democratic elections, and the appeal of a candidate whom few expert commentators believed could win.

HBR: Leslie, the pre-election polls and expert predictions weren’t just wrong. Most of them were wildly inaccurate. Yet we are told that we live in an age where data analytics is providing unprecedented insight into the future. What led to that disconnect?

John: It’s quite humbling, isn’t it? We tend to think that because we now routinely use algorithms and computer-generated predictions, the results will be unbiased. But there are two problems with that thinking. The first is that, at the end of the day, humans build the algorithms, and all sorts of biases can be introduced at the point of construction. The second is that the inputs — in this case, the polling — may have been flawed. I could see how Trump supporters who were anti-establishment might have viewed pollsters as part of the establishment and refused to engage with them. Another factor that could have produced response bias in the polling is what behaviorists call “socially desirable responding” — you can imagine women being reluctant to admit that they were going to vote for Trump after the footage surfaced of his bragging about sexual assault, for example.

So the expert commentators — on both sides of the aisle — were working with bad polling information. What else might have clouded their vision?

Overconfidence comes to mind. There’s tons of research showing that people are overconfident in their beliefs. We think our prediction abilities are better than they are. And if you add to overconfidence a desire for certain outcomes — for instance, I think most elite commentators were anti-Trump — it magnifies the problem.

There’s a classic social psychology paper, “Biased Assimilation and Attitude Polarization,” that found that when you want to believe something and you are presented with evidence, you interpret that evidence as supporting your pre-established belief. In the study, researchers sorted participants into groups based on whether they supported the death penalty. They then showed both groups two pieces of evidence — one in support of capital punishment and one against it. People found the evidence that confirmed their belief to be far more convincing. In the end, the experiment polarized both groups further, exactly the opposite of what you might expect from presenting “both sides” of the argument.

It’s interesting that during the campaign many commentators scorned Trump supporters for having blind spots, yet it turns out that those commentators were prone to the same cognitive biases.

Totally. And I also found it interesting that the more the media pointed out inconsistencies and lies in Trump’s statements, the more it seemed to spur the engagement of his followers. Academics have identified a phenomenon called “psychological reactance”: when we feel someone is trying to tell us what to think or do, we react by doing exactly the opposite of what we feel we are being told to do.

There’s a whole other strand of research that’s relevant here. Cameron Anderson and Don Moore at UC Berkeley have demonstrated that overconfidence makes people look more competent to others and earns them higher status and influence — and that even when their overconfidence is exposed, they are not socially punished for it. When you combine Trump’s confidence with his displays of dominance — for instance, his incessant interrupting of Clinton during the debates — you can understand why people would believe him.

Trustworthiness was a big issue for Clinton but not as much for Trump. Do you have any idea why?

I’ve done research showing that people who reveal information are consistently seen as more trustworthy than people who decline to disclose it — even when what they reveal is wrongdoing. We have a paper showing that job candidates who disclose on a form that they committed a crime are viewed as more trustworthy than candidates who opt not to answer the question at all. With Clinton there were so many instances where she wasn’t forthcoming that she came across as a hider, which I think explains in part why so many Americans viewed her as untrustworthy.

Meanwhile Trump was also extremely private about some things, such as his tax returns. But in his case he had a few key acts of proactive disclosure that perhaps made people forget about the situations where he declined to disclose. What’s more, the fact that people felt that he “told it like it is” — essentially, that he was forthcoming about beliefs that might garner him social stigma — enhanced his reputation for trustworthiness. Saying risqué things can actually give you great bang for your buck when it comes to trust — though of course, it also has its risks.

The example that put these two differing approaches together in my mind was when Clinton had pneumonia. She clearly was sick but just didn’t address the issue and denied being unwell until video emerged of her fainting. Trump, on the other hand, proactively released portions of his medical records.

Right, but those medical records were highly curated and incomplete.

Ah, but there’s another interesting cognitive flaw in play there. We don’t question information when it is put in front of us; we aren’t very sophisticated about considering where it comes from. Information from a biased source usually isn’t given the extra scrutiny it deserves. Instead, we’re prone to taking evidence at face value. This is part of a broader tendency to think narrowly when evaluating information and making decisions.

For instance, if Trump had wanted to present an accurate picture of his health, he would have randomly sampled bits of information from his entire health record and released those, or released his entire medical history. But that’s not what he did — he cherry-picked. He released what he called his health record, but it obviously wasn’t a complete record. That’s not what people perceived, though. They figured he had been forthcoming, while Clinton was a hider and thus not trustworthy.

So, what’s the message here for those humbled by this result? Can they avoid being blindsided in the future?

There is some good research about what makes for good forecasters and how to improve forecasting. But my general sense is that biases are very robust. It’s really hard to get rid of overconfidence. In one study, researchers asked participants trivia questions with specific numerical answers — for example, “How many Americans have a passport?” The task was to specify confidence intervals, and again and again people’s intervals were far too narrow. This is classic overconfidence bias. In that experiment, the researchers tried a heavy-handed intervention: they told people that their confidence intervals would probably be too narrow and that they should make them much wider. Yet people still overestimated the accuracy of their answers. One extreme solution is to delegate decisions to people without a vested interest in the result.

What can Clinton and Trump supporters expect in the coming weeks as the results begin to sink in?

Research shows that bad things influence us more than good things — we feel greater despair at bad news than joy at good news. By that logic, this result will be more hurtful to Clinton supporters than joyful for Trump supporters. But there might be countervailing factors, such as the fact that people experience greater joy when they share happy experiences with others as opposed to alone. And Trump supporters obviously have a joyful experience to share. I think all we can say for sure is that many people, even many Trump supporters, will remain surprised by this result for some time. I feel like I saw it coming, but at the end of the day, I need to take my own postmortem on this with a grain of salt. I’m not immune to the very human tendency to believe you “knew it all along” — what behavioral scientists call hindsight bias.

Author: Eben Harrell, Senior Editor at Harvard Business Review
Title: Blindsided by Trump’s Victory? Behavioral Science Explains
Source: Harvard Business Review

