Beware of the Expert Fallacy, But Don’t Fall Into The Cynicism Trap

David Shaywitz

In December, a team led by two University of Pennsylvania scholars, psychologist Angela Duckworth (best known as the author of Grit) and behavioral economist Katy Milkman, published in Nature the results of a colossal study on behavior change.

The researchers evaluated the impact of a huge range of behavioral interventions – 54 in all – each thought to potentially influence gym attendance over a four-week period.

The research – termed a “megastudy” – involved 61,293 participants and 30 scientists from 15 different U.S. universities; each of the 54 conditions was studied in a group of at least 455 participants, and the average group size was over 1,000.

The authors’ main emphasis in the paper was methodological: successfully executing such a massive field experiment, they contend, should encourage other scientists to take advantage of this powerful approach. “By enabling direct comparisons of diverse intervention ideas,” they write, “megastudies can accelerate the generation and testing of new insights about human behavior and the relevance of these insights for public policy.”

Angela Duckworth, professor of psychology, University of Pennsylvania

What did they learn about their interventions? These data were somewhat disappointing. Nearly half the approaches worked better than the negative control, but the effects, while statistically significant, were relatively small. Moreover, few approaches proved discernibly better than what was essentially a standard-of-care (active) control, an approach consisting of three evidence-based elements:

  • Planning prompts: Participants were encouraged to plan the specific dates and times of their exercise (this approach, called “implementation intentions,” has also been used to nudge voter turnout, in work spearheaded by Todd Rogers – see here).
  • Reminders: Participants were texted reminders at their scheduled times.
  • Microincentives: Participants received a credit, worth about 22 cents, for each visit.

The supplemental intervention that seemed to perform best was offering microincentives (worth 9 cents) to participants who had missed a workout, as an encouragement to return. This resulted in a 27% improvement over the placebo control (amounting to an extra 0.4 days of exercise per week), and a 16% improvement over the active control.

Unfortunately, even these relatively modest benefits dropped off rapidly after the study concluded; in other words, the interventions did not seem to produce durable improvements in participants’ behavior.

Harvard Business School professors Michael Luca and Max Bazerman summarized (prepublication) these results in their book The Power of Experiments (my WSJ review here): “behavioral interventions that led to short-term gains were less effective when looked at over a multiple-month span.”

As professor Duckworth succinctly told Luca and Bazerman: “Behavior changes are really *#$@ing hard.”

Expert Fallacy

Perhaps the most interesting aspect of the megastudy was the inclusion of predictions of intervention efficacy by a range of impartial assessors, including behavior science professors and practitioners. Not only were there “no robust correlations” between predicted and observed treatment effects, but the predictions anticipated a level of benefit that was “9.1 times too optimistic.” In other words: experts thought the interventions were going to work far, far better than was ultimately observed.

These results are consistent with the large and growing body of literature on the limitations of expert prediction (see this magnificent Louis Menand discussion of Philip Tetlock’s scholarship). The megastudy also supports expansive literature on the epidemic of overconfidence (one of Nobel laureate Daniel Kahneman’s many areas of contribution – see here). 

I’ve discussed the expert fallacy and the challenge of overconfidence in a number of Wall Street Journal book reviews (including Rosenzweig’s Left Brain, Right Stuff, and The Invisible Gorilla, by Chabris and Simons). I’ve also explored these issues in the context of biopharma – see this Financial Times op-ed with Nassim Taleb, and this piece in Forbes.

Between these biases, and the burgeoning literature on irreproducibility, it’s tempting to embrace the famous William Goldman observation about Hollywood, “Nobody knows anything,” and quickly find yourself in a very dark place.

But, it turns out, cynicism is probably not the right answer either.

The Cynicism Trap

You don’t have to spend much time on social media – or in late-night college bullshit sessions – to appreciate the power and appeal of cynicism, defined in an academic paper as “a negative appraisal of human nature, a belief that self-interest is the ultimate motive behind all human actions, even the seemingly good ones, and that people will go to any lengths to satisfy it.”

As Stanford psychologist Jamil Zaki reviews in a fabulous, short TED talk (“How to escape the cynicism trap” – here), while we may not enjoy the company of cynics, we tend to think they are smarter, and that their grim view of others makes them better at detecting dishonesty, for example, and hence less likely to get ripped off (not true, as Zaki reveals).

Jamil Zaki, associate professor of psychology, Stanford University

There’s also the contrasting concern that a less cynical, more hopeful and generally positive attitude, as “Freakonomics” host Steven Dubner notes, “can be seen as a sign of weakness,” and “as something that might be exploited.”

Considerable research has explored the notion of “depressive realism,” the idea that those with a negative view of the world are the ones who are seeing it most clearly, as Stephanie Bucklin nicely discusses here.

Fortunately – at least for the many optimists and aspiring optimists – the dismal view may not be right either. A fascinating 2018 study, for example, examining global data from over 200,000 people, found that while most of us “tend to believe in cynical individuals’ cognitive superiority,” the numbers don’t bear this out.

Rather, the data show that “cynical (vs. less cynical) individuals generally do worse on cognitive ability and academic competency tasks.” 

Moreover, across a range of cultures examined, “competent individuals held contingent attitudes and endorsed cynicism only if it was warranted in a given sociocultural environment. Less competent individuals embraced cynicism unconditionally.”

In other words, it’s the people who know less who tend to be reflexively cynical, while those who know more are selectively cynical. 

As the authors suggest, this could represent an adaptive posture, preventing those most vulnerable from getting taken advantage of. Causality might also work in the other direction, and those who are reflexively cynical might not open themselves up to opportunities that could expand their knowledge base.

It’s this last point – the receptivity to opportunity – that represents perhaps the most compelling argument for resisting the “cynicism trap.” It’s something I’ve heard tech VCs, in particular, emphasize: the importance of being receptive to possibility, and to the outsized potential of radical new ideas. You might be wrong, they say, but you can’t lose more than the value of the investment; yet if you’re right, your upside can be almost unlimited, a return that reflexive cynics will never realize.

As VC Balaji Srinivasan points out here, every significant advance in the technology revolution was originally dismissed by cynical industry experts. There’s a reason for the popular Silicon Valley aphorism, “pessimists sound smart; optimists make money.”

The advantages of an optimistic mindset extend far beyond the financial.

A more positive, less cynical view also enables us to derive greater joy from engagement with other people, to share in their happiness and embrace their possibility. 

Both cynical and hopeful stories, Zaki points out, “can become self-fulfilling,” and he urges us to deliberately choose the more optimistic path. “We can be skeptical,” he says, “demanding evidence before we believe in people — but hopeful, knowing they can change for the better.”

Similarly, as Penn’s Duckworth notes in a (captivating) discussion with Steven Dubner (here), she isn’t advocating for reflexive optimism, sometimes called “toxic positivity.” 

Instead, she says, “It’s both possible to have a high center of gravity when it comes to positive emotion and to be pretty stable around that, and I think it’s possible to allow your attentional field to encompass these other things.” 

What’s critical for both optimists and pessimists to recognize, Duckworth argues, is that we have choices:

“There are virtuous and vicious cycles; being unhappy, being negative, and always being hyper-critical can really be a spiral downward. If you do pay attention to that, you could say, ‘Hey, look over here, it’s this virtuous upward spiral. I think I’m going to join that one.’”

In essence: happiness is a choice – so choose it.
