16 Mar 2020

The Value And Necessity Of Tinkering

David Shaywitz

This week, I reviewed for the Wall Street Journal a pair of books about the increasing use of experimentation by businesses and other organizations: Experimentation Works, by Harvard Business School professor Stefan Thomke, and The Power of Experiments, by Michael Luca and Max Bazerman, also of Harvard Business School. 

These books in some ways represent the sequel to one of my favorite books about experimentation (both its uses and limitations): Uncontrolled, by Jim Manzi (his recent TechTonics podcast here; additional useful links in show notes).

There are two related topics that I didn’t have the space to cover in the WSJ review, but which I thought would be of particular interest to the TR biopharma readership; both are connected to the necessity for, and value of tinkering.

Implementation Gap

A recurrent theme of this column has been the challenge of implementation – the difficulty of ensuring a promising idea or technology finds meaningful real-world expression.  We see a particularly striking instance in the history of experimentation, in a study that’s often cited as the first clinical trial: James Lind’s scurvy experiment, an example that appears in both books.

The year was 1747, and James Lind, a surgeon in the British Royal Navy, was desperately seeking a treatment for scurvy, a debilitating disease that killed an estimated 2 million sailors between 1500 and 1800, and which we now know is caused by vitamin C deficiency.  Lind selected 12 afflicted sailors, divided them into six pairs, and gave each pair a different dietary supplement – oranges and lemons for one group, cider for another, seawater for a third.  The group receiving the citrus was protected from scurvy, leading to the inclusion of lemon juice in sailors’ daily rations – 50 years later.

Why the delay?  As Thomke explains, Lind assumed his results reflected the acidity of the solution, and “tried to create a less perishable remedy by heating the citrus juice into a concentrate, which destroyed the vitamin C.”  

The dual lesson is Thomke’s throughline: experimentation can drive exceptional value for organizations, from the Royal Navy to Google, but it’s really hard to get right, and there are many opportunities to stumble along the way – especially when your conceptual model ostensibly explaining the results is uncertain or, as in this case, entirely incorrect.

For a more nuanced and fulfilling explanation of the experiment, and a deeper understanding of the historical context, check out two episodes of Dr. Adam Rodman’s unfailingly captivating “Bedside Rounds” podcast, which focuses on the intersection of medicine, history, and culture: this episode, on the history of the randomized clinical trial, and this episode, on the four humors – especially relevant given that Lind attributed scurvy to an imbalance of humors, which influenced (for the worse) his interpretation of his data.

Incrementalism

The A/B experimentation discussed in both books isn’t meant to apply to all innovation – it “may not be the best way to evaluate a completely new product or a radically different business model,” I wrote, and can’t reliably anticipate or assess Clay Christensen-style disruptive innovation. 

That’s okay.  A remarkable amount of innovation and improved productivity stem not from an original innovation, but from all the work of front-line innovators seeking to make the product better – the “lead users” who von Hippel valorizes, practicing the “learning by doing” Bessen champions (see here and references therein).

A 2006 von Hippel paper, for example, revealed that 60% of novel indications for existing medications originated from practicing clinicians. (I suspect the percentage has subsequently gone down, since much of this exploration is now pursued more deliberately by the drugmakers themselves as part of a product’s so-called “life-cycle management.”) More generally, it’s been estimated that 77% of economic growth is attributable to improvements in existing products.

As I wrote in a 2011 tribute to incrementalism, it’s worthwhile to aim for revolutionary improvements – the polio vaccine is clearly much better than even the most refined iron lung — but:

“The unfortunate truth is that such revolutionary change is exceedingly rare, and I worry that in anticipating, expecting, and benchmarking our expectations against a magic bullet, we may be underestimating the value of incremental change, evolutionary advances that have nevertheless contributed in a significant and largely underappreciated way to our improved treatments of a range of ailments.”

Medicine, as a domain, is associated in the public mind with breakthrough innovations like antibiotics in World War II, yet it’s the less-heralded “incremental steps that produce sustained progress,” according to physician-author Atul Gawande, whom Thomke cites.

Incremental innovation is sometimes disparaged, wrongly, as nibbling around the edges.  People pursuing such strategies are sometimes dismissed as excessively conservative, insufficiently bold.  But this critique ignores much of the history of where innovation actually comes from.

Among the most compelling examples of incremental innovation is the remarkable progress in pediatric leukemias, discussed with characteristic eloquence by pediatric cardiologist Darshak Sanghavi (now chief medical officer of UnitedHealthcare’s Medicare & Retirement, the largest U.S. commercial Medicare program) in Slate in 2008.  “Between the early 1970s and the late 1990s,” he writes, “the long-term survival rate of children with leukemia skyrocketed from less than 20 percent to around 80 percent.” 

How?

“The leukemia doctors saved lives simply by refining the use of old-school drugs like doxorubicin and asparaginase. Over the course of almost a dozen clinical trials, they painstakingly varied the doses of these older drugs, evaluated the benefit of continuing chemotherapy in some kids who appeared to be in remission, and tested the benefit of injecting drugs directly into the spinal column. The doctors gradually learned what drug combinations, doses, and sites of injection worked best. And they kept at it. With each small innovation, survival rates crept forward a bit—a few percent here and there every couple of years—and over decades those persistent baby steps added up to a giant leap.”

Sanghavi notes that while “we’re far more likely to hear exaggerated tales of breakthrough new drugs… it’s the leukemia story that’s the historical norm.”  He cites a 70% reduction in the mortality of tuberculosis that occurred in the pre-antibiotic era, “due largely to careful studies of nutrition and hygiene,” and a 50% reduction in deaths from heart disease between 1980 and 2000, “almost entirely from the use of existing medicines and surgical treatments.” (You can listen to our recent Tech Tonics interview with Sanghavi here.)

Bottom line

We’re entranced by the idea of achieving medical progress through magic bullets, disruptive innovations that appear on the scene and immediately change everything.  The reality is that progress usually occurs far more gradually.  The effective implementation of potentially transformative technologies (whether citrus fruit or high-throughput DNA sequencing) can take a while to figure out and become widely accepted.  A remarkable amount of improvement in human health can come from, and has come from, the deliberate tinkering of inquisitive front-line providers, relentlessly focused on improving, increment by increment, the care of their patients.
