21 Dec 2023

The Cultures of Large and Small Pharmas, plus: Can They Overcome The “Productivity Paradox” and Seize the AI Moment?

David Shaywitz

Spurred by several questions I’ve received from students and trainees, today’s year-end column examines some of the ways large biopharma companies are fundamentally different from small biotech companies and startups. 

We’ll also ask whether biopharma can overcome new technology’s dreaded “productivity paradox” and learn, quickly, how to apply AI to accelerate drug development.

Large Pharmas vs Smaller Companies (Including Startups)

Very large pharmas (to borrow from Fitzgerald) are different from the rest of us.  To appreciate these distinctions, it’s helpful to examine how large and small biopharma companies (including startups) approach key challenges facing the industry.

Challenge 1: Most drug candidates don’t turn into approved products, and only a tiny fraction of molecules entering phase I emerge at the other end as FDA approved medicines.
Advantage: Large biopharmas

Arguably, the single most important advantage large biopharmas have is that their size enables them to pursue a portfolio approach and absorb losses that tend to sink smaller companies – it’s that simple. If you are J&J or Roche, with a market cap in the hundreds of billions, you can absorb the inevitable program failures; if you are a startup or small biotech, it’s much more difficult.  (Note: this is also a key reason why drug development is so expensive – the calculations need to factor in not only the cost of the rare successful program but also the amalgamated cost of the many, many setbacks.)
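To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch; the phase costs and attrition rates below are purely illustrative placeholders, not sourced industry figures, but they show how the price tag of the rare approval must carry the cost of everything that failed along the way.

```python
# Hypothetical, illustrative numbers only (not sourced industry figures).
# Cost per entering program ($M) and fraction of programs advancing, by phase.
phase_costs = {"preclinical": 5, "phase1": 20, "phase2": 60, "phase3": 250}
pass_rates  = {"preclinical": 0.35, "phase1": 0.60, "phase2": 0.30, "phase3": 0.60}

programs = 1000.0   # candidates entering preclinical development
total_spend = 0.0
for phase in ["preclinical", "phase1", "phase2", "phase3"]:
    total_spend += programs * phase_costs[phase]   # every program entering the phase incurs its cost
    programs *= pass_rates[phase]                  # only a fraction advance

print(f"Approvals: {programs:.0f}")
print(f"Total spend: ${total_spend:,.0f}M")
print(f"Fully loaded cost per approval: ${total_spend / programs:,.0f}M")
```

Even with these made-up inputs, the fully loaded cost per approval comes out several times the out-of-pocket cost of any single successful program.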

Challenge 2: Drug development requires flawless execution across a huge number of disparate steps
Advantage: Large biopharmas

Another key advantage big pharmas have is that they tend to have deep expertise across a range of areas, from chemistry to statistics to clinical development to marketing.  Moreover, their large size (at least in theory) increases the likelihood that big pharma programs get both the attention of vendors (like CROs), and discount pricing (for the same reason a large hospital system can negotiate more effectively with insurers than can solo practitioners).

While many startups are founded on the idea that they’ve identified a key obstacle – for instance, a traditionally “undruggable” target that they’ve figured out how to attack – the startup still needs to do all the other block-and-tackle activities required to make a product.  While service providers like contract manufacturing organizations increasingly enable startups (as well as larger companies) to outsource much of this work, operationally, there’s just so much to get right.

One manifestation of the broad focus of large pharmas can be seen in their approach to technology innovation.  I learned this the hard way after I arrived in an R&D technology strategy role at a large pharma, spoke in depth to researchers across the organization, and identified a number of unusually precocious digital innovators. Delighted by this talent, I proposed that the organization invest additional resources behind these stars and expand their individual efforts. 

Yet this turned out to be, from the perspective of top R&D leaders, including the head of R&D, exactly the wrong answer.  These innovators, I was told, were obviously on the right track and of course should be acknowledged, but the real strategic goal was to bring everyone in R&D up a notch.  A global improvement of even 5% in facility with emerging technologies, I learned, was considered far more useful to the organization than supercharging those who were already ahead. 

Incredulous, I asked the brilliant founder of a leading AI-driven biotech startup for a second opinion, and I was surprised to hear a similar perspective.  The founder told me that in the tech industry, “a single individual, or even a very small team, can leverage technology to move very quickly… and make huge amounts of progress.” 

But drug discovery, the founder continued, “is very much a team sport. A single individual or even a small team are rate limited by the pace at which they can do biology or chemistry experiments. Conversely, if you accelerate the entire organization…then that can be very value creating.” 

This mindset (which I understand, though have not yet fully embraced, as I continue to believe in the value of investing behind pioneers) may also explain why large pharmas are increasingly moving towards shared, enterprise platforms (“foundational enterprise capabilities,” in the words of consultants Lamare, Smaje, and Zemmel), and are leery of isolated tech solutions.

Challenge 3: Need to “pick winners”
Advantage: Neither

Biopharma remains an exception-based, hit-driven business, largely living off of infrequent, outsized successes. This is a domain ruled by the power law, not the normal “bell curve” distribution.
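For readers who want a feel for what “power law, not bell curve” means in portfolio terms, here is a tiny, purely hypothetical simulation (the distribution and parameters are assumptions for illustration, not a model of any real pipeline): a handful of outlier programs ends up carrying a disproportionate share of total value.

```python
# Purely illustrative simulation of a heavy-tailed ("power law") portfolio.
import random

random.seed(1)

# 100 hypothetical program payoffs drawn from a Pareto (heavy-tailed) distribution.
payoffs = sorted((random.paretovariate(1.2) for _ in range(100)), reverse=True)

top5_share = sum(payoffs[:5]) / sum(payoffs)
print(f"Top 5 of 100 programs account for {top5_share:.0%} of total portfolio value")
# Under a thin-tailed "bell curve" of payoffs, the top 5 would contribute only modestly more than 5%.
```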

To the consternation of all, our ability to identify the rare “winners” seems as elusive as ever (see here, here); many blockbusters have come from products that weren’t initially recognized as especially promising. Examples include Merck’s pembrolizumab (Keytruda) (as I discussed at length here), and Millennium’s bortezomib (Velcade), acquired in the LeukoSite transaction that was focused primarily on a different product, Campath (see here, also here).

On the other hand, the GLP-1 obesity products so much in the news these days emerged from decades of meticulous and deliberate work in both academia and leading diabetes companies, Lilly and Novo Nordisk. Notably, these pharmas also had the resources and conviction to conduct the essential but often daunting long-term cardiovascular outcome studies that discouraged many other companies (large and small) from investing in the field at all.

Even so, the magnitude of the drug effect – both in terms of weight loss and in terms of cardiovascular benefit – is likely well beyond what even most optimists imagined.  Not surprisingly, many pharmas that largely shunned obesity are now urgently trying to acquire their way into this market.

Given the importance of identifying “winners,” I’ve been struck by how many senior drug developers with whom I’ve spoken have confided to me that they think R&D strategy (in terms of what to go after) tends to be overrated. 

One veteran told me that while a strategy can be useful for attracting early investors to a startup, or facilitating communications in a larger organization, in practice, success tends to depend less on any particular strategy, and more on how astutely you respond to what you encounter (see also Challenge 5, below).

This skeptical and pragmatic attitude to strategy may represent the pharma equivalent of U.K. Prime Minister Harold Macmillan’s famous response when asked about “the most troubling problem” he faced during his tenure.  Macmillan’s answer: “Events, my dear boy, events.”

Challenge 4: Navigation of nascent science & initial prosecution of promising molecules
Advantage: Smaller biotechs (though less so in a down market)

A key advantage that belongs to (well-funded) startups and small biotechs is their exceptional focus and agility.  Because their aperture is typically so narrow, small companies tend to be exquisitely attuned to challenges their programs face, and can generally respond more rapidly, and adjust more nimbly, than large biopharmas.  There also tends to be a remarkable degree of organizational alignment – it’s much easier to get everyone to row in the same direction, since everyone is palpably invested in the same outcome.

Startups and small biotechs often have a relatively flat organizational structure, conducive to fluid communication and fast decisions.  In the presence of sufficient funding (obviously not a given in the current difficult environment), startup scientists can pursue novel science, and biotech development teams can respond to unforeseen challenges, with an urgency and flexibility that tends to be far more difficult to come by in large companies, with their elaborate decision procedures and rigid processes. 

In contrast, large biopharmas are astonishingly complex organizations.  They are unimaginably, almost anachronistically hierarchical. Information, like authority, cascades down, rather than diffuses across. Because of their size, there is extensive reliance upon, and deep reverence for, process (“trust the process” tends to be an earnest aspiration), and decisions often require not just consensus but also a stultifying number of preliminary meetings to ensure all proposals are thoroughly socialized, and all senior stakeholders are suitably aligned. 

One consequence: in large companies, decision-making tends to be both painfully slow and incredibly risk-averse, as Safi Bahcall in particular has documented (see here, here).

Effectively navigating intricate corporate structures also requires a facility with the sort of organizational power politics that authors such as Stanford’s Jeffrey Pfeffer and USC’s Kathleen Kelley Reardon astutely describe. 

Challenge 5: Exploitation of winners
Advantage: Large biopharmas

As difficult as it can ordinarily be for anything to gain real momentum in sprawling bureaucratic biopharmaceutical companies, their ability to execute effectively on a global scale when they actually hit upon something promising is extraordinary. Pfizer’s development of the COVID vaccine is one compelling example; Merck’s exploitation of pembrolizumab (Keytruda) is another. 

In these and other cases, once a large biopharma decides to go “all in” on something, and the opportunity seems authentically compelling (rather than desperate), the ability of these massive organizations to execute on a global scale is extraordinary to behold.  Everyone in the organization understands the opportunity and the imperative, and the result can be mind-blowing. 

Of course, the pursuit of promising data motivates and energizes biopharma companies of all sizes. The difference is that large pharmas are uniquely positioned to drive these programs forward at scale.

AI and the Biopharma Productivity Paradox

I couldn’t have asked for a better way to wrap up 2023 than to listen to Microsoft’s Peter Lee discuss GPT-4, and generative AI more generally, earlier this week at a Dean’s Lecture at Harvard Medical School.

Peter Lee, Corporate Vice President, Microsoft Research

Lee, readers will recall, co-wrote the book The AI Revolution in Medicine: GPT-4 and Beyond, together with Harvard professor Zak Kohane and veteran journalist Carey Goldberg, who were both in attendance. 

A year or so into the GPT-4 era, Lee seemed as excited by the promise of GPT-4, and as mystified by its mechanism, as he was when he first wrote the book (and when all three authors discussed it with me in May at Harvard’s Countway Library – video here, transcript here).  It’s abundantly clear that although we’re still in the earliest days of generative AI, the technology holds exceptional promise, and of course significant risk. 

Perhaps Lee’s most enduring message was one of the last points he made, citing a poignant and personal example that Kohane offered in the book.

“My first patient died in my arms,” Kohane wrote. “I was a freshly-minted doctor in a newborn intensive care unit, and despite maximal efforts with the best that medicine had to offer at the time, I had to hand a baby boy’s lifeless body to his parents within 24 hours of his birth.”

Kohane observed that “At the time, the death was an unavoidable tragedy.” Yet within a year, a new treatment approach was found to be effective in similar patients. 

“It became standard practice a year later in the very same nursery where my first patient died,” Kohane writes. “He would likely have survived if he had been born just a little later.”

Or if the therapy had arrived a year earlier.

Kohane acknowledged the many different steps required to bring a therapy forward. If AI can be applied productively to even a few of them, he wondered, how big a difference might that make in accelerating a treatment’s evaluation and approval? 

The story of the baby boy who was one year away from a lifesaving intervention is a highly resonant example. It points to the importance of all the different tasks that medical product approval requires – and accordingly, all the opportunities for optimization and improvement.  It reminds us of the importance of saving time – months matter, and a year or more of process improvement can be the difference between life and death.

Phrased differently: we tend to hope AI somehow comes up with new brilliant treatments.  But even if AI “just” accelerates paperwork and increases process efficiency, that boost could still meaningfully hasten the delivery of improved therapeutics to patients. 

Given the many areas of opportunity for improvement in both healthcare and biopharma, the pressing question is whether AI will actually drive rapid improvements in productivity.  

Top management consultants, naturally, tend to say “Yes, and leading companies have already demonstrated this, why are you lagging?”  

In biopharma at least, these assertions lack credibility.  

For example, when consultants enthuse aspirationally that “a GenAI model can be applied to a massive pharma molecule database that can identify likely cancer cures,” most experienced drug hunters and scientists will just roll their eyes.

Those with a historical perspective on technology remind us of the “productivity paradox”: realizing a new technology’s promised benefits has always taken longer than anticipated – i.e., think of Kahneman, and consult your priors.

With this in mind, I’ve explicitly discussed why, based on previous experience, we should cautiously manage our expectations for AI in the context of biopharma. 

Nevertheless, many experts hope and expect that this time will be different. Such earnest optimism was expressed for AI in healthcare delivery by UCSF’s Robert Wachter and Stanford’s Erik Brynjolfsson in the latest JAMA.

These authors argue that “the ability of the digital tools to rapidly improve and the capacity of organizations to implement complementary innovations that allow IT tools to reach their potential—are more advanced than in the past.” 

They also emphasize (as I’ve described in detail here and here) the importance of reinventing processes, noting that “great gains will only come when implementation is coupled with significant changes in the design of the work.” 

Lamare, Smaje, and Zemmel also explicitly emphasize the need for companies to “fundamentally rewire” how they operate.

I appreciate the optimism of Wachter and Brynjolfsson, and recognize the extraordinary promise and rapid improvements in AI. At the same time, I am mindful of the magnitude of the intrinsic biologic, human, and organizational complexities that must be addressed in biomedicine.

In biopharma, a question of particular interest is whether AI can help us not only fail more efficiently but also succeed more frequently – i.e. increase our probability of success by improving how we select targets, indications, and patient populations.  Already, there are seemingly hundreds of startups all claiming they can help with this (I’ve spoken with several in just the last few days). 

These assertions – that an algorithm or model can impact the overall probability of success – can be tricky to evaluate. Given the many ways a drug can fail, it’s going to be challenging for early adopters of AI methodologies to critically assess the impact (if any) the AI is having. 
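A rough simulation illustrates the problem; all of the numbers here are assumptions for the sake of argument. Even if an AI approach genuinely lifted the probability of success from, say, 10% to 15%, an organization observing only a couple of dozen programs per arm would often be unable to distinguish the improvement from noise.

```python
# Illustrative simulation with assumed parameters (not real-world success rates).
import random

random.seed(0)
baseline_pos, ai_pos = 0.10, 0.15   # hypothetical true probabilities of success
n_programs = 20                      # programs observed in each arm
trials = 20_000

ai_looks_better = 0
for _ in range(trials):
    base_hits = sum(random.random() < baseline_pos for _ in range(n_programs))
    ai_hits = sum(random.random() < ai_pos for _ in range(n_programs))
    if ai_hits > base_hits:
        ai_looks_better += 1

print(f"AI-assisted portfolio shows more successes in {ai_looks_better / trials:.0%} of simulated comparisons")
# With so few programs per arm, even a genuine improvement frequently fails to show up in the data.
```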

Yet, how exciting to consider the possibility that at least in some cases, it might be possible to leverage existing data to make better decisions than the typical eminence-based approach.

More generally, the challenge and opportunity for R&D leaders today is figuring out how to effectively integrate emerging biological modalities with powerful but still nascent digital and data tools, in a fashion that leverages these methods without fetishizing them.

Amy Abernethy

A final note on the challenge of developing health technology solutions: the brilliant Amy Abernethy (well-known to regular readers of this column) announced this week that she’ll be stepping away from her role as the president of product development and chief medical officer at Verily, essentially to approach the challenge of evidence generation from a different perspective. 

The departure of Abernethy represents a tremendous, possibly catastrophic loss for Verily and its aspirations to demonstrate the ability to deliver concrete solutions in healthcare, including biopharma. Despite a preponderance of super smart engineers, the company just can’t seem to convert this brilliance into tangible commercial healthcare products.

As one health tech leader tartly told me, “Verily is such a hot mess. Never has a company been so well funded for so long with no clear mission as to why it even exists.”

And now it feels like the key experiment has been done: seeing whether the transplantation of a new visionary nucleus – Abernethy – into the existing structure could help the organization at last become a competitive health product company.  

The answer, sadly, seems to be: No.

Nevertheless, both Verily and Abernethy are right to recognize the promise of emerging technology to address enduring challenges in healthcare delivery and drug development. 

Let’s hope that in 2024, we spend less time fantasizing, catastrophizing, and rhapsodizing about the extent to which AI ethereally “changes everything,” and instead use our energy to develop more tangible examples of AI palpably improving something in the way new medicines are discovered, developed, and delivered to patients.

Best wishes for a creative, joyful, peaceful, and impactful 2024!
