17 Jun 2023

Learning From History: How to Think About the Technology of the Moment

David Shaywitz

Generative AI, the transformative technology of the moment, exploded onto the scene with the arrival in late 2022 of ChatGPT, an AI-powered chatbot developed by the company OpenAI.

After only five days, a million users had tried the app; after two months: 100 million, the fastest growth ever seen for a consumer application. TikTok, the previous record holder, took nine months to reach 100 million users; Instagram had taken 2.5 years.

Optimists thrill to the potential AI offers humanity (“Why AI Will Save The World”), while doomers catastrophize (“The Only Way To Deal With The Threat From AI? Shut It Down”). Consultants and bankers offer frameworks and roadmaps and persuade anxious clients they are already behind. Just this week, McKinsey predicted that generative AI could add $4.4 trillion in value to the global economy. Morgan Stanley envisions a $6 trillion opportunity in AI as a whole, while Goldman Sachs says 7% of jobs in the U.S. could be replaced by AI.

The one thing everyone — from Ezra Klein at the New York Times to podcasters at the Harvard Business Review — seems to agree on is that generative AI “changes everything.”

But if we’ve learned anything from previous transformative technologies, it’s that at the outset, nobody has any real idea how these technologies will evolve, much less change the world. When Edison invented the phonograph, he thought it might be used to record wills. The internet arose from a government effort to enable decentralized communication in case of enemy attack.

As we start to contemplate – and are thrust into – an uncertain future, we might take a moment to see what we can learn about technology from the past.

***

Yogi Berra, of course, observed that “it is difficult to make predictions, especially about the future,” and forecasts about the evolution of technology bear him out. 

In 1977, Ken Olsen, President of Digital Equipment Corporation (DEC), told attendees of the World Futures Conference in Boston that “there is no reason for any individual to have a computer in their home.” In 1980, the management consultants at McKinsey projected that by 2000, there might be 900,000 cell phone users in the U.S.; they were off by over 100-fold; the actual number was above 119 million. 

On the other hand, much-hyped technologies like 3D TV, Google Glass, and the Segway never really took off. For others, like cryptocurrency and virtual reality, the jury is still out.

AI itself has been notoriously difficult to predict. For example, in 2016, AI expert Geoffrey Hinton declared:

“Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down, so doesn’t realize there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists because it’s going to get a lot more experience.  It might be 10 years, but we’ve got plenty of radiologists already.”

Writing five years after this prediction, in his book The New Goliaths (2022), Boston University economist Jim Bessen observes that “no radiology jobs have been lost” to AI, and in fact, “there’s a worldwide shortage of radiologists.”

James Bessen, Executive Director of the Technology & Policy Research Initiative, Boston University.

As Bessen notes, we tend to drastically overstate job losses due to new technology, especially in the near term. He calls this the “automation paradox,” and explains that new technologies (including AI) are “not so much replacing humans with machines as they are enhancing human labor, allowing workers to do more, provide better quality, and do new things.” 

Following the introduction of the ATM, the number of bank tellers employed actually increased, Bessen reports. The same held for cashiers after the introduction of the bar code scanner, and for paralegals after the introduction of litigation-focused software products.

The reason, Bessen explains, is that as workers become more productive, the costs of what they’re making tend to go down, which often unleashes greater consumer demand – at least up to a point.

For instance, automation in textiles enabled customers to afford not just a single outfit, but an entire wardrobe. Consequently, from the mid-nineteenth century to the mid-twentieth century, “employment in the cotton textile industry grew alongside automation, even as automation was dramatically reshaping the industry,” Bessen writes. 

Yet after around 1940, automation continued to improve the efficiency of textile manufacturing, but consumer demand was largely sated; consequently, he says, employment in the U.S. cotton textile industry has decreased dramatically, from around 400,000 production workers in the 1940s to less than 20,000 today.

Innovation image created on DALL-E.

The point is that if historical precedent is a guide, the introduction of a new technology like generative AI will be accompanied by grave predictions of mass unemployment, as well as far more limited, but real examples of job loss, as we’ve seen in recent reporting. In practice, generative AI is likely to alter far more jobs than it eliminates and will likely create entirely new categories of work.

For example, Children’s Hospital in Boston recently advertised for the role of “AI prompt engineer,” seeking a person skilled at effectively interacting with ChatGPT.

More generally, while it can be difficult to predict exactly how a new technology will evolve, we can learn from the trajectories previous technological revolutions have followed, as economist Carlota Perez classically described in her 2002 book, Technological Revolutions and Financial Capital.

Carlota Perez, Honorary Professor at the Institute for Innovation and Public Purpose (IIPP) at University College London.

Among Perez’s most important observations is how long it takes to realize “the full fruits of technological revolutions.” She notes that “two or three decades of turbulent adaptation and assimilation elapse from the moment when the set of new technologies, products, industries, and infrastructures make their first impact to the beginning of a ‘golden age’ or ‘era of good feeling’ based on them.” 

The Perez model describes two broad phases of technology revolutions: installation and deployment. 

The installation phase begins when a new technology “irrupts,” and the world tries to figure out what it means and what to do with it. She describes this as a time of “explosive growth and rapid innovation,” as well as what she calls “frenzy,” characterized by “flourishing of the new industries, technology systems, and infrastructures, with intensive investment and market growth.” There’s considerable high-risk investment into startups seeking to leverage the new technology; most of these companies fail, but some achieve outsized, durable success.

It isn’t until the deployment phase that the technology finally achieves wide adoption and use. This period is characterized by the continued growth of the technology, and “full expansion of innovation and market potential.” Ultimately, the technology enters the “maturity” stage, where the last bits of incremental improvement are extracted. 

As Perez explained to me, “A single technology, however powerful and versatile, is not a technological revolution.” While she describes AI as “an important revolutionary technology … likely to spawn a whole system of uses and innovations around it,” she’s not yet sure whether it will evolve into the sort of full-blown technology revolution she has previously described.

One possibility, she says, is that AI initiates a new “major system” – AI and robotics – within an ongoing information and communication technology revolution.  

At this point, it seems plausible to imagine we’re early in the installation stage of AI (particularly generative AI), where there’s all sorts of exuberance, and an extraordinary amount of investing and startup activity. At the same time, we’re frenetically struggling to get our heads around this technology and figure out how to most effectively (and responsibly) use it.

This is normal. 

Technology, as I wrote in 2019, “rarely arrives on the scene fully formed—more often it is rough-hewn and finicky, offering attractive but elusive potential.”

As Bessen has pointed out, “invention is not implementation,” and it can take decades to work out how best to use something novel. “Major new technologies typically go through long periods of sequential innovation,” Bessen observes, adding, “Often the person who originally conceived a general invention idea is forgotten.”

The complex process associated with figuring out how to best utilize a new technology may account, at least in part, for what’s been termed the “productivity paradox” – the frequent failure of a new technology to impart significant productivity improvement. We think of this frequently in the context of digital technology; economist Robert Solow wryly observed in a 1987 New York Times book review that “You can see the computer age everywhere but in the productivity statistics.”

However, as Paul A. David, an economic historian at Stanford, noted in his classic 1990 paper, “The Dynamo and the Computer,” a remarkably similar gap was present a hundred years earlier, in the history of electrification. David writes that at the dawn of the 20th century, two decades after the invention of the incandescent light bulb (1879) and the installation of Edison central generating stations in New York and London (1881), there was very little economic productivity to show for it.

David goes on to demonstrate that the simple substitution of electric power for steam power in existing factories didn’t really improve productivity very much. Rather, it was the long subsequent process of iterative reimagination of factories, enabled by electricity, that allowed the potential of this emerging technology to be fully expressed.  

A similar point is made by Northwestern economic historian Robert Gordon in his 2016 treatise The Rise and Fall of American Growth. Describing the evolution of innovation in transportation, Gordon observes that “most of the benefits to individuals came not within a decade of the initial innovation, but over subsequent decades as subsidiary and complementary sub-inventions and incremental improvements became manifest.”

As Bessen documents in Learning by Doing (2015), using examples ranging from the power loom (where efficiency improved by a factor of twenty), to petroleum refinement, to the generation of energy from coal, remarkable improvements occurred during the often-lengthy process of implementation, as motivated users figured out how to do things better — “learning by doing.”

Eric von Hippel, professor, MIT Sloan School of Management

Many of these improvements (as I’ve noted) are driven by what Massachusetts Institute of Technology professor Eric von Hippel calls “field discovery,” involving frontline innovators motivated by a specific, practical problem they’re trying to solve.

Such innovative users—the sort of people whom Judah Folkman labeled “inquisitive physicians”—play a critical role in discovering and refining new products, including in medicine; a 2006 study led by von Hippel of new (off-label) applications for approved new molecular entities revealed that nearly 60% were originally discovered by practicing clinicians.

***

What does this history of innovation mean for the emerging technology of the moment, generative AI? 

First, we should take a deep breath, and recognize that we are in the earliest days of technology evolution, and nobody knows how it’s going to play out. Not the experts developing it, not the critics bemoaning it, not the consultants trying to sell work off our collective anxiety around it.

Second, we should acknowledge that the full benefits of the technology will take some time to appear. Expectations of immediate productivity gains from simply plugging in AI seem naïve: the dynamo-for-steam substitution all over again. While there are clearly some immediate uses for generative AI, the more substantial benefits will likely require continued evolution of both technology and workflow processes.

Innovation image created on DALL-E.

Third, it’s unlikely that AI will replace most workers, but it will require many of us to change how we get our jobs done – an exciting opportunity for some, an unwelcome obligation for others.  AI will also create new categories of work, and introduce new challenges for governance, ethics, regulation, and privacy.

Fourth, and perhaps most importantly: As mind-blowing as generative AI is, the technology is not magic. It doesn’t descend from the heavens (or Silicon Valley), deus ex machina, with the ability to resolve sticky ethical challenges, untangle complex biological problems, and generally ease the woes of humanity. 

But while technology isn’t a magic answer, it has proved a historically valuable tool, driving profound improvements in the human condition and enabling tremendous advances in science. The invention of the microscope, the telescope, and calculus all allowed us to better understand nature, and to develop more impactful solutions.

Technology changes the world in utterly unexpected and unpredictable ways. 

How exciting to live in this moment, and to have the opportunity — and responsibility — to observe and shape the evolution of a remarkable technology like generative AI. 

Yes, there are skeptics. I have a number of friends and colleagues who have decided to sit this one out, reflexively dismissing the technology because they’ve heard it hallucinates (it does), or because of privacy concerns (a real worry), or because they’re turned off by the relentless hype (I agree!).

But I would suggest that we owe it to ourselves to engage with this technology, familiarize ourselves, through practice, with its capabilities and limitations. 

We can be the lead users generative AI — like all powerful but immature transformative technologies — requires to evolve from promise to practice.

14 Jun 2023

Immunotherapies for Cancer and More: Aaron Ring on The Long Run

Aaron Ring is today’s guest on The Long Run.

Aaron is an associate professor of immunobiology at Yale University for a little while longer. He’s moving his lab to the Fred Hutchinson Cancer Center in Seattle in the summer of 2023.

Aaron Ring, Fred Hutchinson Cancer Center; founder, Simcha Therapeutics, Seranova Bio, Stipple Bio

Though still early in his scientific career, Aaron has already done some fascinating work in protein engineering and immunology. He has founded three startup companies to translate the research from his lab – Simcha Therapeutics, Seranova Bio, and Stipple Bio. Simcha is working on an engineered form of IL-18 for the treatment of cancer, while Seranova Bio is using technology to identify auto-antibodies that might point the way to new approaches to treat people with autoimmune diseases, cancer, and perhaps neurological diseases.

Timmerman Report subscribers can go back and read a startup profile I did of Simcha back in January 2022 to get the gist. The engineered IL-18 has shown comparable monotherapy efficacy in animals to PD-1 inhibitors, and it has been able to raise the bar in combination with those standard cancer therapies. SR One led a $40 million Series B financing of the company in 2022, and was joined by BVF Partners, Samsara BioCapital, Rock Springs Capital, ArrowMark Partners, and Logos Capital among others. Foresite Capital and A16Z have backed Aaron’s other ventures.

In this conversation we talked about how Aaron developed his interest in science, how he thinks about which problems to go after, and using the new tools of biology and the data they throw off to develop better therapies.


Now, please join me and Aaron Ring on The Long Run.

4 Jun 2023

Pharma R&D Execs Offer Extravagant Expectations for AI But Few Proof Points

David Shaywitz

As the excitement around generative AI sweeps across the globe, biopharma R&D groups (like most everyone else) are actively trying to figure out how to leverage this powerful but nascent technology effectively, and in a responsible fashion.

In separate conversations, two prominent pharma R&D executives recently sat down with savvy healthtech VCs to discuss how generative AI specifically, and emerging digital technologies more generally, are poised to transform the ways new medicines are discovered, developed, and delivered.

The word “poised” is doing quite a lot of work in the sentence above. Both conversations seamlessly and rather expertly blend what’s actually been accomplished (a little bit) with the vision of what might be achieved (everything and then some).

The first conversation, from the a16z “Bio Eats World” podcast, features Greg Meyers, EVP and Chief Digital and Technology Officer of Bristol Myers Squibb (BMS), and a16z Bio+Health General Partner Dr. Jorge Conde.  The second discussion, from the BIOS community, features Dr. Frank Nestle, Global Head of Research and CSO, Sanofi, and Flagship Pioneering General Partner and Founder and CEO of Valo Health, Dr. David Berry.  (Readers may recall our discussion of a previous BIOS-hosted interview with Dr. Nestle, here.)

Greg Meyers, chief digital and technology officer, Bristol Myers Squibb

Rather than review each conversation individually, I thought it would be more useful to discuss common themes emerging from the pair of discussions.

Theme 1: How Pharma R&D organizations are meaningfully using AI today

AI has started to contribute meaningfully to the design of small molecules in the early stages of drug development. “A few years ago,” Meyers says, BMS started “to incorporate machine learning to try to predict whether or not a certain chemical profile would have the bioreactivity you’re hoping [for].” He says this worked so well (producing a “huge spike” in hit rate) that they’ve been trying to scale it up.

Meyers also says BMS researchers “are currently using AI pretty heavily in our protein degrader program,” noting “it’s been very helpful” in enabling the team to sort through different types of designs.

Nestle also highlights the role of AI in developing novel small compounds. “AI-empowered models” are contributing to the design of molecules, he says, and are starting to “shift the cycle times” for the industry.

Frank Nestle, chief scientific officer, Sanofi

AI is also now contributing to the development of both digital and molecular biomarkers.  For example, Meyers described the use of AI to analyze a routine 12-lead ECG to identify patients who might have undiagnosed hypertrophic cardiomyopathy.  (Readers may recall a very similar approach used by Janssen to diagnose pulmonary artery hypertension, see here.)

Nestle offered an example from digital pathology.  He described a collaboration with the healthtech company Owkin, whose AI technology, he says, can help analyze the microscope slides with classically stained tissue samples.

Depending on your perspective, these use cases are either pretty slim pickings or an incredibly promising start. 

I’ve not included what seemed to me as still exploratory efforts involving two long-standing industry aspirations:

  • Integrating multiple data sources to improve target selection for drug development;
  • Integrating multiple data sources to improve patient selection for clinical trials.

We’ll return to these important but elusive ambitions later, in our discussion of “the magic vat.”

I’ve also not included examples of generative AI, because I didn’t hear much in the way of specifics here, probably because it’s still such early days.  There was clearly excitement around the concept that, as Meyers put it, “proteins are a lot like the human language,” and hence, large language models might be gainfully applied to this domain.

Theme 2: Grand Vision

The aspirations for AI in biopharma R&D were as expansive as the established proof points were sparse. The lofty idea seems to be that with enough data points and computation, it will eventually be possible to create viable new medicines entirely in silico. VC David Berry described an “aspiration to truly make drug discovery and development programmable from end to end.” Nestle wondered about developing an effective antibody drug “virtually,” suggesting it may be possible in the future. Also possible, he suggests: “the ability to approve a safe and effective drug in a certain indication, without running a single clinical trial.”

Both Nestle and Meyers cited the same estimate – 10^60 – as the size of “chemical space,” the number of different drug-like molecular structures that are theoretically possible. It’s a staggering number, more than the number of stars in the universe, and likely far beyond our ability to meaningfully comprehend. The point both executives were making is that if we want to explore this space productively, we’re going to get a lot further using sophisticated computation than by relying on the traditional approaches of intuition and trial and error.

The underlying aspiration here strikes a familiar chord for those of us who remember some of the more extravagant expectations driving the Human Genome Project. For instance, South African biologist Sydney Brenner reportedly claimed that if he had “a complete sequence of DNA of an organism and a large enough computer” then he “could compute the organism.”   While the sequencing of the genome contributed enormously to biomedical science, our understanding of the human organism remains woefully incomplete, and largely uncomputed.  It’s easy to imagine that our hubris – and our overconfidence in our ability to domesticate scientific research, as Taleb and I argued in 2008 – may be again deceiving us.

Theme 3: Learning Drug Development Organization

For years, healthcare organizations have strived toward the goal of establishing a “learning health system” (LHS), where knowledge from each patient is routinely captured and systematically leveraged to improve the care of future patients. As I have discussed in detail (see here), the LHS is an entity that appears to exist only as an ideal within the pages of academic journals, rather than embodied in the physical world.

Many pharma organizations (as I’ve discussed previously) aspire towards a similar vision, and seek to make better use of all the data they generate.  As Meyers puts it, you “want to make sure that you never run the same experiment twice,” and you want to capture and make effective use of the digital “exhaust” from experiments, in part by ensuring it’s able to be interpreted by computers.

Berry emphasized that a goal of the Flagship company Valo (where he now also serves as CEO) is to “use data and computation to unify how… data is used across all of the steps [of drug development], how data is shared across the steps.” Such integration, Berry argues, “will increase probab[ility] of success, will help us reduce time, will help reduce cost.”

The problem, as I’ve discussed, and as Berry points out, is that “drug discovery and development has historically been a highly siloed industry. And the challenge is it’s created data silos and operational silos.”

The question, more generally, is how to unlock the purported value associated with, as Nestle puts it, the “incredible treasure chest of data” that “large pharmaceutical companies…sit on.”

Historically, pharma data has been collected with a single, high-value use in mind. The data are generally not organized, identified, and architected for re-use. Moreover, as Nestle emphasizes, the incentives within pharma companies (the so-called key performance indicators, or “KPIs”) are “not necessarily in the foundational space, and that[’s] not where typically the resourcing goes.” In other words, what companies value and track are performance measures like speed of trial recruitment; no one is really evaluating data fluidity, and unless you can directly tie data fluidity to a traditional performance measure, it will struggle to be prioritized.

In contrast, companies like Valo; other Flagship companies like Moderna; and some, but not all, emerging biopharma companies are constructed (or reconstructed — Valo, for example, includes components of both Numerate and Forma Therapeutics, as well as TARA Biosystems) with the explicit intention of avoiding data silos. This concept, foundational to Amazon in the context of the often-cited 2002 “Bezos Memo,” was discussed here.

Pharmas, by comparison, have entrenched silos; historically, data were collected to meet the specific needs of a particular functional group responsible for a specific step in the drug development process. Access to these data (as I recently discussed) tends to be tightly controlled.

Data-focused biotech startups tend to look at big pharma’s traditional approach to data and see profound opportunities for disruption.  Meanwhile, pharmas tend to look at these data-oriented startups and say, “Sure, that sounds great.  Now what have you got to show for all your investment in this?” 

The result is a standoff of sorts, where pharmas try to retrofit their approach to data yet are typically hampered by the organizational and cultural silos that have very little interest in facilitating data access.  Meanwhile, data biotech startups are working towards a far more fluid approach to data, yet have produced little tangible and compelling evidence to date that they are more effective, or are likely to be more effective, at delivering high impact medicines to patients. 

Theme 4: Partnerships and External Innovation

Both BMS and Sanofi are exploring emerging technologies through investments and partnerships with a number of healthtech startups, even as both emphasize that they are also building internal capabilities. 

“We have over 200 partnerships,” Meyers notes, “including several equity positions with other companies that really come from the in silico, pure-play sort of business.  And we’ve learned a ton from them.”

Similarly, Nestle (again – see here) emphasized key partnerships, including the Owkin relationship and digital biomarker work with MIT Professor Dina Katabi.

Meanwhile, Pfizer recently announced an open innovation competition to source generative AI solutions to a particular company need: creating clinical study reports.

In addition to these examples, I’ve become increasingly aware of a number of other AI-related projects attributed to pharma companies that, upon closer inspection, turn out to represent discrete engagements with external partners or vendors who reportedly are leveraging AI.

Theme 5: Advice for Innovators

One of the most important lessons from both discussions concerned the challenges facing aspiring innovators and startups.

Berry, for example, explained why it’s so difficult for AI approaches to gain traction.  “If I want to prove, statistically, that AI or an AI component is doing a better job, how many Phase Two clinical readouts does one actually need to believe it on a statistical basis?  If you’re a small company and you want to do it one by one, it’s going to take a few generations.  That’s not going to work.”

On the other hand, he suggested “there are portions of the drug discovery and development cascade where we’re starting to see insights that are actionable, that are tangible, and the timelines of them and the cost points of them are so quickly becoming transformative that it opens up the potential for AI to have a real impact.”

Meyers, for his part, offered exceptionally relevant advice for AI startups pitching to pharma (in fact, the final section of the episode should be required listening for all biotech AI founders). 

Among the points Meyers highlights – familiar to readers of this column – is the need “for companies that are focused on solving a real-world problem,” rather than offering solutions in search of a problem. He also emphasized that “this is an industry that will not adopt something unless it is, really 10x better than the way things are historically done.”

This presents a real barrier to the sort of incremental change that may be hard to appreciate in the near term but can deliver appreciable value over time. Even “slight improvements” in translational predictive models, as we recently learned from Jack Scannell, can deliver outsized impact, significantly elevating the probability of success while reducing the many burdens of failure.

Meyers also reminded listeners of the challenges of finding product-market fit because healthcare “is the only industry where the consumer, the customer, and the payor are all different people and they don’t always have incentives that are aligned.”  (See here.)

David Berry, CEO, Valo Health

On a more optimistic note, Berry noted that one of the most important competitive advantages a founder has is recognizing that “a problem is solvable, because that turns out to be one of the most powerful pieces of information.”  For Berry, the emergence of AI means that “we can start seeing at much larger scales problems that are solvable that we didn’t previously know to be solvable.”  Moreover, he argues, once we realize a problem is solvable, we’re more likely to apply ourselves to this challenge.

Paths Forward

In thinking about how to most effectively leverage AI, and digital and data more generally, in R&D, I’m left with two thoughts that are somewhat in tension.

“Pockets of Reducibility”

The first borrows (or bastardizes) a phrase from the brilliant Stephen Wolfram: look for pockets of reducibility.  In other words – focus your technology not on fixing all of drug development, but on addressing a specific, important problem that you can meaningfully impact. 

For instance, I was speaking earlier this week with one of the world’s experts on data standards.  I asked him how generative AI as “universal translator” (to use Peter Lee’s term) might obviate the need for standards.  While the expert agreed conceptually, his immediate focus was on figuring out how to pragmatically apply generative AI tools like GPT-4 to standard generation so that it could be done more efficiently, potentially with people validating the output rather than generating it.

On the one hand, you might argue this is disappointingly incremental.  On the other hand, it’s implementable immediately, and seems likely to have a tangible impact. 

(In my own work, I am spending much of my time focused on identifying and enabling such tangible opportunities within R&D.)

The Magic Vat

There’s another part of me, of course, that both admires and deeply resonates with the integrated approach that companies like Valo are taking: the idea and aspiration that if, from the outset, you deliberately collect and organize your data in a thoughtful way, you can generate novel insights that cross functional silos (just as Berry says).  These insights, in principle, have the potential to accelerate discovery, translation (a critical need that this column has frequently discussed, and that Conde appropriately emphasized), and clinical development.

Magic Vat. Image by DALL-E.

Integrating diverse data to drive insights has captivated me for decades; it’s a topic I’ve discussed in a 2009 Nature Reviews Drug Discovery paper I wrote with Eric Schadt and Stephen Friend.  The value of integrating phenotypic data with genetic data was also a key tenet I brought to my DNAnexus Chief Medical Officer role, and a lens through which I evaluated companies when I subsequently served as corporate VC.   

Consequently, I am passionately rooting for Berry at Valo – and for Daphne Koller’s insitro and Chris Gibson’s Recursion.  I’m rooting for Pathos, a company founded by Tempus that’s focused on “integrating data into every step of the process and thereby creating a self-learning and self-correcting therapeutics engine,” and that has recruited Schadt to be the Chief Science Officer.   I’m also rooting for Aviv Regev at Genentech, and I am excited by her integrative approach to early R&D.

Daphne Koller, founder and CEO, insitro

But throughout my career, I’ve also seen just how challenging it can be to move from attractive integrative ambition to meaningful drugs.  I’ve seen so many variations of the “magic vat,” where all available scientific data are poured in, a dusting of charmed analysis powder is added (network theory, the latest AI, etc), the mixture is stirred, and then – presto! – insights appear. 

Or, more typically, not. But (we’re invariably told) these insights would arrive (are poised to arrive) if only there were more funding/more samples/just one more category of ‘omics data, etc. — all real examples, by the way.

It’s possible that this time will be the charm – we’ve been told, after all, that generative AI “changes everything” — but you can also understand the skepticism.

Chris Gibson, co-founder and CEO, Recursion

My sense is that legacy pharmas are likely to remain resistant to changing their siloed approach to data until they see compelling evidence that data integration approaches, if not 10x better, at least offer meaningful and measurable improvement. In my own work, I’m intensively seeking to identify and catalyze transformative opportunities for cross-silo integration of scientific data across at least some domains, since effective translation absolutely requires it.

For now, big pharmas are likely to remain largely empires of silos – and will continue to do the step-by-step siloed work comprising drug development at global scale better than anyone. Technology, including AI, may help improve the efficiency of specific steps (e.g., protocol drafting, an example Meyers cites). Technology may also improve the efficiency of sequential data handoffs, critical for drug development, and help track operational performance, providing invaluable information to managers, as discussed here.

But foundationally integrating scientific knowledge across organizational silos? Unless a data management organization already deeply embedded within many pharmas – perhaps a company like Veeva or Medidata – enables it, routine integration of scientific knowledge across long-established silos seems unlikely in the near to medium term. It may take a visionary, persistent, and determined startup (Valo? Pathos?) to persuasively capture the value that must be there.

Bottom Line:

Biopharma companies are keenly interested in leveraging generative AI, and digital and data technologies more generally, in R&D. To date, meaningful implementations of AI in large pharmas seem relatively limited, and largely focused on small molecule design and biomarker analysis (such as identifying potential patients through routine ECGs). Nevertheless, the ambitions for AI in R&D seem enormous, perhaps even fanciful, envisioning virtual drug development and perhaps even in silico regulatory approvals. More immediately, pharmas aspire to make more complete use of the data they collect but are likely to continue to struggle with long-established functional silos. External partnerships provide access to emerging technologies, but it can be difficult for healthtech startups to find a permanent foothold with large pharmas. Technology focused on alleviating important, specific problems – “pockets of reducibility” – seems most likely to find traction in the near term. Ambitious founders continue to pursue the vision of more complete data integration.

30
May
2023

From Structural Biology to Structuring Companies: Deb Palestrant on The Long Run

Today’s guest on The Long Run is Deb Palestrant.

Deb is a partner with 5AM Ventures and the executive chair of the 4:59 Initiative. 5AM invests in early-stage startups working on a variety of novel biological targets and some of the emerging new treatment modalities – gene therapy, gene editing, oligonucleotides. As the name suggests, it’s not afraid to get involved in companies in very early days, when they are high-risk/high-reward propositions.

Deb Palestrant, partner, 5AM Ventures; executive chair, 4:59

Deb comes to this venture work with a deep scientific background, and significant hands-on operating experience. She got her PhD in structural biology at Columbia University, and made the move to industry at the Novartis Institutes for BioMedical Research in the mid-2000s. She found her way into the Boston biotech startup world in the 2010s, and was a part of building a series of ambitious companies – Blueprint Medicines, Editas Medicine, and Relay Therapeutics included.

We talk in this episode about Deb’s career journey, about how she and her partners think about creating companies, and what areas of opportunity she sees in science and medicine.

And now for a word from the sponsor of The Long Run.


Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with Occam at:

www.occam-global.com/longrun


Now, please join me and Deb Palestrant on The Long Run.

21
May
2023

Big, If True: Opportunities and Obstacles Facing AI (Plus: Summer Reading)

David Shaywitz

Today, we’ll begin with a consideration of the promise for AI some experts see in healthcare and biopharma.

Next, we’ll look at some of the obstacles – some technical, some organizational – and re-visit the eternal “data parasite” debate.

Finally, we’ll conclude with a few suggestions for summer reading.

The AI Opportunity: Elevating Healthcare for All

Earlier this month, I moderated a conversation about AI and healthcare (video here, transcript here) at Harvard’s historic Countway Library of Medicine, in a room just down the hall from a display of Phineas Gage’s skull and the tamping iron that pierced it on September 13, 1848, famously altering his behavior but sparing his life. The episode soon became part of neurology history and lore.

With less overt drama, but addressing a topic of perhaps even greater biomedical importance, the panelists – Harvard’s Dr. Zak Kohane, Microsoft’s Peter Lee, and journalist Carey Goldberg, all co-authors of the recently published The AI Revolution in Medicine: GPT-4 and Beyond (discussed here) – addressed their subject.

A key opportunity for AI in health that Kohane emphasized was the chance to elevate care across the board by improving consistency. He told the story of a friend whose spouse was dealing with a series of difficult health issues.

Innovation image created on DALL-E.

Kohane said his friend described “how delightful it was to have a doctor who really understood what was going on, who understood the plan. The light was on.”

However, Kohane continued, the friend would then “go talk to another doctor and another doctor, and the light was not on. And there was huge unevenness.”

The story, Kohane reflected, “reminds me of my own intuition just from experiencing medical training and medical care, which is there are huge variations. There are some brilliant doctors. But there are some also non-brilliant doctors and some doctors who might have been brilliant but then are harried, squished by the forces that propel modern medicine.”

Kohane says he saw ChatGPT as a potential response to physician inconsistency. For Kohane, generative AI represented a disruptive force that “was going to happen, whether or not medicine and the medical establishment were going to pick up the torch.” Why? Because “patients were going to use it.”

Goldberg, too, recognized the opportunities for patients, and spoke to the urgent need she felt to access the technology:

“Okay, we get it. It has inaccuracies, it hallucinates. Just give it to me. Like, I just want it. I just want to be able to use it for my own queries, my own medically related queries. And I think that what I came away from working on this book with was an understanding of just the incredible usefulness that this can have for patients.”

Goldberg also shared a story of a nurse who suffered from hand pain and was evaluated by a series of specialists who were unable to identify the cause. Desperate, the nurse typed her symptoms into ChatGPT, and learned that one of her medications could be causing the pain. When the medication was changed, the pain resolved.

Kohane sees the ready availability of a savvy second opinion as a tremendous resource for physicians. When he was training, he said, the physicians used to convene after clinic and review all the patients. “Invariably,” he notes, “we changed the management” of a handful “because of what someone else said. That went away. There’s no time for it.”


The lack of review represents a real loss, Kohane points out, because “even the best doctors will not remember everything all the time.” Kohane says he is convinced that generative AI will restore this capability and enable it to serve a co-pilot function, providing real-time assistance to busy providers.

Another opportunity to make physicians’ lives better, the panelists suggested, was in the area of paperwork and documentation, such as the dreaded pre-authorization letters, often required to beseech payors for reimbursement. 

Since Lee contributed an entire chapter to the book about AI’s potential to reduce paperwork in healthcare, I asked him whether we’re just going to see AIs battling with each other: provider AIs writing pre-authorization letters, and payor AIs writing justifications for rejection.

Lee responded that this was very similar to a scenario Bill Gates has mentioned, where an email starts as three bullet points you want to share, GPT-4 translates this into a well-composed email, then GPT-4 at the other end reduces it back to three bullet points for the reader.

I told Lee this reminded me of Peter Thiel’s famous quote: “We wanted flying cars, instead we got 140 characters.” Surely, I asked, generative AI must offer healthcare something more profound than more efficient paperwork? 

In response, Lee highlighted the opportunities associated with the ability to better connect and learn from data – perhaps getting us closer to at long last fulfilling the elusive promise of a “learning healthcare system” (see here). In particular, Lee highlighted the potential of AI serving as a “universal translator of healthcare information,” allowing for the near-effortless extraction and exchange of information. 

For more perspectives on how AI could benefit healthcare and the life sciences, I’d recommend:

  1. A recent Atlantic piece by Matteo Wong emphasized the opportunities for leveraging multi-modal data – a topic Eric Topol and colleagues have consistently highlighted.
  2. This short Nature Biotechnology essay, by Jean-Philippe Vert, a data science expert who now leads R&D at the health data company Owkin.  Vert describes four ways AI may impact drug discovery.  Of particular interest: his suggestion that generative AI might provide a “framework for the seamless integration of heterogeneous data and concepts.”  However, he acknowledges that “How exactly to implement this idea and how effective it will be largely remain open research questions.”
  3. This recent Nature Communications paper from M.I.T. and Takeda (disclosure: I work at Takeda, but wasn’t involved in this collaboration), demonstrating an application of AI in manufacturing. This operational area seems especially amenable to AI-driven improvements, in part because of the richness and completeness of data capture (see also here).
Pesky Obstacles to AI Implementation

The inconvenient truth is that while generative AI and other emerging technologies have captivated us with their promise, we’re still figuring out how to use them.


Even user-friendly applications like ChatGPT and GPT-4-enabled Bing are not always plug-and-play. For example, in preparation for an upcoming workshop I’m leading for a particular corporate function highlighting the capabilities of GPT-4, I tried out some of the team’s most elementary use cases with Bing-GPT. The results were disappointing and included a number of basic mistakes. Often, Bing-GPT seemed to perform worse than Bing or Google search alone. The results seemed unlikely to inspire corporate colleagues to urgently adopt the technology.

These challenges are hardly limited to GPT-4 or Bing. From the perspective of a drug development organization, technology issues seem to surface in every area of digital and data. Far more often than not, the hype and promise touted by eager startups seem at odds with the capabilities these nascent companies can demonstrably deliver. In fairness, the difficulty many legacy biopharma companies have figuring out how to work in new ways with these healthtech startups probably also contributes to the challenge.

To understand the issues better, let’s consider one example, outside of biopharma, recently discussed by University of North Carolina Vice Chair and Professor of Medicine Spencer Dorn.  His focus: the adoption of AI in radiology.

Dorn notes that while AI expert Geoffrey Hinton predicted in 2016 that AI would obviate the need for radiologists within five years, this hasn’t happened. In fact, Dorn says, only a third of radiologists use AI at all, “usually for just a tiny fraction of their work.” 

Dorn cites several reasons for AI’s limited adoption in clinical radiology:

  • Inconsistent performance of AI in real-world settings, compared to test data;
  • AI “may be very good a[t] specific tasks (e.g. identifying certain lesions)…but not most others”;
  • “Embedding AI into diagnostic imaging workflows require[s] time, effort, and money,” and basically, the juice doesn’t seem to be worth the squeeze.

Dorn warns that generative AI “in healthcare will need to overcome these same hurdles. Plus, several more.”

Similar issues apply to the adoption, for high-stakes use-cases, of a range of emerging technologies, including digital pathology, decentralized trials, and “the nightmare” of digital biomarkers – challenges this column has frequently discussed.


But remarkably, technology problems are probably not the most difficult issue for healthtech innovators to solve. Technology tends to improve dramatically over time (think about the camera on your smartphone). No, the most difficult sticking point may well be organizational behavior. Essentially, the return of the eternal, dreaded “Data Parasite” debate (as I discussed in 2016 in a three-part series in Forbes, starting here).

In most large organizations, both academic and corporate (I am unaware of many exceptions), there is a constant battle between those who effectively own the data and those who want to analyze the data. In theory, of course, and depending upon the situation, data belong to: patients / the organization / taxpayers, or some combination of the three. Researchers, meanwhile, are just “stewards” or “trustees” of the data. Yet in practice, someone always seems to control and zealously guard the access to any given data set within an organization.  

Typically, those who “own” the data (whether an academic clinical investigator or a pharma clinical development team) are using the data to pursue a defined, high-value objective. Others who want access to these data tend to have more exploratory tasks in mind. Theoretically, there’s a huge amount of value that can be obtained by enabling data exploration. Once again, in practice, the theoretical value is often difficult to demonstrate, and is often viewed as offering little upside – and a fair amount of perceived downside risk, as well as gratuitous aggravation – to the data “owners.” Much of this perceived risk relates to the concern about sloppy or ill-informed analyses that generate, essentially, “false positive” concerns, as I allude to here.

I’ve seen very few examples where the data “analyzers” have sufficient leverage to win here.  In general, the data “owners” tend to hire data scientists of their own and say “let us know what you want to know, and we’ll have our people run the analysis for you.” This has the effect of slowing down integrative exploratory analyses to a trickle, particularly given the degree of pre-specification the data “owners” tend to require.

If you are a data owner, you probably view this as an encouraging result, since analyses are only done by people who ostensibly have a feel for how the data were generated and understand the context and the limitations. As discussed in a previous column, “data empathy” is vitally important. 

But if you are a data analyzer not working directly with a data “owner,” you are constantly frustrated by the near-impossibility of obtaining access to data you’d like to explore. Perhaps most strikingly, many researchers who fiercely defend their own data from external analyses are often fiercely critical of others for not sharing data the same researchers hope to explore. As Miles famously observed, “where you stand depends on where you sit.”


Of course, it’s possible that technology could help ease sharing. Even so, it’s really difficult to envision the tight hold on data changing, so long as so much power in organizations clearly rests with those in control of the data. Perhaps, as Lakhani and others suggest, this can be addressed by new companies that have a fundamentally different view of data (Amazon – driven by the “Bezos Mandate” – is the canonical example), and can readily monetize data fluidity. Alternatively, the demonstrated utility of exploratory integrated analyses across multiple data silos and “owners” in legacy organizations could potentially facilitate more consistent access.

For now, in both academia and biopharma, virtuous stated preferences to the contrary, this revealed tension remains very much alive.

Briefly Noted Summer Reading

A must-read for all biotechies, For Blood and Money, by Marketwatch’s Nathan Vardi, tells the captivating story of two cancer medicines targeting the BTK kinase: ibrutinib and acalabrutinib.  A decade ago, for Forbes, I wrote about the beginning of the ibrutinib story. 

It was thrilling to read Vardi’s account of the medicine’s complete journey – and the journey of its competitor, acalabrutinib (which, fun fact, was originally discovered by the same company in the Netherlands that discovered the product that became the blockbuster Keytruda, see here). As Jerome Groopman’s thoughtful review in the New York Review of Books suggests, Vardi’s book also raises difficult questions about the role of luck vs. skill in drug development, as well as the role of capital vs. labor, since the investors appeared to make out far better than the scientists who did the lion’s share of the work. This pithy review by Adrian Woolfson, in Science, also provides a good summary.

Less essential but fascinating for readers who recall the rise of companies like Gawker and Buzzfeed, is Traffic, by Ben Smith. He describes how emerging media companies – and the young men and women who contributed the content – desperately chased reader traffic, with important consequences both for them and society. See here for an excellent review of the book by the Bulwark’s Sonny Bunch.

Also intriguing, if a bit uneven: Beyond Measure, a book about the history of measurement, written by James Vincent, Senior Reporter at The Verge. See here for a thoughtful review of Vincent’s book by Jennifer Szalai in The New York Times.

Finally, a few recommended posts. On the concerning side, this piece about the devolution of clinical medicine captures what I seem to be hearing from nearly every single physician I know.  Even doctors who were once so excited about taking care of patients now seem abjectly miserable, trapped in a system that has reduced them to widgets. (See also here, here, here.)

On the innovation front, several comments about the wildly popular GLP-1 medicines tirzepatide and semaglutide caught my eye (see also my last piece, here). On the one hand, it’s clear the development of these powerful and promising medicines was, as Dr. Michael Albert of Accomplish Health suggests, clearly the result of deliberate, meticulous effort, both by companies like Lilly and Novo Nordisk, and pioneering academics like physician-scientist Daniel Drucker (who also maintains this authoritative website on the evolving science). On the other hand, it’s interesting that (as Sarah Zhang writes in The Atlantic), these medicines may have entirely unanticipated application in the management of addictions and compulsions.

Bottom Line

Generative AI offers the possibility of elevating the quality of healthcare patients receive. However, the implementation of AI and other digital technologies may be impaired both by the growing pains of nascent technology and, more significantly, by the territoriality of those who control access to data silos within large organizations – although this may also ensure that the data are more likely to be analyzed by those who have a greater feel for the context in which they were generated. Finally, For Blood and Money, by Nathan Vardi, Traffic, by Ben Smith, and Beyond Measure, by James Vincent are all good additions to your summer reading list.

18
May
2023

Anthony Mancini of Genmab on Growing Your Company Together with Partners

Vikas Goyal, Managing Partner, Trekk Venture Partners

Anthony Mancini is the chief operating officer of Denmark-based Genmab, one of the leading innovators of antibody therapies for patients living with cancer and other serious diseases.

After many years working as a behind-the-scenes innovator, Genmab is now becoming a significant commercial entity. In 2021, the company began marketing its first commercial product, Tivdak® (tisotumab vedotin), for the treatment of advanced cervical cancer, in partnership with Seagen (acquisition by Pfizer pending).

In 2022, Genmab and AbbVie submitted an application for regulatory approval to start marketing epcoritamab, a CD3- and CD20-directed bispecific antibody T-cell engager for non-Hodgkin’s lymphoma. It’s the lead program in AbbVie and Genmab’s multi-faceted cancer collaboration, announced in June 2020 – a massive $3.9 billion deal in which the two companies work together as true 50:50 partners to research, develop, and commercialize new therapies around the world.

Anthony was instrumental in executing the deal and now in managing the collaboration. He leads several functions at Genmab, including commercialization, IT & Digital. Anthony’s mission is to help lead Genmab’s evolution to become a best-in-class fully integrated biotech company.

Anthony Mancini, chief operating officer, Genmab

Before joining Genmab three years ago, Anthony had strategic and operational leadership roles over a 24-year career at Bristol Myers Squibb (BMS), including the leadership of BMS’ US Innovative Medicines Unit. While at BMS, he was an integral member of the multiyear Cardiovascular Alliances with Pfizer, as well as partnerships with Sanofi, AstraZeneca and Otsuka.

Anthony has stellar insights around how companies can work with partners to develop and launch amazing medicines, while building their own capabilities at the same time. He sat down with me a few months ago to share his excitement about Genmab’s future, as well as his advice on how to build successful commercial partnerships.

Q: What was the situation at Genmab when you joined? What opportunities did you see for the company, the pipeline, and patients?

AM: [I joined Genmab in 2020] at a pivotal moment. During our first 20 years, which I’ll call phase one of Genmab, we outlicensed our programs to others for them to further develop and commercialize. By 2020, Genmab had helped invent more than 20 therapeutic candidates that were approved or in active clinical development. We were known throughout the industry for creating novel and differentiated antibodies and had started to build a foundation of capabilities in drug development.

2020 was a pivot point in our strategy. Our objective during this phase two of Genmab was to transition from a purely R&D focused organization into a fully integrated biotech company. I was excited to come in at such an important inflection.

Developing a medicine and then throwing it over the fence to commercialize often just leads to wasted investment. You really need to integrate deep insights around patient journeys, regulatory frameworks, healthcare systems, and reimbursement [environments] across different countries. Getting that input early in program development requires commercialization and R&D teams to work together. And so, in this phase of our strategy, the 50-50 phase, we chose to work with partners like Seagen, BioNTech, and now AbbVie.

And we believe our investigational compound epcoritamab may have potential in blood cancers. Epcoritamab is a bispecific antibody designed to target both CD3 on T cells and CD20 on B cells, has shown highly effective killing of CD20-positive tumors in nonclinical studies, and shown encouraging clinical responses in early studies in large B-cell lymphoma (LBCL). If cleared by health authorities, we believe it has the potential to transform the treatment of B cell malignancies.

We sought a partner with scale to leverage that potential and help us appropriately develop and commercialize the drug. And we were excited to land a great partner like AbbVie. Our AbbVie collaboration is a broad oncology collaboration where we jointly make strategic decisions – it is a true 50:50 partnership. In November, the FDA accepted for Priority Review the Biologics License Application (BLA) for epcoritamab (DuoBody®-CD3xCD20) for the treatment of relapsed/refractory LBCL.

Q: Why did Genmab want to pursue a co-development and co-commercialization partnership?

AM: Our priorities were very clear. We believe, together with the right partners, we can leverage each other’s strengths with the goal to bring our medicines to patients even faster. Our CEO, Jan van de Winkel, publicly discussed working with a partner for epcoritamab. [The treatment of] B cell malignancies can involve complex regimens and it can be difficult to recruit patients. We have built a stellar research and development organization and could have done it on our own, but with AbbVie as a partner, we add speed and scale to our expertise. In a competitive marketplace, you really need to move quickly. AbbVie also has deep hematologic cancer experience.

We wanted to leverage the talent and experience we were already bringing [as a company] as well as expand capabilities in development and build them in commercialization. In the first instance, Genmab is focused on the US and Japan markets. AbbVie adds to our strength in these countries, and they also add the scale of their global footprint.

[And ultimately,] both Genmab and AbbVie wanted something more collaborative. So, in addition to jointly developing and commercializing epcoritamab, we also entered into a discovery research collaboration to create additional antibody therapeutics for cancer. We believe we have set up a win-win partnership with AbbVie.

Q: How do you negotiate and structure such a broad collaboration?

AM: It’s not easy [laughs]. It’s a marriage for the long haul and it takes a huge effort across many different functions. Deals of this breadth require close collaboration across BD, R&D, Legal, Finance and Commercialization.

While you should think through as many scenarios as possible upfront, you can’t sort everything out in the agreement. You should also have clarity on each other’s negotiation ‘must haves’ and clarity on how each party can achieve those goals. It is also important to build a governance process as to how the two parties will sort unforeseen things out.

How will the committees make fast decisions? How will the partners work together to make decisions as well or better than they could on their own? Do the governance committees have the right people on them? Are those people appropriately empowered to make decisions? How will we adequately allocate resources to each program, and regularly revisit those FTE and resource allocations? How will the commercialization teams apportion roles & responsibilities?

There are many aspects to consider depending on the collaboration construct, the number of programs, as well as their stages of development.

There are also many activities to map out in a co-development / co-commercialization structure, including marketing, manufacturing, regulatory, market access and pricing, publications, medical affairs, distribution, etc. All these functions are critical to launch success and best practices vary country to country. Which partner is responsible for which activity in each geography? What overall product strategy will resonate across different countries? How should we optimally sequence launches?

Q: How do you manage a successful commercial partnership?

AM: We have three 50:50 partnerships at Genmab — AbbVie, Seagen, and BioNTech. We will hopefully have products commercialized with all three partners.

I’ve spent several years working inside and leading co-development and co-commercialization partnerships and it was great to bring that experience to bear [in our discussions with AbbVie, Seagen and BioNTech.] It’s important to get the teams aligned on the ambition and the strategy and work on being crystal clear on the decision-making processes.

It’s also OK to rethink strategy. I was a small part of a large partnership between Bristol Myers Squibb and AstraZeneca. It was a successful alliance that included the launch of many products; the portfolio generated multibillion-dollar revenues. After several years of a well-functioning alliance there was a strategic shift that led to a transaction.  BMS sold the franchise to AZ in order to simplify its operating model as a specialty biopharma company. For AZ, it was able to focus even more on one of its key growth platforms. Ultimately, it was a win-win evolution of the strategy.

Q: What comes next for Genmab?

AM: We will continue to focus on delivering our vision to transform the treatment of cancer through our knock your socks off (KYSO) antibodies. We will continue to build on our capabilities across R&D, Commercialization and enabling functions to become a best-in-class, end-to-end biotech company.

Once our commercialization execution in the US and Japan is up and running smoothly, it creates a platform to move to the third phase of Genmab, the >50% phase. We will discover, develop and launch more medicines on our own. Partnerships will, of course, still be part of that. Over the past 3 years we have signed many earlier stage deals which are more often structured in such a way that gives Genmab worldwide commercialization rights.

We recently announced that we are entering the therapeutic area of immunology and inflammation. While oncology will remain as our primary focus, ultimately our aspiration is that our KYSO innovations can help make a meaningful difference to as many people as possible.

15
May
2023

Biopharma Innovation – Beyond The Breathless Headlines

David Shaywitz

Biopharma relies on innovation to stay in business. Success depends on our collective ability to discover, develop, and deliver new products that cure or meaningfully mitigate disease over and over again.

Patents allow innovators to be rewarded, for a while. When patents expire, allowing us to purchase powerful generic medications like atorvastatin for pennies, manufacturers must come up with something new to support the enterprise.

The pressure to discover and develop the big new thing is intense.

We have seen remarkable advances in many areas, including cystic fibrosis and of course the rapid development of COVID vaccines. We routinely contemplate a range of modalities that a decade ago would have been considered fanciful (see here). We also acknowledge that, tragically, many dreadful conditions like glioblastoma multiforme, pancreatic cancer, and amyotrophic lateral sclerosis remain largely resistant to our efforts – so far.

While we recognize the value of innovation, we also appreciate that often, there seems to be a lot more heat than light, a lot more self-congratulatory social media posts than real evidence of progress.

The Harm of Innovation Theater

Writing this month in Forbes, Dr. Sachin Jain, a physician-executive with experience across all of healthcare, from academic medicine to pharma to payors, plaintively expressed his frustration with the excessive celebration of innovation. He called out the dichotomy between the triumphant characterization of innovation by many healthcare and biopharma organizations, and the often far less impressive reality.

Sachin Jain

“I was struck by the difference between what I read and [what] I was seeing on the ground in practice,” he writes, noting, for example, that many highly-touted advances were only small pilot programs, and never actually scaled (or planned to scale).   

He’s previously described the difference between what he calls the “change layer” – “the cloud in which visionary ideas about transforming healthcare resides” – and the “reality layer,” the place “where most care is delivered.” While both layers are necessary, he writes, he’s observed “little mixing between them.”

Moreover, he suggests, the change layer perversely may insulate organizations from real change by providing a conspicuous, dynamic narrative around innovation and disruption, even though these innovations and disruptions rarely meaningfully permeate into the day-to-day business of the company. He cites several examples of prominent healthcare demonstration projects that persist (if at all) only as isolated examples.

Jain is hardly the first to note the distinction between speaking and acting. Aesop, born more than two and a half millennia ago, reportedly observed, “when all is said and done, more is said than done.” More recently, University of Chicago economist John List has examined, in The Voltage Effect, some of the reasons why promising pilots often fail to scale.

But Jain is making an important, somewhat more provocative point: that our relentless celebration of innovation sustains an illusion of progress, enabling incumbents to highlight their commitment to change while continuing to practice business as usual.

While his current focus is on healthcare, he’s also discussed some of the challenges he observed when he worked in pharma, writing:

“I watched with curiosity as the industry launched countless initiatives to move ‘beyond the pill’ to build services and solutions business to enhance patient outcomes, only to undercapitalize them and quietly shut down without notice. The industry was unable to sustainably think about a future outside of high margin molecules – just as many hospitals are unable to think of a future without fee-for-service.”

Here, of course, we immediately think of the famous observation by Upton Sinclair in 1934, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

Jain, to be sure, acknowledges that disruptive innovation is, by definition, difficult. But what worries him is that the gap between the innovation we trumpet and the innovation we implement seems to be growing, cultivating an abiding sense of deep cynicism – call it disruptive innovation fatigue — in the trenches, which makes true change even more challenging and less likely. 

I can think of several examples from digital and data: we constantly hear about the triumph of distributed clinical trials, which bring clinical trials to the patient. This is truly a worthy and important goal. Yet the success of these endeavors has been far more limited, the logistics far more difficult, and the impact far less profound, than the constant publicity would suggest. It is perhaps not surprising, for instance, to hear that CVS is shutting down its nascent clinical trial business.

Similarly, we are constantly hearing about the great success of AI drug discovery. I am extremely optimistic about the ability of AI to dramatically improve aspects of the process. Nevertheless, the realized impact to date has been far less than publicity would suggest. I recently read in a prominent publication about a supposed triumph of AI-based drug discovery by a VC-backed startup, leading to an attractive licensing deal around a promising molecule. 

Perplexed by this “retconning” – a term from cinema and politics that refers to “retroactive continuity,” revising an established narrative to align with a new storyline — I pinged one of the founding venture capitalists. The VC was also amused by what the investor termed “revisionist history.”  Instead – and far more credibly — the VC attributed the success to the team of “smart people” doing structure-based drug discovery. 

Nevertheless, the investor shrugged, AI is “the buzzword of the day.”

At the far extreme of cynicism, I think about an assertion I once heard from a senior management consultant, who argued that at its core, big pharma is about clinical trial orchestration and product commercialization, rather than about innovative early research.  The consultant argued that the work in pharma labs essentially serves as a public relations distraction while corporations seek new products to license from biotechs.  As this consultant saw it, pharmas excel at orchestration at scale, rather than organic scientific innovation.  The key competencies of pharma, in this view, are successfully managing the incredibly complex processes required for global clinical development, international regulatory approvals, and worldwide commercialization. 

(I’ve also heard some suggest that the most significant contribution of discovery research teams in big pharma is understanding a field in enough detail to enable rigorous evaluation of in-licensing candidates.)

Efficiency Matters, Even For Innovators

Before we consider a more sanguine view of pharma innovation, it’s important to recognize that for all large organizations, even the most innovative, it’s critical not only to develop new products, but to ensure this is done with ruthless efficiency.

As World War II General Omar Bradley reportedly said, “Amateurs talk strategy; professionals talk logistics.” (Or, if you prefer Frederick the Great: “An army, like a serpent, goes upon its belly.”)  True, the vision for Apple’s success was developed by Steve Jobs – but the ability to make it happen required the supply chain management led by Tim Cook, who later became CEO. 

The Wall Street Journal recently profiled Zach Kirkhorn, the CFO of Tesla, who the Journal says performs a behind-the-scenes role similar to the one Cook played for years at Apple.  “While Mr. Musk revolutionized the auto industry by taking often risky bets that upended the status quo,” the Journal writes, “Mr. Kirkhorn earned a reputation for fine-tuning operations.”

The Journal quotes Tesla’s former Chief Technology Officer, JB Straubel, who says, “It’s probably the hundreds and thousands of hours of slaving away to make things incrementally better where he left the biggest mark and is leaving the biggest mark.”

Adds former Tesla board member Steve Westley, “Predictability is everything with a CFO. What you can’t do is surprise people, and he has not surprised people.”

Thus, while it’s exciting to imagine AI helping us come up with important new drugs, it’s not surprising that many of the earliest uses have been focused on improving process efficiencies (see here).

Pharma Innovation: Making The Elephant Dance

Efficiency may be necessary, but it’s hardly sufficient. A tight supply chain may be critical for the commercial success of Apple and Tesla, but only if these companies are producing innovative products that customers want to buy. 

For big pharma, innovation often means in-licensing the right products or acquiring the right biotechs, typically in oncology. Such transactions were critical to the recent success of Gilead (Kite, Immunomedics) and AstraZeneca (Acerta Pharma).

Encouragingly, several of big pharma’s most promising medicines of the moment were developed entirely in house. For example, Lilly discovered and developed both donanemab (for Alzheimer’s disease – see here) and tirzepatide (Mounjaro, FDA-approved for type 2 diabetes and likely soon, weight loss). (Notably, Novo Nordisk’s semaglutide [Wegovy/Ozempic], already FDA-approved for both diabetes and weight loss, was also developed internally.)

A recent, in-depth Wall Street Journal article by Peter Loftus examined Lilly’s R&D, and described a culture that underwent a profound change after a key acquisition – in this case, in the person of physician-scientist Daniel Skovronsky, the CEO of Avid Radiopharmaceuticals, a neuro-biomarker company acquired by Lilly in 2010.

As he experienced big pharma culture, Loftus writes, Skovronsky “was frustrated with Lilly’s slow pace. ‘Let me understand this,’ he recalled saying at a committee meeting setting timetables for getting experimental drugs to market. ‘Our goal is to be slower than average, and we’re failing at that goal? This can’t be the way to do things.’”

Consequently, in 2015, according to Loftus, Lilly’s board asked Skovronsky (then senior vice president of clinical and product development) to “help analyze Lilly’s research flops over the prior 10 years and figure out how to do R&D better.”

Daniel Skovronsky

Skovronsky’s big conclusion: key decisions were being driven by commercial needs, rather than the best science.  Marginal products were advanced (only to later fail) because they targeted a specific commercial need.

According to Loftus, Skovronsky recommended that “Lilly pursue drug projects where it best understood the science and lean less on commercial sales estimates. Lilly was not very good at predicting a drug’s sales over time anyway, he concluded, but could better predict the scientific probability of a drug’s success.” (I’ve discussed the challenge of predicting drug sales here, and also, in collaboration with Nassim Taleb, here.)

Skovronsky was soon promoted to Chief Scientific Officer and Chief Medical Officer, where he pushed to address another challenge he observed, endemic to large organizations (and described in excruciating detail by Safi Bahcall in Loonshots – see here, also here).

As Loftus writes:

“One internal committee after another second-guessed every recommendation to advance a promising drug candidate. ‘The decisions got revisited every step of the way,’ recalled J. Anthony Ware, who led product development at Lilly before retiring in 2017.  The committees were intended to ensure thorough vetting, but in practice became a limiting process that squeezed out bold ideas, according to Dr. Skovronsky.”

To address this, Skovronsky “reorganized to move more quickly.”

Loftus continues:

“To stop the second-guessing of decisions, Lilly established independent internal units operating like biotech companies—with less bureaucracy and faster decision-making—to manage each of its high-priority drug projects,” including the one that would lead to Mounjaro.  Each unit “had its own board of directors, made up of senior researchers and executives from Lilly’s diabetes business unit. They were given a budget, and charged with making quick decisions on their own.”

For example, according to Loftus, “after a Lilly researcher proposed a last-minute change to the design of the second phase of human testing” for a study of tirzepatide, the review board “met within 24 hours and approved the change so the study could start on time.”

Lilly’s agility may be familiar to colleagues at smaller biotechs and also to those familiar with Pfizer’s CEO-led development of the COVID-19 vaccine (see here) but is otherwise not representative of how most big pharmas go about their business, as Bahcall trenchantly observes.

Loftus’s narrative about Lilly is also shared by several colleagues either at Lilly or who have deep familiarity with the company.

Bernard Munos

According to one colleague, the innovation expert Bernard Munos, who spent 30 years at the company, Lilly’s CEO Dave Ricks (who took the job in 2017) played a critical role:

“He understood that Eurekas cannot be scheduled, and that innovation is a byproduct of culture, not the outcome of a process – even if some amount of process is clearly necessary. He realigned his leadership team with like-minded executives and let Lilly’s talented scientists (and there were many), free from bureaucracy, return to what they loved doing: cutting-edge science and translation.”

Munos adds:

“In short, there was no magic recipe. Lilly’s scientists had innovation in their DNA but could not express it under the culture that swamped the company for a couple of decades. Lilly was not alone in its predicament. The whole industry got caught in the same warp. This was the heyday of Six Sigma and its black belts. In the 1990s, the scientists had lost the leadership of the industry to non-scientists, and the idea that you could de-risk drug R&D by codifying work into processes, optimized by efficiency experts, that would deliver innovation on demand, that idea really resonated with non-scientist leaders — as harebrained as it was to most scientists. Today, the pendulum has swung back.”

I suspect it may be reasonable to offer two cheers for Lilly here, for the success of their innovative mindset and agile approach. My reservation is that every success quickly finds a narrative. I don’t know of any biopharmas that have not conspicuously adopted a “biotech” approach and mindset and might well attribute any success to this structure. 

In other words, maybe Lilly’s recent success is attributable to their adoption of a more nimble approach, or maybe their products happened to work, and then the organizational characteristics – which may not be all that unique – are suddenly elevated.

(In the same way culture is said to eat strategy for breakfast, you could argue, especially in biopharma, that good luck, aggressively pursued [e.g. Merck’s Keytruda – see here] eats both.)

Consider Lilly’s decision to de-emphasize commercial influence. On the one hand, the observation resonates, at every level of R&D. For example, an industry colleague recently shared an example where translational oncology researchers evaluating early-stage compounds felt pressure to interpret coarse biomarker data in a fashion that would support the advancement of a compound into one of the company’s priority indications.

On the other hand, the deliberate and successful expansion into areas of high commercial value (e.g. oncology) is critical to the elevated stock prices now enjoyed by companies like Gilead.

History, according to the old saw, is written by the victors. Unfortunately, this often leads to the most-repeated, least-actionable strategic advice in our industry: pick winners, our equivalent to “buy low, sell high.” Moreover, since the selection of winners often feels like a crapshoot, it’s not surprising that management tends to focus on seemingly more tractable parameters, like improving operational efficiencies.
Bottom Line

The biopharmaceutical industry relies upon innovation to develop new products to replace the medicines whose patent protection has expired. As Sachin Jain observes, relentlessly hyping innovation, particularly early pilot projects that never scale, generates harmful cynicism. We also recognize that even the most innovative companies, like Apple and Tesla, still need to pay attention to the unsexy details of supply chain optimization – and big pharmas must focus on improving process efficiencies as well. Even so, efficiencies won’t generate the new products pharma needs (though they might help companies develop promising products faster). Pharmas might learn from Lilly’s recent reorganization, which seems to have liberated the innate creativity of company scientists.

8
May
2023

Creating a New Class of Medicines: John Maraganore on The Long Run

Today’s guest on The Long Run is John Maraganore.

John Maraganore

John is best known as the former CEO of Alnylam Pharmaceuticals, the RNA interference drug developer. He spent 19 years there as CEO, before stepping down at the end of 2021. Alnylam figured out how to make a new therapeutic modality — gene-silencing with double-stranded oligonucleotide therapies.

Alnylam’s technology has now been translated into five marketed medicines. The company has more than 2,000 employees, and a market value that exceeds $26 billion.

Since leaving Alnylam, John has taken on a sort of senior statesman role in biotech – wired in with investors such as Arch Venture Partners, Atlas Venture, RTW Investments and Blackstone. He serves on a variety of public company boards, such as Agios Pharmaceuticals, Beam Therapeutics, Kymera Therapeutics and Takeda Pharmaceuticals. He advises a number of young scientific entrepreneurs. He seems to be everywhere there’s some cool translational science work to be done. I joke with him that he’s the Dos Equis Man of biotech – the beer commercial that features the supposedly most interesting man in the world.

This conversation was recorded live in Seattle on Apr. 25 in front of an audience at the Life Science Innovation Northwest conference. We talk about John’s early life, key early career experiences, a few major events at Alnylam, and a bit of his views on science and policy.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius! 

Save time and money and focus on what really matters, your groundbreaking ideas. 

Learn more at:

Scientist.com/LongRun

Now, please join me and John Maraganore on The Long Run.