I’m delighted to share some good news.
The Timmerman Traverse for Life Science Cares has hit its goal in 2023. Together, we have raised more than $1 million to fight poverty in five biotech hubs around the US.
The funds will help fulfill basic human needs like food and shelter. They will also go toward education and job training to help people get on a path to fulfill their dreams.
I’d like to thank the 20 biotech executives and investors who committed to this cause early in 2023. They are training to hike the Presidential Traverse in New Hampshire in August.
I also want to thank our 50 corporate sponsors. You can see the list at lifesciencecares.org. A shout out goes to top sponsors HSBC and Fenwick & West, and to Jeb and Sonia Keiper for an exceptionally generous donation.
This is a special milestone.
Staying fit. Making friends. Enjoying nature. Giving back. Impact in our communities.
That’s what these campaigns are all about.
Want to be a part of it? Email me at firstname.lastname@example.org.
Today’s guest on The Long Run is Kevin Conroy.
Kevin is the chairman and CEO of Madison, Wis.-based Exact Sciences.
Exact Sciences has grown over the past decade into a success story for cancer screening and diagnosis. It’s best known for marketing the noninvasive Cologuard test that screens people for colorectal cancer.
It also markets the Oncotype DX test that’s used to predict the likelihood that a patient with breast cancer will have a recurrence, and whether a preventive round of chemotherapy is likely to be beneficial. Exact is also developing a blood-based screening test that it hopes will be able to detect early signs of many, many types of cancer that aren’t routinely detected until the disease has already caused a lot of damage.
The Cologuard test has now been run 12 million times.
Kevin has a fascinating personal story, starting with the environment where he grew up — Flint, Michigan. He joined Exact Sciences as CEO in 2009 when the company was on the ropes. Kevin and his colleagues set audacious goals, and persevered to build a company that now has a market value of more than $16 billion.
In this conversation, Kevin shares this company story, along with some of his insights into building a company in the Upper Midwest, the importance of partnerships, and where cancer screening and diagnosis is heading.
And now for a word from the sponsor of The Long Run.
Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!
Save time and money and focus on what really matters: your groundbreaking ideas.
Learn more at
It’s summertime, which means it’s time to plan for the next biotech team expedition. The Timmerman Traverse for Damon Runyon Cancer Research Foundation is scheduled for Feb. 7-18, 2024. I’m assembling a team of biotech executives and investors to hike to the summit of Mt. Kilimanjaro. If you are up for this trip of a lifetime and want to be part of a team that raises $1 million to support bright young cancer researchers all over the US, send me a note: email@example.com.
Now, please join me and Kevin Conroy on The Long Run.
Since 2018, about 90 biotech companies have been acquired for more than $500 million. That’s only about 15 to 20 companies per year, a tiny fraction of the approximately 10,000 biotechnology companies around the world.
I was privileged to be part of one of those transactions, the sale of Albireo Pharma to Ipsen in January for $952 million upfront, plus contingent value rights that could push the value to well over $1 billion.
The acquisition will help more patients around the world gain faster access to the innovative medicines Albireo developed. It’s a win for those patients – mostly children with rare diseases – as well as their families, our investors, and the people at both companies.
That’s the triumphant narrative common in M&A press releases. But a lot of things happened behind the scenes to get there. Here are a few lessons I picked up along the way.
Albireo Pharma started as a spinoff from AstraZeneca in Sweden. When I joined the company in 2015, we were privately held with 10 employees, great science, and an ambitious goal: to design breakthrough drugs for children and bring hope to families.
But like many other biotechs, the development cycle was ahead of the funding cycle. That’s one way of saying our product candidates were advancing rapidly in the clinic, gathering the evidence needed to succeed in the marketplace, but investors weren’t yet convinced to provide the support we needed.
Out of necessity, we decided to divest or shelve some of our R&D programs and put most of our remaining resources behind odevixibat for rare pediatric liver diseases.
In the early days in the mid-2010s, there were some weeks where we nearly ran out of cash. At times, some people questioned our unconventional strategies, such as seeking feedback on our Phase III trial from FDA and EMA before the Phase II study was completed, and conducting a single Phase III study with two different primary endpoints. And even though the biotech financial markets were generally trending upward most of these years, the markets always pressured us to be at our absolute best to secure every dollar of funding.
By 2022, we’d cleared many of the hurdles inherent to development-stage biotech. We’d grown to ~200 employees globally. We created four different medications for liver diseases. The first to receive regulatory clearance in the US and Europe was odevixibat (brand name Bylvay), a bile acid transport inhibitor. It was approved and marketed in the U.S. and Europe for a rare pediatric liver disease for which there was no other FDA-approved medical treatment.
That was a publicly visible triumph, but behind the scenes, 2022 was a tumultuous year. We were focused on launching Bylvay globally and advancing two new clinical stage assets when we were first approached to sell the company in February.
We rejected multiple initial offers, either because the price was too low or because we didn’t think the timing was right for the company. We finally agreed to undergo a period of diligence and finalized a counterproposal. We thought this was a good match and were hopeful the deal would go through. But in May, the potential acquiror’s board decided to walk away.
It was a shock and big disappointment on many levels.
After this exhausting process which ultimately led nowhere, we decided to get back to focusing on what we do best — developing and delivering important medicines for liver disease that would increase value for Albireo shareholders.
Surprisingly, it didn’t take long for multiple different suitors to come calling again. We went through an intensive due diligence and negotiation with three of these companies up until just before the JP Morgan Healthcare Conference in January 2023.
Our board ultimately determined Ipsen’s global R&D and commercial capabilities, together with the terms Ipsen provided, presented the best option for all stakeholders. We announced the merger agreement and offer on the opening morning of the JP Morgan event in San Francisco. The sale was formally completed on March 3, 2023.
It was a tough decision to sell the company. It meant breaking up a highly committed A-Team and changing the nature of the close ties we had developed with one another, and with patients, families and their clinicians. But we knew that Ipsen could accelerate access to Bylvay and ensure that the potential of the three new product candidates would be maximized.
Biotech is not for the faint of heart. Ditto for commercializing medicines globally, taking a company public and navigating a complicated acquisition process. It takes extreme dedication and grit, as a company, to get through these challenges. Culture gets tested by these normal events in corporate life – a strong culture stays together, while weaker ones can come unglued.
We had a few things going for us that helped us through the hard times. We believed in the science. We believed in our team. And we believed in our unrelenting commitment to bring hope to the patients and families we served.
Did we do everything right? Absolutely not. In the first potential deal, I engaged too many people too early and failed to adequately discern alignment with the acquiring firm’s philosophy. (More on that later.)
Everyone makes mistakes, but I wanted our team to learn from ours and not to repeat the same ones over and over. When something was clearly working well, we sought to turn it into a standard procedure, a hallmark of our culture.
While no two transactions are the same, here are seven strategies I’d repeat if I sold another company:
If you’re exploring or navigating a biotech sale, I hope these lessons learned can help. The journey isn’t easy, but when you find the right acquiring partner, it’s worth it. We know our drug will be able to reach the vast majority of the approximately 100,000 patients around the world who might benefit from it. Our shareholders, employees, and our community are sharing in the financial rewards.
I would not trade my Albireo experience for anything. It’s been a true gift to build an amazing team and work with wonderful parents, clinicians, investors and bankers to serve our patients. And with Ipsen’s acquisition, I’m confident that the work will continue. A large percentage of the Albireo employees plan to stay with Ipsen, while the ones who leave will have freedom to pursue other life interests and career opportunities on their own terms.
I look forward to my next leadership role and if I’m fortunate, I may get to apply these lessons learned again. In the meantime, I am taking a six-month “sabbatical” to enjoy my family, catch up on life and get ready for my next challenge.
Today’s guest on The Long Run is Jodie Morrison.
Jodie is the acting CEO at Waltham, Mass.-based Q32 Bio. It’s a company developing treatments for autoimmune and inflammatory diseases. It has an antibody in development with Horizon Therapeutics aimed at IL-7 receptor alpha, in Phase II for the treatment of atopic dermatitis. It also has wholly-owned programs aimed at the complement system of the innate immune system, with the intent of making treatments that are tissue-targeted.
She came to this position after a series of executive roles and board positions. Her first stint as a CEO, at Tokai Pharmaceuticals, didn’t end well. She dusted herself off and came back to play a role in back-to-back successful outcomes at Syntimmune, Keryx, and Cadent Therapeutics.
In this episode, we talk about how Jodie developed the confidence to lead from some of her early career experiences, how she thinks about hiring, and, at the end of the conversation, she provides some advice to young women seeking to grow and advance in the biotech industry.
And now for a word from the sponsor of The Long Run.
Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasizes intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and machine learning, and drug discovery and development.
Connect with Occam:
Now, please join me and Jodie Morrison on The Long Run.
Generative AI, the transformative technology of the moment, exploded onto the scene with the arrival in late 2022 of ChatGPT, an AI-powered chatbot developed by the company OpenAI.
After only five days, a million users had tried the app; after two months: 100 million, the fastest growth ever seen for a consumer application. TikTok, the previous record holder, took nine months to reach 100 million users; Instagram had taken 2.5 years.
Optimists thrill to the potential AI offers humanity (“Why AI Will Save The World”), while doomers catastrophize (“The Only Way To Deal With The Threat From AI? Shut It Down”). Consultants and bankers offer frameworks and roadmaps and persuade anxious clients they are already behind. Just this week, McKinsey predicted that generative AI could add $4.4 trillion in value to the global economy. Morgan Stanley envisions a $6 trillion opportunity in AI as a whole, while Goldman Sachs says 7% of jobs in the U.S. could be replaced by AI.
But if we’ve learned anything from previous transformative technologies, it’s that at the outset, nobody has any real idea how these technologies will evolve, much less change the world. When Edison invented the phonograph, he thought it might be used to record wills. The internet arose from a government effort to enable decentralized communication in case of enemy attack.
As we start to contemplate – and are thrust into – an uncertain future, we might take a moment to see what we can learn about technology from the past.
Yogi Berra, of course, observed that “it is difficult to make predictions, especially about the future,” and forecasts about the evolution of technology bear him out.
In 1977, Ken Olsen, President of Digital Equipment Corporation (DEC), told attendees of the World Futures Conference in Boston that “there is no reason for any individual to have a computer in their home.” In 1980, the management consultants at McKinsey projected that by 2000, there might be 900,000 cell phone users in the U.S.; they were off by over 100-fold; the actual number was above 119 million.
On the other hand, much-hyped technologies like 3D TV, Google Glass, and the Segway never really took off. For others, like cryptocurrency and virtual reality, the jury is still out.
AI itself has been notoriously difficult to predict. For example, in 2016, AI expert Geoffrey Hinton declared:
“Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down, so doesn’t realize there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists because it’s going to get a lot more experience. It might be 10 years, but we’ve got plenty of radiologists already.”
Writing five years after this prediction, in his book The New Goliaths (2022), Boston University economist Jim Bessen observes that “no radiology jobs have been lost” to AI, and in fact, “there’s a worldwide shortage of radiologists.”
As Bessen notes, we tend to drastically overstate job losses due to new technology, especially in the near term. He calls this the “automation paradox,” and explains that new technologies (including AI) are “not so much replacing humans with machines as they are enhancing human labor, allowing workers to do more, provide better quality, and do new things.”
Following the introduction of the ATM, the number of bank tellers employed actually increased, Bessen reports. Same for cashiers after the introduction of the bar code scanner, and for paralegals after the introduction of litigation-focused software products.
The reason, Bessen explains, is that as workers become more productive, the cost of what they’re making tends to go down, which often unleashes greater consumer demand – at least up to a point.
For instance, automation in textiles enabled customers to afford not just a single outfit, but an entire wardrobe. Consequently, from the mid-nineteenth century to the mid-twentieth century, “employment in the cotton textile industry grew alongside automation, even as automation was dramatically reshaping the industry,” Bessen writes.
Yet after around 1940, automation continued to improve the efficiency of textile manufacturing, but consumer demand was largely sated; consequently, he says, employment in the U.S. cotton textile industry has decreased dramatically, from around 400,000 production workers in the 1940s to less than 20,000 today.
The point is that if historical precedent is a guide, the introduction of a new technology like generative AI will be accompanied by grave predictions of mass unemployment, as well as far more limited, but real examples of job loss, as we’ve seen in recent reporting. In practice, generative AI is likely to alter far more jobs than it eliminates and will likely create entirely new categories of work.
For example, Children’s Hospital in Boston recently advertised for the role “AI prompt engineer,” seeking a person skilled at effectively interacting with ChatGPT.
More generally, while it can be difficult to predict exactly how a new technology will evolve, we can learn from the trajectories previous technological revolutions have followed, as economist Carlota Perez classically described in her 2002 book, Technological Revolutions and Financial Capital.
Among Perez’s most important observations is how long it takes to realize “the full fruits of technological revolutions.” She notes that “two or three decades of turbulent adaptation and assimilation elapse from the moment when the set of new technologies, products, industries, and infrastructures make their first impact to the beginning of a ‘golden age’ or ‘era of good feeling’ based on them.”
The Perez model describes two broad phases of technology revolutions: installation and deployment.
The installation phase begins when a new technology “irrupts,” and the world tries to figure out what it means and what to do with it. She describes this as a time of “explosive growth and rapid innovation,” as well as what she calls “frenzy,” characterized by “flourishing of the new industries, technology systems, and infrastructures, with intensive investment and market growth.” There’s considerable high-risk investment into startups seeking to leverage the new technology; most of these companies fail, but some achieve outsized, durable success.
It isn’t until the deployment phase that the technology finally achieves wide adoption and use. This period is characterized by the continued growth of the technology, and “full expansion of innovation and market potential.” Ultimately, the technology enters the “maturity” stage, where the last bits of incremental improvement are extracted.
As Perez explained to me, “A single technology, however powerful and versatile, is not a technological revolution.” While she describes AI as “an important revolutionary technology … likely to spawn a whole system of uses and innovations around it,” she’s not yet sure whether it will evolve into the sort of full-blown technology revolution she has previously described.
One possibility, she says, is that AI initiates a new “major system” – AI and robotics – within an ongoing information and communication technology revolution.
At this point, it seems plausible to imagine we’re early in the installation stage of AI (particularly generative AI), where there’s all sorts of exuberance, and an extraordinary amount of investing and startup activity. At the same time, we’re frenetically struggling to get our heads around this technology and figure out how to most effectively (and responsibly) use it.
This is normal.
Technology, as I wrote in 2019, “rarely arrives on the scene fully formed—more often it is rough-hewn and finicky, offering attractive but elusive potential.”
As Bessen has pointed out, “invention is not implementation,” and it can take decades to work out how best to use something novel. “Major new technologies typically go through long periods of sequential innovation,” Bessen observes, adding, “Often the person who originally conceived a general invention idea is forgotten.”
The complex process associated with figuring out how to best utilize a new technology may account, at least in part, for what’s been termed the “productivity paradox” – the frequent failure of a new technology to impart significant productivity improvement. We think of this frequently in the context of digital technology; economist Robert Solow wryly observed in a 1987 New York Times book review that “You can see the computer age everywhere but in the productivity statistics.”
However, as Paul A. David, an economic historian at Stanford, noted in his classic 1990 paper, “The Dynamo and the Computer,” a remarkably similar gap was present a hundred years earlier, in the history of electrification. David writes that at the dawn of the 20th century, two decades after the invention of the incandescent light bulb (1879) and the installation of Edison central generating stations in New York and London (1881), there was very little economic productivity to show for it.
David goes on to demonstrate that the simple substitution of electric power for steam power in existing factories didn’t really improve productivity very much. Rather, it was the long subsequent process of iterative reimagination of factories, enabled by electricity, that allowed the potential of this emerging technology to be fully expressed.
A similar point is made by Northwestern economic historian Robert Gordon in his 2016 treatise The Rise and Fall of American Growth. Describing the evolution of innovation in transportation, Gordon observes that “most of the benefits to individuals came not within a decade of the initial innovation, but over subsequent decades as subsidiary and complementary sub-inventions and incremental improvements became manifest.”
As Bessen documents in Learning by Doing (2015), using examples ranging from the power loom (where efficiency improved by a factor of twenty), to petroleum refinement, to the generation of energy from coal, remarkable improvements occurred during the often-lengthy process of implementation, as motivated users figured out how to do things better — “learning by doing.”
Many of these improvements (as I’ve noted) are driven by what Massachusetts Institute of Technology professor Eric von Hippel calls “field discovery,” involving frontline innovators motivated by a specific, practical problem they’re trying to solve.
Such innovative users—the sort of people whom Judah Folkman labeled “inquisitive physicians”—play a critical role in discovering and refining new products, including in medicine; a 2006 study led by von Hippel of new (off-label) applications for approved new molecular entities revealed that nearly 60% were originally discovered by practicing clinicians.
What does this history of innovation mean for the emerging technology of the moment, generative AI?
First, we should take a deep breath, and recognize that we are in the earliest days of technology evolution, and nobody knows how it’s going to play out. Not the experts developing it, not the critics bemoaning it, not the consultants trying to sell work off our collective anxiety around it.
Second, we should acknowledge that the full benefits of the technology will take some time to appear. Expectations of immediate productivity gains from simply plugging in AI seem naïve: the dynamo-for-steam substitution all over again. While there are clearly some immediate uses for generative AI, the more substantial benefits will likely require continued evolution of both technology and workflow processes.
Third, it’s unlikely that AI will replace most workers, but it will require many of us to change how we get our jobs done – an exciting opportunity for some, an unwelcome obligation for others. AI will also create new categories of work, and introduce new challenges for governance, ethics, regulation, and privacy.
Fourth, and perhaps most importantly: As mind-blowing as generative AI is, the technology is not magic. It doesn’t descend from the heavens (or Silicon Valley), deus ex machina, with the ability to resolve sticky ethical challenges, untangle complex biological problems, and generally ease the woes of humanity.
But while technology isn’t a magic answer, it’s proved a historically valuable tool, driving profound improvements in the human condition, and enabling tremendous advances in science. The discovery of the microscope, telescope, and calculus all allowed us to better understand nature, and to develop more impactful solutions.
Technology changes the world in utterly unexpected and unpredictable ways.
How exciting to live in this moment, and to have the opportunity — and responsibility — to observe and shape the evolution of a remarkable technology like generative AI.
Yes, there are skeptics. I have a number of friends and colleagues who have decided to sit this one out, reflexively dismissing the technology because they’ve heard it hallucinates (it does), or because of privacy concerns (a real worry), or because they’re turned off by the relentless hype (I agree!).
But I would suggest that we owe it to ourselves to engage with this technology, familiarize ourselves, through practice, with its capabilities and limitations.
We can be the lead users generative AI — like all powerful but immature transformative technologies — requires to evolve from promise to practice.
Aaron Ring is today’s guest on The Long Run.
Aaron is an associate professor of immunobiology at Yale University for a little while longer. He’s moving his lab to the Fred Hutchinson Cancer Center in Seattle in the summer of 2023.
Though still early in his scientific career, Aaron has done some fascinating work in protein engineering and immunology. He has founded three startup companies to translate the research from his lab – Simcha Therapeutics, Seranova Bio, and Stipple Bio. Simcha is working on an engineered form of IL-18 for the treatment of cancer, while Seranova Bio is using technology to identify auto-antibodies that might point the way to new approaches to treat people with autoimmune diseases, cancer, and perhaps neurological diseases.
Timmerman Report subscribers can go back and read a startup profile I did of Simcha back in January 2022 to get the gist. The engineered IL-18 has shown comparable monotherapy efficacy in animals to PD-1 inhibitors, and it has been able to raise the bar in combination with those standard cancer therapies. SR One led a $40 million Series B financing of the company in 2022, and was joined by BVF Partners, Samsara BioCapital, Rock Springs Capital, ArrowMark Partners, and Logos Capital among others. Foresite Capital and A16Z have backed Aaron’s other ventures.
In this conversation we talked about how Aaron developed his interest in science, how he thinks about which problems to go after, and using the new tools of biology and the data they throw off to develop better therapies.
Now for a word from the sponsor of The Long Run.
Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!
Save time and money and focus on what really matters: your groundbreaking ideas.
Learn more at:
Now, please join me and Aaron Ring on The Long Run.
As the excitement around generative AI sweeps across the globe, biopharma R&D groups (like most everyone else) are actively trying to figure out how to leverage this powerful but nascent technology effectively, and in a responsible fashion.
In separate conversations, two prominent pharma R&D executives recently sat down with savvy healthtech VCs to discuss how generative AI specifically, and emerging digital technologies more generally, are poised to transform the ways new medicines are discovered, developed, and delivered.
The word “poised” is doing quite a lot of work in the sentence above. Both conversations seamlessly and rather expertly blend what’s actually been accomplished (a little bit) with the vision of what might be achieved (everything and then some).
The first conversation, from the a16z “Bio Eats World” podcast, features Greg Meyers, EVP and Chief Digital and Technology Officer of Bristol Myers Squibb (BMS), and a16z Bio+Health General Partner Dr. Jorge Conde. The second discussion, from the BIOS community, features Dr. Frank Nestle, Global Head of Research and CSO, Sanofi, and Flagship Pioneering General Partner and Founder and CEO of Valo Health, Dr. David Berry. (Readers may recall our discussion of a previous BIOS-hosted interview with Dr. Nestle, here.)
Rather than review each conversation individually, I thought it would be more useful to discuss common themes emerging from the pair of discussions.
AI has started to contribute meaningfully to the design of small molecules in the early stages of drug development. “A few years ago,” Meyers says, BMS started “to incorporate machine learning to try to predict whether or not a certain chemical profile would have the bioreactivity you’re hoping.” He says this worked so well (producing a “huge spike” in hit rate) that they’ve been trying to scale this up.
Meyers also says BMS researchers “are currently using AI pretty heavily in our protein degrader program,” noting “it’s been very helpful” in enabling the team to sort through different types of designs.
Nestle also highlights the role of AI in developing novel small compounds. “AI-empowered models” are contributing to the design of molecules, he says, and are starting to “shift the cycle times” for the industry.
AI is also now contributing to the development of both digital and molecular biomarkers. For example, Meyers described the use of AI to analyze a routine 12-lead ECG to identify patients who might have undiagnosed hypertrophic cardiomyopathy. (Readers may recall a very similar approach used by Janssen to diagnose pulmonary artery hypertension, see here.)
Nestle offered an example from digital pathology. He described a collaboration with the healthtech company Owkin, whose AI technology, he says, can help analyze the microscope slides with classically stained tissue samples.
Depending on your perspective, these use cases are either pretty slim pickings or an incredibly promising start.
I’ve not included what seemed to me to be still-exploratory efforts involving two long-standing industry aspirations:
We’ll return to these important but elusive ambitions later, in our discussion of “the magic vat.”
I’ve also not included examples of generative AI, because I didn’t hear much in the way of specifics here, probably because it’s still such early days. There was clearly excitement around the concept that, as Meyers put it, “proteins are a lot like the human language,” and hence, large language models might be gainfully applied to this domain.
The aspirations for AI in biopharma R&D were as expansive as the established proof points were sparse. The lofty idea seems to be that with enough data points and computation, it will eventually be possible to create viable new medicines entirely in silico. VC David Berry described an “aspiration to truly make drug discovery and development programmable from end to end.” Nestle wondered about developing an effective antibody drug “virtually,” suggesting it may be possible in the future. Also possible, he suggests: “the ability to approve a safe and effective drug in a certain indication, without running a single clinical trial.”
Both Nestle and Meyers cited the same estimate – 10^60 – as the size of “chemical space,” the number of different drug-like molecular structures that are theoretically possible. It’s a staggering number, more than the stars in the universe, and likely far beyond our ability to meaningfully comprehend. The point both executives were making is that if we want to explore this space productively, we’re going to get a lot further using sophisticated computation than relying on the traditional approaches of intuition, trial and error.
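A quick back-of-the-envelope sketch makes the scale of that 10^60 figure concrete. The star count (~10^24 for the observable universe) and the billion-molecules-per-second screening rate are rough illustrative assumptions of mine, not figures from either conversation:

```python
# Rough magnitude check on "chemical space" (~10^60 drug-like molecules,
# the estimate both executives cite).
chemical_space = 10**60

# Commonly cited order-of-magnitude estimate of stars in the observable
# universe (an assumption for illustration only).
stars_estimate = 10**24

ratio = chemical_space // stars_estimate  # exact integer arithmetic
print(f"Chemical space exceeds the star estimate by a factor of 10^{len(str(ratio)) - 1}")

# Even screening a billion molecules per second for the age of the
# universe (~4.4e17 seconds) would cover a vanishing sliver of the space.
screened = 10**9 * int(4.4e17)
fraction = screened / chemical_space
print(f"Fraction of chemical space covered: {fraction:.1e}")
```

The takeaway matches the executives’ point: exhaustive search is hopeless at this scale, which is why computational prioritization matters.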
The underlying aspiration here strikes a familiar chord for those of us who remember some of the more extravagant expectations driving the Human Genome Project. For instance, South African biologist Sydney Brenner reportedly claimed that if he had “a complete sequence of DNA of an organism and a large enough computer” then he “could compute the organism.” While the sequencing of the genome contributed enormously to biomedical science, our understanding of the human organism remains woefully incomplete, and largely uncomputed. It’s easy to imagine that our hubris – and our overconfidence in our ability to domesticate scientific research, as Taleb and I argued in 2008 – may be again deceiving us.
For years, healthcare organizations have strived towards the goal of establishing a “learning health system (LHS),” where knowledge from each patient is routinely captured and systematically leveraged to improve the care of future patients. As I have discussed in detail (see here), the LHS is an entity that appears to exist only as an ideal within the pages of academic journals, rather than embodied in the physical world.
Many pharma organizations (as I’ve discussed previously) aspire towards a similar vision, and seek to make better use of all the data they generate. As Meyers puts it, you “want to make sure that you never run the same experiment twice,” and you want to capture and make effective use of the digital “exhaust” from experiments, in part by ensuring it’s able to be interpreted by computers.
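Making experimental “exhaust” interpretable by computers generally starts with capturing structured, machine-readable metadata alongside each result. A minimal sketch of the idea follows; the schema and field names are entirely my own illustration, not any company’s actual data model:

```python
import json

# Hypothetical structured record for one assay run, so the result can be
# found and reused later ("never run the same experiment twice").
record = {
    "experiment_id": "ASSAY-2023-0001",   # stable, unique identifier
    "assay_type": "kinase_inhibition",    # drawn from a controlled vocabulary
    "compound_id": "CMPD-12345",
    "target": "EGFR",
    "readout": {"ic50_nM": 12.4, "n_replicates": 3},
    "protocol_version": "v2.1",           # ties the result to exact conditions
}

# Serializing to JSON keeps the record both human- and machine-readable,
# and a round-trip confirms nothing is lost.
serialized = json.dumps(record, sort_keys=True)
restored = json.loads(serialized)
assert restored == record
```

The design point is that reuse depends less on any particular format than on consistent identifiers and controlled vocabularies, which is exactly what siloed, single-use data collection tends to omit.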
Berry emphasized that a goal of the Flagship company Valo (where he now also serves as CEO) is to “use data and computation to unify how… data is used across all of the steps [of drug development], how data is shared across the steps.” Such integration, Berry argues, “will increase probability of success, will help us reduce time, will help reduce cost.”
The problem – as I’ve discussed, and as Berry points out – is that “drug discovery and development has historically been a highly siloed industry. And the challenge is it’s created data silos and operational silos.”
The question, more generally, is how to unlock the purported value associated with, as Nestle puts it, the “incredible treasure chest of data” that “large pharmaceutical companies…sit on.”
Historically, pharma data has been collected with a single, high-value use in mind. The data are generally not organized, identified, and architected for re-use. Moreover, as Nestle emphasizes, the incentives within pharma companies (the so-called key performance indicators or “KPIs”) are “not necessarily in the foundational space, and that’s not where the resourcing typically goes.” In other words, what companies value and track are performance measures like speed of trial recruitment; no one is really evaluating data fluidity, and unless you can directly tie data fluidity to a traditional performance measure, it will struggle to be prioritized.
In contrast, companies like Valo; other Flagship companies like Moderna; and some but not all emerging biopharma companies are constructed (or reconstructed — eg Valo includes components of both Numerate and Forma Therapeutics, as well as TARA biosystems) with the explicit intention of avoiding data silos. This concept, foundational to Amazon in the context of the often-cited 2002 “Bezos Memo,” was discussed here.
In contrast, pharmas have entrenched silos; historically, data were collected to meet the specific needs of a particular functional group, responsible for a specific step in the drug development process. Access to these data (as I recently discussed) tends to be tightly controlled.
Data-focused biotech startups tend to look at big pharma’s traditional approach to data and see profound opportunities for disruption. Meanwhile, pharmas tend to look at these data-oriented startups and say, “Sure, that sounds great. Now what have you got to show for all your investment in this?”
The result is a standoff of sorts, where pharmas try to retrofit their approach to data yet are typically hampered by the organizational and cultural silos that have very little interest in facilitating data access. Meanwhile, data biotech startups are working towards a far more fluid approach to data, yet have produced little tangible and compelling evidence to date that they are more effective, or are likely to be more effective, at delivering high impact medicines to patients.
Both BMS and Sanofi are exploring emerging technologies through investments and partnerships with a number of healthtech startups, even as both emphasize that they are also building internal capabilities.
“We have over 200 partnerships,” Meyers notes, “including several equity positions with other companies that really come from the in silico, pure-play sort of business. And we’ve learned a ton from them.”
Similarly, Nestle (again – see here) emphasized key partnerships, including the Owkin relationship and digital biomarker work with MIT Professor Dina Katabi.
Meanwhile, Pfizer recently announced an open innovation competition to source generative AI solutions to a particular company need: creating clinical study reports.
In addition to these examples, I’ve become increasingly aware of a number of other AI-related projects attributed to pharma companies that, upon closer inspection, turn out to represent discrete engagements with external partners or vendors who reportedly are leveraging AI.
One of the most important lessons from both discussions concerned the challenges facing aspiring innovators and startups.
Berry, for example, explained why it’s so difficult for AI approaches to gain traction. “If I want to prove, statistically, that AI or an AI component is doing a better job, how many Phase Two clinical readouts does one actually need to believe it on a statistical basis? If you’re a small company and you want to do it one by one, it’s going to take a few generations. That’s not going to work.”
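Berry’s question has a rough statistical answer. Using a standard two-proportion sample-size approximation with illustrative numbers (the success rates below are my assumptions for the sake of the exercise, not figures from the discussion), suppose conventional programs succeed in Phase 2 about 30% of the time and an AI-derived portfolio succeeded 50% of the time:

```python
import math

# Normal-approximation sample size for detecting a difference between two
# proportions. Assumed rates are illustrative: ~30% historical Phase 2
# success vs. a hypothetical 50% for AI-derived candidates.
p1, p2 = 0.30, 0.50
z_alpha = 1.96           # two-sided significance level of 0.05
z_beta = 0.84            # 80% power
p_bar = (p1 + p2) / 2    # pooled proportion

n = math.ceil(
    (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p1 - p2) ** 2
)
print(n)  # ~93 Phase 2 readouts per group
```

By this rough calculation, a head-to-head statistical demonstration would require on the order of a hundred Phase 2 readouts per arm, far beyond any small company’s reach, which is precisely Berry’s point.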
On the other hand, he suggested “there are portions of the drug discovery and development cascade where we’re starting to see insights that are actionable, that are tangible, and the timelines of them and the cost points of them are so quickly becoming transformative that it opens up the potential for AI to have a real impact.”
Meyers, for his part, offered exceptionally relevant advice for AI startups pitching to pharma (in fact, the final section of the episode should be required listening for all biotech AI founders).
Among the problems Meyers highlights – familiar to readers of this column – is the need “for companies that are focused on solving a real-world problem,” rather than solutions in search of a problem. He also emphasized that “this is an industry that will not adopt something unless it is really 10x better than the way things are historically done.”
This presents a real barrier to the sort of incremental change that may be hard to appreciate in the near term but can deliver appreciable value over time. Even “slight improvements” in translational predictive models, as we recently learned from Jack Scannell, can deliver outsized impact, significantly elevating the probability of success while reducing the many burdens of failure.
Meyers also reminded listeners of the challenges of finding product-market fit because healthcare “is the only industry where the consumer, the customer, and the payor are all different people and they don’t always have incentives that are aligned.” (See here.)
On a more optimistic note, Berry noted that one of the most important competitive advantages a founder has is recognizing that “a problem is solvable, because that turns out to be one of the most powerful pieces of information.” For Berry, the emergence of AI means that “we can start seeing at much larger scales problems that are solvable that we didn’t previously know to be solvable.” Moreover, he argues, once we realize a problem is solvable, we’re more likely to apply ourselves to this challenge.
In thinking about how to most effectively leverage AI, and digital and data more generally, in R&D, I’m left with two thoughts that are somewhat in tension.
The first borrows (or bastardizes) a phrase from the brilliant Stephen Wolfram: look for pockets of reducibility. In other words – focus your technology not on fixing all of drug development, but on addressing a specific, important problem that you can meaningfully impact.
For instance, I was speaking earlier this week with one of the world’s experts on data standards. I asked him how generative AI as “universal translator” (to use Peter Lee’s term) might obviate the need for standards. While the expert agreed conceptually, his immediate focus was on figuring out how to pragmatically apply generative AI tools like GPT-4 to standard generation so that it could be done more efficiently, potentially with people validating the output rather than generating it.
On the one hand, you might argue this is disappointingly incremental. On the other hand, it’s implementable immediately, and seems likely to have a tangible impact.
(In my own work, I am spending much of my time focused on identifying and enabling such tangible opportunities within R&D.)
There’s another part of me, of course, that both admires and deeply resonates with the integrated approach that companies like Valo are taking: the idea and aspiration that if, from the outset, you deliberately collect and organize your data in a thoughtful way, you can generate novel insights that cross functional silos (just as Berry says). These insights, in principle, have the potential to accelerate discovery, translation (a critical need that this column has frequently discussed, and that Conde appropriately emphasized), and clinical development.
Integrating diverse data to drive insights has captivated me for decades; it’s a topic I’ve discussed in a 2009 Nature Reviews Drug Discovery paper I wrote with Eric Schadt and Stephen Friend. The value of integrating phenotypic data with genetic data was also a key tenet I brought to my DNAnexus Chief Medical Officer role, and a lens through which I evaluated companies when I subsequently served as corporate VC.
Consequently, I am passionately rooting for Berry at Valo – and for Daphne Koller’s insitro and Chris Gibson’s Recursion. I’m rooting for Pathos, a company founded by Tempus that’s focused on “integrating data into every step of the process and thereby creating a self-learning and self-correcting therapeutics engine,” and that has recruited Schadt to be the Chief Science Officer. I’m also rooting for Aviv Regev at Genentech, and I am excited by her integrative approach to early R&D.
But throughout my career, I’ve also seen just how challenging it can be to move from attractive integrative ambition to meaningful drugs. I’ve seen so many variations of the “magic vat,” where all available scientific data are poured in, a dusting of charmed analysis powder is added (network theory, the latest AI, etc), the mixture is stirred, and then – presto! – insights appear.
Or, more typically, not. But (we’re invariably told) these insights would arrive (are poised to arrive) if only there were more funding/more samples/just one more category of ‘omics data, etc. — all real examples by the way.
It’s possible that this time will be the charm – we’ve been told, after all, that generative AI “changes everything” — but you can also understand the skepticism.
My sense is that legacy pharmas are likely to remain resistant to changing their siloed approach to data until they see compelling evidence that data integration approaches are, if not 10x better, then at least offer meaningful and measurable improvement. In my own work, I’m intensively seeking to identify and catalyze transformative opportunities for cross-silo integration of scientific data across at least some domains, since effective translation absolutely requires it.
For now, big pharmas are likely to remain largely empires of silos – and will continue to do the step-by-step siloed work comprising drug development at a global scale better than anyone. Technology, including AI, may help to improve the efficiency of specific steps (eg protocol drafting, an example Meyers cites). Technology may also improve the efficiency of sequential data handoffs, critical for drug development, and help track operational performance, providing invaluable information to managers, as discussed here.
But foundationally integrating scientific knowledge across organizational silos? Unless a data management organization already deeply embedded within many pharmas – perhaps a company like Veeva or Medidata – enables it, routine integration of scientific knowledge across long-established silos, in the near to medium term, seems unlikely. It may take a visionary, persistent and determined startup (Valo? Pathos?) to persuasively capture the value that must be there.
Biopharma companies are keenly interested in leveraging generative AI, and digital and data technologies more generally, in R&D. To date, meaningful implementations of AI in large pharmas seem relatively limited, and largely focused on small molecule design and biomarker analysis (such as identifying potential patients through routine ECGs). Nevertheless, the ambitions for AI in R&D seem enormous, perhaps even fanciful, envisioning virtual drug development and perhaps even in silico regulatory approvals. More immediately, pharmas aspire to make more complete use of the data they collect but are likely to continue to struggle with long-established functional silos. External partnerships provide access to emerging technologies, but it can be difficult for healthtech startups to find a permanent foothold with large pharmas. Technology focused on alleviating important, specific problems – “pockets of reducibility” – seems most likely to find traction in the near term. Ambitious founders continue to pursue the vision of more complete data integration.