10 Jul 2023

Detecting Cancer Early When It’s Most Treatable: Kevin Conroy on The Long Run

Today’s guest on The Long Run is Kevin Conroy.

Kevin is the chairman and CEO of Madison, Wis.-based Exact Sciences.

Kevin Conroy, chairman and CEO, Exact Sciences

Exact Sciences has grown over the past decade into a success story for cancer screening and diagnosis. It’s best known for marketing the noninvasive Cologuard test that screens people for colorectal cancer.

It also markets the Oncotype DX test that’s used to predict the likelihood that a patient with breast cancer will have a recurrence, and whether a preventive round of chemotherapy is likely to be beneficial. Exact is also developing a blood-based screening test that it hopes will be able to detect early signs of many, many types of cancer that aren’t routinely detected until the disease has already caused a lot of damage.

The Cologuard test has now been run 12 million times.

Kevin has a fascinating personal story, starting with the environment where he grew up — Flint, Michigan. He joined Exact Sciences as CEO in 2009 when the company was on the ropes. Kevin and his colleagues set audacious goals, and persevered to build a company that now has a market value of more than $16 billion.

In this conversation, Kevin shares this company story, along with some of his insights into building a company in the Upper Midwest, the importance of partnerships, and where cancer screening and diagnosis is heading.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters: your groundbreaking ideas.

Learn more at

Scientist.com/LongRun

It’s summertime, which means it’s time to plan for the next biotech team expedition. The Timmerman Traverse for Damon Runyon Cancer Research Foundation is scheduled for Feb. 7-18, 2024. I’m assembling a team of biotech executives and investors to hike to the summit of Mt. Kilimanjaro. If you are up for this trip of a lifetime and want to be part of a team that raises $1 million to support bright young cancer researchers all over the US, send me a note: luke@timmermanreport.com

Now, please join me and Kevin Conroy on The Long Run.

10 Jul 2023

Lessons Learned from the Intense Back-and-Forth Over a $1B Acquisition

Ron Cooper, former CEO, Albireo; board member, Generation Bio

Since 2018, about 90 biotech companies have been acquired for more than $500 million. That’s only about 15 to 20 companies per year, a tiny fraction of the approximately 10,000 biotechnology companies around the world. 

I was privileged to be part of one of those transactions, the sale of Albireo Pharma to Ipsen in January for $952 million upfront, plus contingent value rights that could push the value to well over $1 billion.  

The acquisition will help more patients around the world gain faster access to the innovative medicines Albireo developed. It’s a win for those patients – mostly children with rare diseases – as well as their families, our investors, and the people at both companies.

That’s the triumphant narrative common in M&A press releases. But a lot of things happened behind the scenes to get there. Here are a few lessons I picked up along the way.

Brief Background

Albireo Pharma started as a spinoff from AstraZeneca in Sweden. When I joined the company in 2015, we were privately held with 10 employees. We had great science and an ambitious goal: to design breakthrough drugs for children and bring hope to families.

But as with many other biotechs, the development cycle was ahead of the funding cycle. That’s one way of saying our product candidates were advancing rapidly in the clinic, gathering the evidence needed to succeed in the marketplace, but investors weren’t yet convinced to provide the support we needed.

Out of necessity, we decided to divest or shelve some of our R&D programs and put most of our remaining resources behind odevixibat for rare pediatric liver diseases.

In the early days in the mid-2010s, there were some weeks when we nearly ran out of cash. At times, some people questioned our unconventional strategies, such as seeking feedback on our Phase III trial from FDA and EMA before the Phase II study was completed, and conducting a single Phase III study with two different primary endpoints. And even though the biotech financial markets were generally trending upward in most of these years, the markets always pressured us to be at our absolute best to secure every dollar of funding.

By 2022, we’d cleared many of the hurdles inherent to development-stage biotech. We’d grown to ~200 employees globally. We had developed four different medicines for liver diseases. The first to receive regulatory clearance was odevixibat (brand name Bylvay), a bile acid transport inhibitor. It was approved and marketed in the U.S. and Europe for a rare pediatric liver disease for which there was no other FDA-approved medical treatment.

That was a publicly visible triumph, but behind the scenes, 2022 was a tumultuous year. We were focused on launching Bylvay globally and advancing two new clinical stage assets when we were first approached to sell the company in February.

We rejected multiple initial offers, either because the price was too low or because we didn’t think the timing was right for the company. We finally agreed to undergo a period of diligence and finalized a counterproposal. We thought this was a good match and were hopeful the deal would go through. But in May, the potential acquiror’s board decided to walk away.

It was a shock and big disappointment on many levels.

After this exhausting process, which ultimately led nowhere, we decided to get back to focusing on what we do best: developing and delivering important medicines for liver disease that would increase value for Albireo shareholders.

Surprisingly, it didn’t take long for multiple suitors to come calling again. We went through intensive due diligence and negotiations with three of these companies, right up until just before the JP Morgan Healthcare Conference in January 2023.

Our board ultimately determined Ipsen’s global R&D and commercial capabilities, together with the terms Ipsen provided, presented the best option for all stakeholders. We announced the merger agreement and offer on the opening morning of the JP Morgan event in San Francisco. The sale was formally completed on March 3, 2023.

It was a tough decision to sell the company. It meant breaking up a highly committed A-Team and changing the nature of the close ties we had developed with one another, and with patients, families and their clinicians. But we knew that Ipsen could accelerate access to Bylvay and ensure that the potential of the three new product candidates would be maximized.

A Wild Ride

Biotech is not for the faint of heart. Ditto for commercializing medicines globally, taking a company public and navigating a complicated acquisition process. It takes extreme dedication and grit, as a company, to get through these challenges. Culture gets tested by these normal events in corporate life – a strong culture stays together, while weaker ones can come unglued.

We had a few things going for us that helped us through the hard times. We believed in the science. We believed in our team. And we believed in our unrelenting commitment to bring hope to the patients and families we served. 

Did we do everything right? Absolutely not. In the first potential deal, I engaged too many people too early and failed to adequately discern alignment with the acquiring firm’s philosophy. (More on that later.)  

Everyone makes mistakes, but I wanted our team to learn from ours and not to repeat the same ones over and over. When something was clearly working well, we sought to turn it into a standard procedure, a hallmark of our culture.

While no transaction is the same, here are seven strategies I’d repeat if I sold another company:

Lessons Learned from Albireo’s Sale
  • Balance Stakeholder Impact – Selling a company requires satisfying multiple constituencies who might, at times, have conflicting goals. An acquisition won’t work unless each stakeholder sees and understands the benefit. For Albireo, that meant taking into consideration the desires of patients, investors, employees and board members. At the right time in the process, I engaged with each stakeholder directly: listening, explaining, listening some more, and making course corrections along the way. Another stakeholder to consider, which often doesn’t get as much attention, is the local community. I wanted our transaction to benefit our local community of Boston, which supported our success in many ways. While it is difficult to benefit a local community within the context of a standard merger deal, I used the Life Science Cares Shares Program to donate a portion of my personal proceeds to outstanding local nonprofits that make Boston a better place to live and work. Thinking carefully about each constituency resulted in a more detailed gameplan for negotiations, and ensured strong support from each party who had a stake in the outcome. Result: A better gameplan + bolstered buy-in.

  • Seek and Heed Advice – We ensured our banking, legal and financial modeling advisors were top-notch, and engaged consultants for expertise in targeted areas. Their advice was invaluable every step of the way. Result: Better decisions, fewer mistakes.

  • Keep Your Circle Small – As the original suitor sized us up, and we considered a sale to them, I can see in hindsight that I involved too many players too early. This created a large distraction when we needed to focus on our patients and business. Our team members were focused on doing the right thing for Albireo, but they are human beings, and it was challenging not to think about the potential personal impact. When the first deal did not work out, there was a significant toll from the emotional rollercoaster. I applied this lesson the second time around, and kept a tighter circle on who was privy to information. When Ipsen and others approached us, I only informed a handful of key leaders. It was hard for these senior executives to manage a dual workload – doing their normal day jobs to advance the work of Albireo, while also doing all the required work to prepare for a potential acquisition in secret — but it allowed the majority of employees to focus on executing their personal deliverables. We kept our eye on the ball the second time around. Result: More patients got access to our important medicines, even as we navigated the sale.

  • Play Your Position – The initial small circle of internal people handling the competitive bidding process for Albireo included me as CEO, along with our C-level executives overseeing finance, legal, business and science. We agreed on our roles, both formal and informal. And we stuck to them. Result: We moved fast, collaborated well, and didn’t get in each other’s way.

  • Ensure Alignment – With the first potential transaction in the first quarter of 2022, I didn’t engage deeply with leaders high enough in the potential acquiror’s organization to discern alignment in business philosophy. But I learned from that mistake. In the second process, I got to know the top leaders at the bidders, including Ipsen, with a focus on connecting business beliefs and practices to help ensure a good fit. Result: A sale that benefits both organizations, patients, employees and investors.

  • Focus the Team – During the due diligence period, the potential acquirer will bombard your company with literally thousands of information requests, sometimes haphazardly, with emails at all hours of the day and night. We protected the team by creating a process to manage incoming requests, and push back on unreasonable inquiries, timelines and duplication. Result: Minimized burnout and effective information flow.

  • Have Fun Along the Way – The biotech business is intense – especially when your team cares so much about patients. Long days, long weeks, long months. The added work required to navigate a sale only increases the pressure. In this context, I learned the added value of personal relationships, humor and even a bit of silliness. Examples: I sent each team member a personal card for their birthday, or for seemingly small, but important, achievements. I regularly reached out to each diligence team member just to check in. And our town halls became more than just performance updates – we had crazy costumes, fun music videos and lots of shout-outs to bring a little levity to taxing times. Result: Extreme focus, loyalty and talent retention, despite the challenging work.

Grateful for the Experience

If you’re exploring or navigating a biotech sale, I hope these lessons learned can help. The journey isn’t easy, but when you find the right acquiring partner, it’s worth it. We know our drug will be able to reach the vast majority of the approximately 100,000 patients around the world who might benefit from it. Our shareholders, employees, and our community are sharing in the financial rewards.

I would not trade my Albireo experience for anything. It’s been a true gift to build an amazing team and work with wonderful parents, clinicians, investors and bankers to serve our patients. And with Ipsen’s acquisition, I’m confident that the work will continue. A large percentage of the Albireo employees plan to stay with Ipsen, while the ones who leave will have freedom to pursue other life interests and career opportunities on their own terms.

I look forward to my next leadership role and if I’m fortunate, I may get to apply these lessons learned again. In the meantime, I am taking a six-month “sabbatical” to enjoy my family, catch up on life and get ready for my next challenge.

27 Jun 2023

Becoming a Biotech CEO: Jodie Morrison on The Long Run

Today’s guest on The Long Run is Jodie Morrison.

Jodie is the acting CEO at Waltham, Mass.-based Q32 Bio. It’s a company developing treatments for autoimmune and inflammatory diseases. It has an antibody in development with Horizon Therapeutics aimed at IL-7 receptor alpha, in Phase II for the treatment of atopic dermatitis. It also has wholly-owned programs aimed at the complement system of the innate immune system, with the intent of making treatments that are tissue-targeted.

Jodie Morrison, acting CEO, Q32 Bio

She came to this position after a series of executive roles and board positions. Her first stint as a CEO, at Tokai Pharmaceuticals, didn’t end well. She dusted herself off and came back to play a role in back-to-back successful outcomes at Syntimmune, Keryx, and Cadent Therapeutics.

In this episode, we talk about how Jodie developed the confidence to lead from some of her early career experiences, how she thinks about hiring, and, at the end of the conversation, she provides some advice to young women seeking to grow and advance in the biotech industry.

And now for a word from the sponsor of The Long Run.

Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high-performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with Occam:

www.occam-global.com/longrun

Now, please join me and Jodie Morrison on The Long Run.

17 Jun 2023

Learning From History How to Think About the Technology of the Moment

David Shaywitz

Generative AI, the transformative technology of the moment, exploded onto the scene with the arrival in late 2022 of ChatGPT, an AI-powered chatbot developed by the company OpenAI.

After only five days, a million users had tried the app; after two months: 100 million, the fastest growth ever seen for a consumer application. TikTok, the previous record holder, took nine months to reach 100 million users; Instagram had taken 2.5 years.

Optimists thrill to the potential AI offers humanity (“Why AI Will Save The World”), while doomers catastrophize (“The Only Way To Deal With The Threat From AI? Shut It Down”). Consultants and bankers offer frameworks and roadmaps and persuade anxious clients they are already behind. Just this week, McKinsey predicted that generative AI could add $4.4 trillion in value to the global economy. Morgan Stanley envisions a $6 trillion opportunity in AI as a whole, while Goldman Sachs says 7% of jobs in the U.S. could be replaced by AI.

The one thing everyone — from Ezra Klein at the New York Times to podcasters at the Harvard Business Review — seems to agree on is that generative AI “changes everything.”

But if we’ve learned anything from previous transformative technologies, it’s that at the outset, nobody has any real idea how these technologies will evolve, much less change the world. When Edison invented the phonograph, he thought it might be used to record wills. The internet arose from a government effort to enable decentralized communication in case of enemy attack.

As we start to contemplate – and are thrust into – an uncertain future, we might take a moment to see what we can learn about technology from the past.

***

Yogi Berra, of course, observed that “it is difficult to make predictions, especially about the future,” and forecasts about the evolution of technology bear him out. 

In 1977, Ken Olsen, President of Digital Equipment Corporation (DEC), told attendees of the World Futures Conference in Boston that “there is no reason for any individual to have a computer in their home.” In 1980, the management consultants at McKinsey projected that by 2000, there might be 900,000 cell phone users in the U.S.; they were off by over 100-fold; the actual number was above 119 million. 

On the other hand, much-hyped technologies like 3D TV, Google Glass, and the Segway never really took off. For others, like cryptocurrency and virtual reality, the jury is still out.

AI itself has been notoriously difficult to predict. For example, in 2016, AI expert Geoffrey Hinton declared:

“Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down, so doesn’t realize there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists because it’s going to get a lot more experience.  It might be 10 years, but we’ve got plenty of radiologists already.”

Writing five years after this prediction, in his book The New Goliaths (2022), Boston University economist Jim Bessen observes that “no radiology jobs have been lost” to AI, and in fact, “there’s a worldwide shortage of radiologists.”

James Bessen, Executive Director of the Technology & Policy Research Initiative, Boston University.

As Bessen notes, we tend to drastically overstate job losses due to new technology, especially in the near term. He calls this the “automation paradox,” and explains that new technologies (including AI) are “not so much replacing humans with machines as they are enhancing human labor, allowing workers to do more, provide better quality, and do new things.” 

Following the introduction of the ATM, the number of bank tellers employed actually increased, Bessen reports. Same for cashiers after the introduction of the bar code scanner, and for paralegals after the introduction of litigation-focused software products.

The explanation, Bessen says, is that as workers become more productive, the costs of what they’re making tend to go down, which often unleashes greater consumer demand – at least up to a point.

For instance, automation in textiles enabled customers to afford not just a single outfit, but an entire wardrobe. Consequently, from the mid-nineteenth century to the mid-twentieth century, “employment in the cotton textile industry grew alongside automation, even as automation was dramatically reshaping the industry,” Bessen writes. 

Yet after around 1940, automation continued to improve the efficiency of textile manufacturing, but consumer demand was largely sated; consequently, he says, employment in the U.S. cotton textile industry has decreased dramatically, from around 400,000 production workers in the 1940s to fewer than 20,000 today.

Innovation image created on DALL-E.

The point is that if historical precedent is a guide, the introduction of a new technology like generative AI will be accompanied by grave predictions of mass unemployment, as well as far more limited, but real examples of job loss, as we’ve seen in recent reporting. In practice, generative AI is likely to alter far more jobs than it eliminates and will likely create entirely new categories of work.

For example, Children’s Hospital in Boston recently advertised for the role of “AI prompt engineer,” seeking a person who is skilled at effectively interacting with ChatGPT.

More generally, while it can be difficult to predict exactly how a new technology will evolve, we can learn from the trajectories previous technological revolutions have followed, as economist Carlota Perez classically described in her 2002 book, Technological Revolutions and Financial Capital.

Carlota Perez, Honorary Professor at the Institute for Innovation and Public Purpose (IIPP) at University College London.

Among Perez’s most important observations is how long it takes to realize “the full fruits of technological revolutions.” She notes that “two or three decades of turbulent adaptation and assimilation elapse from the moment when the set of new technologies, products, industries, and infrastructures make their first impact to the beginning of a ‘golden age’ or ‘era of good feeling’ based on them.” 

The Perez model describes two broad phases of technology revolutions: installation and deployment. 

The installation phase begins when a new technology “irrupts,” and the world tries to figure out what it means and what to do with it. She describes this as a time of “explosive growth and rapid innovation,” as well as what she calls “frenzy,” characterized by “flourishing of the new industries, technology systems, and infrastructures, with intensive investment and market growth.” There’s considerable high-risk investment into startups seeking to leverage the new technology; most of these companies fail, but some achieve outsized, durable success.

It isn’t until the deployment phase that the technology finally achieves wide adoption and use. This period is characterized by the continued growth of the technology, and “full expansion of innovation and market potential.” Ultimately, the technology enters the “maturity” stage, where the last bits of incremental improvement are extracted. 

As Perez explained to me, “A single technology, however powerful and versatile is not a technological revolution.” While she describes AI as “an important revolutionary technology … likely to spawn a whole system of uses and innovations around it,” she’s not yet sure whether it will evolve into the sort of full-blown technology revolution she has previously described. 

One possibility, she says, is that AI initiates a new “major system” – AI and robotics – within an ongoing information and communication technology revolution.  

At this point, it seems plausible to imagine we’re early in the installation stage of AI (particularly generative AI), where there’s all sorts of exuberance, and an extraordinary amount of investing and startup activity. At the same time, we’re frenetically struggling to get our heads around this technology and figure out how to most effectively (and responsibly) use it.

This is normal. 

Technology, as I wrote in 2019, “rarely arrives on the scene fully formed—more often it is rough-hewn and finicky, offering attractive but elusive potential.”

As Bessen has pointed out, “invention is not implementation,” and it can take decades to work out how best to use something novel. “Major new technologies typically go through long periods of sequential innovation,” Bessen observes, adding, “Often the person who originally conceived a general invention idea is forgotten.”

The complex process associated with figuring out how to best utilize a new technology may account, at least in part, for what’s been termed the “productivity paradox” – the frequent failure of a new technology to impart significant productivity improvement. We think of this frequently in the context of digital technology; economist Robert Solow wryly observed in a 1987 New York Times book review that “You can see the computer age everywhere but in the productivity statistics.”

However, as Paul A. David, an economic historian at Stanford, noted in his classic 1990 paper, “The Dynamo and the Computer,” a remarkably similar gap was present a hundred years earlier, in the history of electrification. David writes that at the dawn of the 20th century, two decades after the invention of the incandescent light bulb (1879) and the installation of Edison central generating stations in New York and London (1881), there was very little economic productivity gain to show for it.

David goes on to demonstrate that the simple substitution of electric power for steam power in existing factories didn’t really improve productivity very much. Rather, it was the long subsequent process of iterative reimagination of factories, enabled by electricity, that allowed the potential of this emerging technology to be fully expressed.  

A similar point is made by Northwestern economic historian Robert Gordon in his 2016 treatise The Rise and Fall of American Growth. Describing the evolution of innovation in transportation, Gordon observes that “most of the benefits to individuals came not within a decade of the initial innovation, but over subsequent decades as subsidiary and complementary sub-inventions and incremental improvements became manifest.”

As Bessen documents in Learning by Doing (2015), using examples ranging from the power loom (where efficiency improved by a factor of twenty), to petroleum refinement, to the generation of energy from coal, remarkable improvements occurred during the often-lengthy process of implementation, as motivated users figured out how to do things better — “learning by doing.”

Eric von Hippel, professor, MIT Sloan School of Management

Many of these improvements (as I’ve noted) are driven by what Massachusetts Institute of Technology professor Eric von Hippel calls “field discovery,” involving frontline innovators motivated by a specific, practical problem they’re trying to solve.

Such innovative users—the sort of people who Judah Folkman had labeled “inquisitive physicians”—play a critical role in discovering and refining new products, including in medicine; a 2006 study led by von Hippel of new (off-label) applications for approved new molecular entities revealed that nearly 60% were originally discovered by practicing clinicians.

***

What does this history of innovation mean for the emerging technology of the moment, generative AI? 

First, we should take a deep breath, and recognize that we are in the earliest days of this technology’s evolution, and nobody knows how it’s going to play out. Not the experts developing it, not the critics bemoaning it, not the consultants trying to sell work by playing off our collective anxiety around it.

Second, we should acknowledge that the full benefits of the technology will take some time to appear. Expectations of immediate gains in productivity by simply plugging in AI seem naïve, the dynamo-for-steam substitution all over again. While there are clearly some immediate uses for generative AI, the more substantial benefits will likely require continued evolution of both technology and workflow processes.

Innovation image created on DALL-E.

Third, it’s unlikely that AI will replace most workers, but it will require many of us to change how we get our jobs done – an exciting opportunity for some, an unwelcome obligation for others.  AI will also create new categories of work, and introduce new challenges for governance, ethics, regulation, and privacy.

Fourth, and perhaps most importantly: As mind-blowing as generative AI is, the technology is not magic. It doesn’t descend from the heavens (or Silicon Valley), deus ex machina, with the ability to resolve sticky ethical challenges, untangle complex biological problems, and generally ease the woes of humanity. 

But while technology isn’t a magic answer, it has proved a historically valuable tool, driving profound improvements in the human condition, and enabling tremendous advances in science. The invention of the microscope, the telescope, and calculus all allowed us to better understand nature, and to develop more impactful solutions.

Technology changes the world in utterly unexpected and unpredictable ways. 

How exciting to live in this moment, and to have the opportunity — and responsibility — to observe and shape the evolution of a remarkable technology like generative AI. 

Yes, there are skeptics. I have a number of friends and colleagues who have decided to sit this one out, reflexively dismissing the technology because they’ve heard it hallucinates (it does), or because of privacy concerns (a real worry), or because they’re turned off by the relentless hype (I agree!).

But I would suggest that we owe it to ourselves to engage with this technology, and to familiarize ourselves, through practice, with its capabilities and limitations.

We can be the lead users that generative AI — like all powerful but immature transformative technologies — requires to evolve from promise to practice.

14 Jun 2023

Immunotherapies for Cancer and More: Aaron Ring on The Long Run

Aaron Ring is today’s guest on The Long Run.

Aaron is an associate professor of immunobiology at Yale University for a little while longer. He’s moving his lab to the Fred Hutchinson Cancer Center in Seattle in the summer of 2023.

Aaron Ring, Fred Hutchinson Cancer Center; founder, Simcha Therapeutics, Seranova Bio, Stipple Bio

Early in his scientific career, Aaron has done some fascinating work in protein engineering and immunology. He has founded three startup companies to translate the research from his lab – Simcha Therapeutics, Seranova Bio, and Stipple Bio. Simcha is working on an engineered form of IL-18 for the treatment of cancer, while Seranova Bio is using technology to identify auto-antibodies that might point the way to new approaches to treat people with autoimmune diseases, cancer, and perhaps neurological diseases.

Timmerman Report subscribers can go back and read a startup profile I did of Simcha back in January 2022 to get the gist. The engineered IL-18 has shown comparable monotherapy efficacy in animals to PD-1 inhibitors, and it has been able to raise the bar in combination with those standard cancer therapies. SR One led a $40 million Series B financing of the company in 2022, and was joined by BVF Partners, Samsara BioCapital, Rock Springs Capital, ArrowMark Partners, and Logos Capital among others. Foresite Capital and A16Z have backed Aaron’s other ventures.

In this conversation we talked about how Aaron developed his interest in science, how he thinks about which problems to go after, and using the new tools of biology and the data they throw off to develop better therapies.

Now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories.  Scientist.com helps you outsource everything but the genius! 

Save time and money and focus on what really matters: your groundbreaking ideas.

Learn more at:

Scientist.com/LongRun

Now, please join me and Aaron Ring on The Long Run.

4 Jun 2023

Pharma R&D Execs Offer Extravagant Expectations for AI But Few Proof Points

David Shaywitz

As the excitement around generative AI sweeps across the globe, biopharma R&D groups (like most everyone else) are actively trying to figure out how to leverage this powerful but nascent technology effectively, and in a responsible fashion.

In separate conversations, two prominent pharma R&D executives recently sat down with savvy healthtech VCs to discuss how generative AI specifically, and emerging digital technologies more generally, are poised to transform the ways new medicines are discovered, developed, and delivered.

The word “poised” is doing quite a lot of work in the sentence above. Both conversations seamlessly and rather expertly blend what’s actually been accomplished (a little bit) with the vision of what might be achieved (everything and then some).

The first conversation, from the a16z “Bio Eats World” podcast, features Greg Meyers, EVP and Chief Digital and Technology Officer of Bristol Myers Squibb (BMS), and a16z Bio+Health General Partner Dr. Jorge Conde.  The second discussion, from the BIOS community, features Dr. Frank Nestle, Global Head of Research and CSO, Sanofi, and Flagship Pioneering General Partner and Founder and CEO of Valo Health, Dr. David Berry.  (Readers may recall our discussion of a previous BIOS-hosted interview with Dr. Nestle, here.)

Greg Meyers, chief digital and technology officer, Bristol Myers Squibb

Rather than review each conversation individually, I thought it would be more useful to discuss common themes emerging from the pair of discussions.

Theme 1: How Pharma R&D organizations are meaningfully using AI today

AI has started to contribute meaningfully to the design of small molecules in the early stages of drug development.  “A few years ago,” Meyers says, BMS started “to incorporate machine learning to try to predict whether or not a certain chemical profile would have the bioreactivity you’re hoping.”  He says this worked so well (producing a “huge spike” in hit rate) that they’ve been trying to scale this up.
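
To make the technique concrete, here is a minimal, hypothetical sketch of the general workflow Meyers is describing – train a model on past screening results, then rank untested compounds so the most promising are assayed first. It uses the open-source RDKit and scikit-learn libraries with toy data; it is not a description of BMS’s actual system.

```python
# Illustrative only: the generic "predict activity, then rank compounds"
# workflow, with toy molecules -- not BMS's actual models or data.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list):
    """Convert SMILES strings into 2048-bit Morgan fingerprints."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        fps.append(list(fp))
    return fps

# Hypothetical training set: compounds with known assay outcomes.
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]
train_labels = [0, 0, 1]  # 1 = active in the assay, 0 = inactive

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(featurize(train_smiles), train_labels)

# Score untested compounds and send the top-ranked ones to the assay --
# the mechanism behind the "huge spike" in hit rate Meyers mentions.
candidates = ["CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
hit_probability = model.predict_proba(featurize(candidates))[:, 1]
```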

Meyers also says BMS researchers “are currently using AI pretty heavily in our protein degrader program,” noting “it’s been very helpful” in enabling the team to sort through different types of designs.

Nestle also highlights the role of AI in developing novel small molecules. “AI-empowered models” are contributing to the design of molecules, he says, and are starting to “shift the cycle times” for the industry.

Frank Nestle, chief scientific officer, Sanofi

AI is also now contributing to the development of both digital and molecular biomarkers. For example, Meyers described the use of AI to analyze a routine 12-lead ECG to identify patients who might have undiagnosed hypertrophic cardiomyopathy. (Readers may recall a very similar approach used by Janssen to diagnose pulmonary arterial hypertension; see here.)
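
For readers curious what such a model might look like in code, here is a heavily simplified, hypothetical sketch in PyTorch: a small 1-D convolutional network mapping a 12-lead waveform to a probability of the target condition. The architecture and dimensions are placeholders of my own choosing, not details of the BMS or Janssen systems.

```python
# A heavily simplified, hypothetical architecture -- not the actual
# BMS or Janssen model. A small 1-D CNN reads a 12-lead waveform and
# outputs the probability that the target condition is present.
import torch
import torch.nn as nn

class ECGScreen(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, 1)  # one logit: condition present?

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 12 leads, time samples), e.g. 10 s at 500 Hz
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.head(z))

model = ECGScreen()
batch = torch.randn(4, 12, 5000)  # four synthetic ECGs, for shape-checking
risk = model(batch)               # probabilities, shape (4, 1)
```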

Nestle offered an example from digital pathology. He described a collaboration with the healthtech company Owkin, whose AI technology, he says, can help analyze microscope slides of classically stained tissue samples.

Depending on your perspective, these use cases are either pretty slim pickings or an incredibly promising start. 

I’ve not included what seemed to me to be still-exploratory efforts involving two long-standing industry aspirations:

  • Integrating multiple data sources to improve target selection for drug development;
  • Integrating multiple data sources to improve patient selection for clinical trials.

We’ll return to these important but elusive ambitions later, in our discussion of “the magic vat.”

I’ve also not included examples of generative AI, because I didn’t hear much in the way of specifics here, probably because it’s still such early days.  There was clearly excitement around the concept that, as Meyers put it, “proteins are a lot like the human language,” and hence, large language models might be gainfully applied to this domain.

Theme 2: Grand Vision

The aspirations for AI in biopharma R&D were as expansive as the established proof points were sparse. The lofty idea seems to be that with enough data points and computation, it will eventually be possible to create viable new medicines entirely in silico. VC David Berry described an “aspiration to truly make drug discovery and development programmable from end to end.” Nestle wondered about developing an effective antibody drug “virtually,” suggesting it may be possible in the future. Also possible, he suggests: “the ability to approve a safe and effective drug in a certain indication, without running a single clinical trial.”

Both Nestle and Meyers cited the same estimate – 10^60 – as the size of “chemical space,” the number of different drug-like molecular structures that are theoretically possible. It’s a staggering number, far more than the number of stars in the observable universe, and likely far beyond our ability to meaningfully comprehend. The point both executives were making is that if we want to explore this space productively, we’re going to get a lot further using sophisticated computation than relying on the traditional approaches of intuition and trial and error.
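
To make the scale concrete, here’s a quick back-of-the-envelope calculation, with round numbers I’ve assumed purely for illustration: even at an absurdly generous screening rate, exhaustive enumeration is hopeless.

```python
# Rough, order-of-magnitude figures of my own choosing -- the point is
# the scale, not the precision.
CHEMICAL_SPACE = 10**60      # commonly cited estimate of drug-like space
RATE_PER_SECOND = 10**18     # wildly optimistic: a quintillion molecules/sec
AGE_OF_UNIVERSE_S = 4.4e17   # ~13.8 billion years, in seconds

seconds_needed = CHEMICAL_SPACE / RATE_PER_SECOND   # 1e42 seconds
universe_ages = seconds_needed / AGE_OF_UNIVERSE_S  # ~2.3e24
print(f"Exhaustive search: ~{universe_ages:.1e} ages of the universe")
```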

The underlying aspiration here strikes a familiar chord for those of us who remember some of the more extravagant expectations driving the Human Genome Project. For instance, South African biologist Sydney Brenner reportedly claimed that if he had “a complete sequence of DNA of an organism and a large enough computer” then he “could compute the organism.” While the sequencing of the genome contributed enormously to biomedical science, our understanding of the human organism remains woefully incomplete, and largely uncomputed. It’s easy to imagine that our hubris – and our overconfidence in our ability to domesticate scientific research, as Taleb and I argued in 2008 – may again be deceiving us.

Theme 3: Learning Drug Development Organization

For years, healthcare organizations have strived towards the goal of establishing a “learning health system” (LHS), where knowledge from each patient is routinely captured and systematically leveraged to improve the care of future patients. As I have discussed in detail (see here), the LHS is an entity that appears to exist only as an ideal within the pages of academic journals, rather than embodied in the physical world.

Many pharma organizations (as I’ve discussed previously) aspire towards a similar vision, and seek to make better use of all the data they generate. As Meyers puts it, you “want to make sure that you never run the same experiment twice,” and you want to capture and make effective use of the digital “exhaust” from experiments, in part by ensuring it can be interpreted by computers.

Berry emphasized that a goal of the Flagship company Valo (where he now also serves as CEO) is to “use data and computation to unify how… data is used across all of the steps [of drug development], how data is shared across the steps.” Such integration, Berry argues, “will increase probability of success, will help us reduce time, will help reduce cost.”

The problem – as I’ve discussed, and as Berry points out – is that “drug discovery and development has historically been a highly siloed industry. And the challenge is it’s created data silos and operational silos.”

The question, more generally, is how to unlock the purported value associated with, as Nestle puts it, the “incredible treasure chest of data” that “large pharmaceutical companies…sit on.”

Historically, pharma data has been collected with a single, high-value use in mind. The data are generally not organized, identified, and architected for re-use. Moreover, as Nestle emphasizes, the incentives within pharma companies (the so-called key performance indicators, or “KPIs”) are “not necessarily in the foundational space, and that[’s] not where typically the resourcing goes.” In other words, what companies value and track are performance measures like speed of trial recruitment; no one is really evaluating data fluidity, and unless you can directly tie data fluidity to a traditional performance measure, it will struggle to be prioritized.

In contrast, companies like Valo, other Flagship companies like Moderna, and some but not all emerging biopharma companies are constructed (or reconstructed – e.g., Valo includes components of both Numerate and Forma Therapeutics, as well as TARA Biosystems) with the explicit intention of avoiding data silos. This concept, foundational to Amazon in the context of the often-cited 2002 “Bezos Memo,” was discussed here.

Pharmas, by contrast, have entrenched silos; historically, data were collected to meet the specific needs of a particular functional group, responsible for a specific step in the drug development process. Access to these data (as I recently discussed) tends to be tightly controlled.

Data-focused biotech startups tend to look at big pharma’s traditional approach to data and see profound opportunities for disruption.  Meanwhile, pharmas tend to look at these data-oriented startups and say, “Sure, that sounds great.  Now what have you got to show for all your investment in this?” 

The result is a standoff of sorts, where pharmas try to retrofit their approach to data yet are typically hampered by the organizational and cultural silos that have very little interest in facilitating data access.  Meanwhile, data biotech startups are working towards a far more fluid approach to data, yet have produced little tangible and compelling evidence to date that they are more effective, or are likely to be more effective, at delivering high impact medicines to patients. 

Theme 4: Partnerships and External Innovation

Both BMS and Sanofi are exploring emerging technologies through investments and partnerships with a number of healthtech startups, even as both emphasize that they are also building internal capabilities. 

“We have over 200 partnerships,” Meyers notes, “including several equity positions with other companies that really come from the in silico, pure-play sort of business.  And we’ve learned a ton from them.”

Similarly, Nestle (again – see here) emphasized key partnerships, including the Owkin relationship and digital biomarker work with MIT Professor Dina Katabi.

Meanwhile, Pfizer recently announced an open innovation competition to source generative AI solutions to a particular company need: creating clinical study reports.

In addition to these examples, I’ve become increasingly aware of a number of other AI-related projects attributed to pharma companies that, upon closer inspection, turn out to represent discrete engagements with external partners or vendors who reportedly are leveraging AI.

Theme 5: Advice for Innovators

One of the most important lessons from both discussions was the challenge for aspiring innovators and startups.

Berry, for example, explained why it’s so difficult for AI approaches to gain traction.  “If I want to prove, statistically, that AI or an AI component is doing a better job, how many Phase Two clinical readouts does one actually need to believe it on a statistical basis?  If you’re a small company and you want to do it one by one, it’s going to take a few generations.  That’s not going to work.”

On the other hand, he suggested “there are portions of the drug discovery and development cascade where we’re starting to see insights that are actionable, that are tangible, and the timelines of them and the cost points of them are so quickly becoming transformative that it opens up the potential for AI to have a real impact.”

Meyers, for his part, offered exceptionally relevant advice for AI startups pitching to pharma (in fact, the final section of the episode should be required listening for all biotech AI founders). 

Among the problems Meyers highlights – familiar to readers of this column – is the need “for companies that are focused on solving a real-world problem,” rather than solutions in search of a problem. He also emphasized that “this is an industry that will not adopt something unless it is really 10x better than the way things are historically done.”

This presents a real barrier to the sort of incremental change that may be hard to appreciate in the near term but can deliver appreciable value over time. Even “slight improvements” in translational predictive models, as we recently learned from Jack Scannell, can deliver outsized impact, significantly elevating the probability of success while reducing the many burdens of failure.

Meyers also reminded listeners of the challenges of finding product-market fit because healthcare “is the only industry where the consumer, the customer, and the payor are all different people and they don’t always have incentives that are aligned.”  (See here.)

David Berry, CEO, Valo Health

On a more optimistic note, Berry noted that one of the most important competitive advantages a founder has is recognizing that “a problem is solvable, because that turns out to be one of the most powerful pieces of information.”  For Berry, the emergence of AI means that “we can start seeing at much larger scales problems that are solvable that we didn’t previously know to be solvable.”  Moreover, he argues, once we realize a problem is solvable, we’re more likely to apply ourselves to this challenge.

Paths Forward

In thinking about how to most effectively leverage AI, and digital and data more generally, in R&D, I’m left with two thoughts which are somewhat in tension.

“Pockets of Reducibility”

The first borrows (or bastardizes) a phrase from the brilliant Stephen Wolfram: look for pockets of reducibility.  In other words – focus your technology not on fixing all of drug development, but on addressing a specific, important problem that you can meaningfully impact. 

For instance, I was speaking earlier this week with one of the world’s experts on data standards.  I asked him how generative AI as “universal translator” (to use Peter Lee’s term) might obviate the need for standards.  While the expert agreed conceptually, his immediate focus was on figuring out how to pragmatically apply generative AI tools like GPT-4 to standard generation so that it could be done more efficiently, potentially with people validating the output rather than generating it.
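
As a thought experiment, here is roughly what that human-in-the-loop pattern could look like: a model drafts the mapping from a raw dataset field to a standard term, and a person approves or corrects it before anything is adopted. This is my own hypothetical sketch, assuming the OpenAI Python client; the prompt, the model choice, and the draft_standard_mapping helper are all invented for illustration.

```python
# A hypothetical sketch of "machine drafts, human validates" for data
# standards work. The prompt, model choice, and draft_standard_mapping
# helper are invented for illustration; only the OpenAI client API is real.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_standard_mapping(raw_field: str, example_values: list[str]) -> str:
    """Ask the model to propose a standard term; a human approves or edits it."""
    prompt = (
        "Map this raw clinical dataset field to the most appropriate "
        "CDISC SDTM variable name. Reply with the variable name only.\n"
        f"Field: {raw_field}\nExample values: {example_values}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Nothing is adopted automatically: the draft goes into a review queue
# for a human data-standards expert to approve, edit, or reject.
draft = draft_standard_mapping("sbp_mmhg", ["118", "126", "142"])
print(f"Proposed mapping (pending human review): {draft}")
```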

On the one hand, you might argue this is disappointingly incremental.  On the other hand, it’s implementable immediately, and seems likely to have a tangible impact. 

(In my own work, I am spending much of my time focused on identifying and enabling such tangible opportunities within R&D.)

The Magic Vat

There’s another part of me, of course, that both admires and deeply resonates with the integrated approach that companies like Valo are taking: the idea and aspiration that if, from the outset, you deliberately collect and organize your data in a thoughtful way, you can generate novel insights that cross functional silos (just as Berry says).  These insights, in principle, have the potential to accelerate discovery, translation (a critical need that this column has frequently discussed, and that Conde appropriately emphasized), and clinical development.

Magic Vat. Image by DALL-E.

Integrating diverse data to drive insights has captivated me for decades; it’s a topic I’ve discussed in a 2009 Nature Reviews Drug Discovery paper I wrote with Eric Schadt and Stephen Friend. The value of integrating phenotypic data with genetic data was also a key tenet I brought to my DNAnexus Chief Medical Officer role, and a lens through which I evaluated companies when I subsequently served as a corporate VC.

Consequently, I am passionately rooting for Berry at Valo – and for Daphne Koller’s insitro and Chris Gibson’s Recursion.  I’m rooting for Pathos, a company founded by Tempus that’s focused on “integrating data into every step of the process and thereby creating a self-learning and self-correcting therapeutics engine,” and that has recruited Schadt to be the Chief Science Officer.   I’m also rooting for Aviv Regev at Genentech, and I am excited by her integrative approach to early R&D.

Daphne Koller, founder and CEO, insitro

But throughout my career, I’ve also seen just how challenging it can be to move from attractive integrative ambition to meaningful drugs.  I’ve seen so many variations of the “magic vat,” where all available scientific data are poured in, a dusting of charmed analysis powder is added (network theory, the latest AI, etc), the mixture is stirred, and then – presto! – insights appear. 

Or, more typically, not.  But (we’re invariably told) these insights would arrive (are poised to arrive) if only there was more funding/more samples/just one more category of ‘omics data, etc. — all real examples by the way. 

It’s possible that this time will be the charm – we’ve been told, after all, that generative AI “changes everything” — but you can also understand the skepticism.

Chris Gibson, co-founder and CEO, Recursion

My sense is that legacy pharmas are likely to remain resistant to changing their siloed approach to data until they see compelling evidence that data integration approaches, if not 10x better, at least offer meaningful and measurable improvement. In my own work, I’m intensively seeking to identify and catalyze transformative opportunities for cross-silo integration of scientific data across at least some domains, since effective translation absolutely requires it.

For now, big pharmas are likely to remain largely empires of silos – and will continue to do the step-by-step siloed work comprising drug development at a global scale better than anyone. Technology, including AI, may help to improve the efficiency of specific steps (e.g., protocol drafting, an example Meyers cites). Technology may also improve the efficiency of sequential data handoffs, critical for drug development, and help track operational performance, providing invaluable information to managers, as discussed here.

But foundationally integrating scientific knowledge across organizational silos? Unless a data management organization already deeply embedded within many pharmas – perhaps a company like Veeva or Medidata – enables it, routine integration of scientific knowledge across long-established silos seems unlikely in the near to medium term. It may take a visionary, persistent and determined startup (Valo? Pathos?) to persuasively capture the value that may be there.

Bottom Line:

Biopharma companies are keenly interested in leveraging generative AI, and digital and data technologies more generally, in R&D. To date, meaningful implementations of AI in large pharmas seem relatively limited, and largely focused on small molecule design and biomarker analysis (such as identifying potential patients through routine ECGs). Nevertheless, the ambitions for AI in R&D seem enormous, perhaps even fanciful, envisioning virtual drug development and even in silico regulatory approvals. More immediately, pharmas aspire to make more complete use of the data they collect but are likely to continue to struggle with long-established functional silos. External partnerships provide access to emerging technologies, but it can be difficult for healthtech startups to find a permanent foothold with large pharmas. Technology focused on alleviating important, specific problems – “pockets of reducibility” – seems most likely to find traction in the near term. Ambitious founders continue to pursue the vision of more complete data integration.

30 May 2023

From Structural Biology to Structuring Companies: Deb Palestrant on The Long Run

Today’s guest on The Long Run is Deb Palestrant.

Deb is a partner with 5AM Ventures and the executive chair of the 4:59 Initiative. 5AM invests in early-stage startups working on a variety of novel biological targets and some of the emerging new treatment modalities – gene therapy, gene editing, oligonucleotides. As the name suggests, it’s not afraid to get involved in companies in very early days, when they are high-risk/high-reward propositions.

Deb Palestrant, partner, 5AM Ventures; executive chair, 4:59

Deb comes to this venture work with a deep scientific background, and significant hands-on operating experience. She got her PhD in structural biology at Columbia University, and made the move to industry at the Novartis Institutes for BioMedical Research in the mid-2000s. She found her way into the Boston biotech startup world in the 2010s, and was a part of building a series of ambitious companies – Blueprint Medicines, Editas Medicine, and Relay Therapeutics included.

We talk in this episode about Deb’s career journey, about how she and her partners think about creating companies, and what areas of opportunity she sees in science and medicine.

And now for a word from the sponsor of The Long Run.

Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high-performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with Occam at:

www.occam-global.com/longrun

Now, please join me and Deb Palestrant on The Long Run.

21 May 2023

Big, If True: Opportunities and Obstacles Facing AI (Plus: Summer Reading)

David Shaywitz

Today, we’ll begin with a consideration of the promise for AI some experts see in healthcare and biopharma.

Next, we’ll look at some of the obstacles – some technical, some organizational – and re-visit the eternal “data parasite” debate.

Finally, we’ll conclude with a few suggestions for summer reading.

The AI Opportunity: Elevating Healthcare for All

Earlier this month, I moderated a conversation about AI and healthcare (video here, transcript here) at Harvard’s historic Countway Library of Medicine, in a room just down the hall from a display of Phineas Gage’s skull and the tamping iron that pierced it on September 13, 1848, famously altering his behavior but sparing his life. The episode soon became part of neurology history and lore.

With less overt drama, but addressing a topic of perhaps even greater biomedical importance, the panelists – Harvard’s Dr. Zak Kohane, Microsoft’s Peter Lee, and journalist Carey Goldberg, all co-authors of the recently published The AI Revolution in Medicine: GPT-4 and Beyond (discussed here) – addressed their subject.

A key opportunity for AI in health that Kohane emphasized was the chance to elevate care across the board by improving consistency. He told the story of a friend whose spouse was dealing with a series of difficult health issues.

Innovation image created on DALL-E.

Kohane said his friend described “how delightful it was to have a doctor who really understood what was going on, who understood the plan. The light was on.”

However, Kohane continued, the friend would then “go talk to another doctor and another doctor, and the light was not on. And there was huge unevenness.”

The story, Kohane reflected, “reminds me of my own intuition just from experiencing medical training and medical care, which is there are huge variations. There are some brilliant doctors. But there are some also non-brilliant doctors and some doctors who might have been brilliant but then are harried, squished by the forces that propel modern medicine.”

Kohane says he saw ChatGPT as a potential response to physician inconsistency. For Kohane, generative AI represented a disruptive force that “was going to happen, whether or not medicine and the medical establishment were going to pick up the torch.” Why? Because “patients were going to use it.”

Goldberg, too, recognized the opportunities for patients, and spoke to the urgent need she felt to access the technology:

“Okay, we get it. It has inaccuracies, it hallucinates. Just give it to me. Like, I just want it. I just want to be able to use it for my own queries, my own medically related queries. And I think that what I came away from working on this book with was an understanding of just the incredible usefulness that this can have for patients.”

Goldberg also shared a story of a nurse who suffered from hand pain and was evaluated by a series of specialists who were unable to identify the cause. Desperate, the nurse typed her symptoms into ChatGPT and learned that one of her medications could be causing the pain. When the medication was changed, the pain resolved.

Kohane sees the ready availability of a savvy second opinion as a tremendous resource for physicians. When he was training, he said, the physicians used to convene after clinic and review all the patients. “Invariably,” he notes, “we changed the management” of a handful “because of what someone else said. That went away. There’s no time for it.”

Innovation image created on DALL-E.

The lack of review represents a real loss, Kohane points out, because “even the best doctors will not remember everything all the time.” Kohane says he is convinced that generative AI will restore this capability and enable it to serve a co-pilot function, providing real-time assistance to busy providers.

Another opportunity to make physicians’ lives better, the panelists suggested, was in the area of paperwork and documentation, such as the dreaded pre-authorization letters, often required to beseech payors for reimbursement. 

Since Lee contributed an entire chapter about the impact on paperwork reduction in healthcare, I asked him whether we’re just going to see AIs battling with each other: provider AIs writing pre-authorization letters, and payor AIs writing justifications for rejection.

Lee responded that this was very similar to a scenario Bill Gates has mentioned, where an email starts as three bullet points you want to share, GPT-4 translates this into a well-composed email, and GPT-4 at the other end reduces it back to three bullet points for the reader.
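
To make the scenario concrete, here is a minimal sketch of that round trip, assuming the OpenAI Python client and an API key in the environment; the prompts, model name, and bullet points are my own illustrative placeholders, not anything from the book or the panel.

```python
# A minimal sketch of the bullet-points -> email -> bullet-points round trip.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set
# in the environment; prompts and content are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, text: str) -> str:
    """Send one instruction plus text to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

bullets = "- budget approved\n- launch moved to Q3\n- need headcount plan by Friday"

# Sender's side: expand terse bullets into a polished email.
email = ask("Turn these bullet points into a courteous, well-composed email.", bullets)

# Recipient's side: compress the email back down to bullet points.
summary = ask("Summarize this email as three terse bullet points.", email)
print(summary)
```

Multiply that round trip by every pre-authorization letter and every rejection justification, and you get the arms race I was asking about.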

I told Lee this reminded me of Peter Thiel’s famous quote: “We wanted flying cars, instead we got 140 characters.” Surely, I asked, generative AI must offer healthcare something more profound than more efficient paperwork? 

In response, Lee highlighted the opportunities associated with the ability to better connect and learn from data – perhaps bringing us closer, at long last, to fulfilling the elusive promise of a “learning healthcare system” (see here). In particular, Lee pointed to the potential of AI serving as a “universal translator of healthcare information,” allowing for the near-effortless extraction and exchange of information.
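
As a thought experiment, the “universal translator” idea might look something like this in code – a hedged sketch, again assuming the OpenAI Python client; the clinical note, field schema, and prompt are all hypothetical, not drawn from Lee’s remarks.

```python
# A hedged sketch of the "universal translator" idea: asking a general-purpose
# model to pull structured fields out of a free-text clinical note. The note,
# schema, and prompt are hypothetical; a real system would need validation,
# de-identification, and human review.
import json
from openai import OpenAI

client = OpenAI()

note = ("Pt is a 58yo F w/ T2DM on metformin 1000 mg BID, "
        "presenting with 3 weeks of bilateral hand pain.")

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "Extract the fields age, sex, conditions, medications, and "
            "chief_complaint from the clinical note. Reply with JSON only.")},
        {"role": "user", "content": note},
    ],
)

record = json.loads(resp.choices[0].message.content)  # may raise if the model adds prose
print(record["chief_complaint"])
```

Whether this kind of extraction is reliable enough for clinical use is, of course, exactly the open question.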

For more perspectives on how AI could benefit healthcare and the life sciences, I’d recommend:

  1. A recent Atlantic piece by Matteo Wong emphasized the opportunities for leveraging multi-modal data – a topic Eric Topol and colleagues have consistently highlighted.
  2. This short Nature Biotechnology essay, by Jean-Philippe Vert, a data science expert who now leads R&D at the health data company Owkin.  Vert describes four ways AI may impact drug discovery.  Of particular interest: his suggestion that generative AI might provide a “framework for the seamless integration of heterogeneous data and concepts.”  However, he acknowledges that “How exactly to implement this idea and how effective it will be largely remain open research questions.”
  3. This recent Nature Communications paper from M.I.T. and Takeda (disclosure: I work at Takeda, but wasn’t involved in this collaboration), demonstrating an application of AI in manufacturing. This operational area seems especially amenable to AI-driven improvements, in part because of the richness and completeness of data capture (see also here).
Pesky Obstacles to AI Implementation

The inconvenient truth is that while generative AI and other emerging technologies have captivated us with their promise, we’re still figuring out how to use them.

Innovation image created with DALL-E.

Even user-friendly applications like ChatGPT and GPT-4-enabled Bing are not always plug-and-play. For example, in preparation for an upcoming workshop I’m leading for a particular corporate function, highlighting the capabilities of GPT-4, I tried out some of the team’s most elementary use cases with Bing-GPT. The results were disappointing and included a number of basic mistakes. Often, Bing-GPT seemed to perform worse than Bing or Google search alone. The results seemed unlikely to inspire corporate colleagues to urgently adopt the technology.

These challenges are hardly limited to GPT-4 or Bing. From the perspective of a drug development organization, technology issues seem to surface in every area of digital and data. Far more often than not, the hype and promise touted by eager startups seem at odds with the capabilities these nascent companies can demonstrably deliver. In fairness, the difficulty many legacy biopharma companies have in figuring out how to work in new ways with these healthtech startups probably also contributes to the challenge.

To understand the issues better, let’s consider one example from outside biopharma, recently discussed by University of North Carolina Vice Chair and Professor of Medicine Spencer Dorn. His focus: the adoption of AI in radiology.

Dorn notes that while AI expert Geoffrey Hinton predicted in 2016 that AI would obviate the need for radiologists within five years, this hasn’t happened. In fact, Dorn says, only a third of radiologists use AI at all, “usually for just a tiny fraction of their work.” 

Dorn cites several reasons for AI’s limited adoption in clinical radiology:

  • Inconsistent performance of AI in real-world settings, compared to test data;
  • AI “may be very good at specific tasks (e.g. identifying certain lesions)…but not most others”;
  • “Embedding AI into diagnostic imaging workflows requires time, effort, and money,” and basically, the juice doesn’t seem to be worth the squeeze.

Dorn warns that generative AI “in healthcare will need to overcome these same hurdles. Plus, several more.”

Similar issues apply to the adoption, for high-stakes use cases, of a range of emerging technologies, including digital pathology, decentralized trials, and “the nightmare” of digital biomarkers – challenges this column has frequently discussed.

Innovation image created on DALL-E.

But remarkably, technology problems are probably not the most difficult issue for healthtech innovators to solve. Technology tends to improve dramatically over time (think about the camera on your smartphone). No, the most difficult sticking point may well be organizational behavior. Essentially, we are seeing the return of the eternal, dreaded “Data Parasite” debate (as I discussed in 2016 in a three-part series in Forbes, starting here).

In most large organizations, both academic and corporate (I am aware of few exceptions), there is a constant battle between those who effectively own the data and those who want to analyze the data. In theory, of course, and depending upon the situation, the data belong to patients, the organization, or taxpayers, or some combination of the three. Researchers, meanwhile, are just “stewards” or “trustees” of the data. Yet in practice, someone always seems to control and zealously guard access to any given data set within an organization.

Typically, those who “own” the data (whether an academic clinical investigator or a pharma clinical development team) are using the data to pursue a defined, high-value objective. Others who want access to these data tend to have more exploratory tasks in mind. In theory, there’s a huge amount of value to be gained by enabling data exploration. In practice, once again, that value is often difficult to demonstrate, and exploration is often viewed as offering little upside – and a fair amount of perceived downside risk, as well as gratuitous aggravation – to the data “owners.” Much of this perceived risk relates to the concern that sloppy or ill-informed analyses will generate, essentially, “false positive” concerns, as I allude to here.

I’ve seen very few examples where the data “analyzers” have sufficient leverage to win here.  In general, the data “owners” tend to hire data scientists of their own and say “let us know what you want to know, and we’ll have our people run the analysis for you.” This has the effect of slowing down integrative exploratory analyses to a trickle, particularly given the degree of pre-specification the data “owners” tend to require.

If you are a data owner, you probably view this as an encouraging result, since analyses are only done by people who ostensibly have a feel for how the data were generated and understand the context and the limitations. As discussed in a previous column, “data empathy” is vitally important. 

But if you are a data analyzer not working directly with a data “owner,” you are constantly frustrated by the near-impossibility of obtaining access to data you’d like to explore. Perhaps most strikingly, many researchers who fiercely defend their own data from external analyses are often fiercely critical of others for not sharing data those same researchers hope to explore. As Rufus Miles famously observed, “where you stand depends on where you sit.”

Innovation image created on DALL-E.

Of course, it’s possible that technology could help ease sharing. Even so, it’s really difficult to envision the tight hold on data changing so long as so much power in organizations rests with those who control the data. Perhaps, as Lakhani and others suggest, this can be addressed by new companies that have a fundamentally different view of data (Amazon – driven by the “Bezos Mandate” – is the canonical example) and can readily monetize data fluidity. Alternatively, the demonstrated utility of exploratory integrated analyses across multiple data silos and “owners” in legacy organizations could facilitate more consistent access.

For now, in both academia and biopharma, virtuous stated preferences to the contrary, this revealed tension remains very much alive.

Briefly Noted Summer Reading

A must-read for all biotechies, For Blood and Money, by MarketWatch’s Nathan Vardi, tells the captivating story of two cancer medicines targeting the kinase BTK: ibrutinib and acalabrutinib. A decade ago, for Forbes, I wrote about the beginning of the ibrutinib story.

It was thrilling to read Vardi’s account of the medicine’s complete journey – and the journey of its competitor, acalabrutinib (which, fun fact, originated at the same company in the Netherlands that discovered the product that became the blockbuster Keytruda – see here). As Jerome Groopman’s thoughtful review in the New York Review of Books suggests, Vardi’s book also raises difficult questions about the role of luck vs. skill in drug development, as well as the role of capital vs. labor, since the investors appeared to make out far better than the scientists who did the lion’s share of the work. This pithy review by Adrian Woolfson, in Science, also provides a good summary.

Less essential, but fascinating for readers who recall the rise of companies like Gawker and BuzzFeed, is Traffic, by Ben Smith. He describes how emerging media companies – and the young men and women who contributed the content – desperately chased reader traffic, with important consequences for both them and society. See here for an excellent review of the book by the Bulwark’s Sonny Bunch.

Also intriguing, if a bit uneven: Beyond Measure, a book about the history of measurement, written by James Vincent, Senior Reporter at The Verge. See here for a thoughtful review of Vincent’s book by Jennifer Szalai in The New York Times.

Finally, a few recommended posts. On the concerning side, this piece about the devolution of clinical medicine captures what I seem to be hearing from nearly every single physician I know.  Even doctors who were once so excited about taking care of patients now seem abjectly miserable, trapped in a system that has reduced them to widgets. (See also here, here, here.)

On the innovation front, several comments about the wildly popular GLP-1 medicines tirzepatide and semaglutide caught my eye (see also my last piece, here). On the one hand, the development of these powerful and promising medicines was clearly, as Dr. Michael Albert of Accomplish Health suggests, the result of deliberate, meticulous effort, both by companies like Lilly and Novo Nordisk and by pioneering academics like physician-scientist Daniel Drucker (who also maintains this authoritative website on the evolving science). On the other hand, it’s interesting that, as Sarah Zhang writes in The Atlantic, these medicines may have entirely unanticipated applications in the management of addictions and compulsions.

Bottom Line

Generative AI offers the possibility of elevating the quality of healthcare patients receive. However, the implementation of AI and other digital technologies may be impaired both by the growing pains of nascent technology and, more significantly, by the territoriality of those who control access to data silos within large organizations (although this territoriality may also ensure that the data are analyzed by those with a greater feel for the context in which they were generated). Finally, For Blood and Money, by Nathan Vardi, Traffic, by Ben Smith, and Beyond Measure, by James Vincent, are all good additions to your summer reading list.