14 Mar 2023

The Future of Neuroscience Drug R&D: Ryan Watts on The Long Run

Today’s guest on The Long Run is Ryan Watts.

Ryan is the co-founder and CEO of South San Francisco-based Denali Therapeutics.

Ryan Watts, co-founder and CEO, Denali Therapeutics

Denali is one of the prominent development-stage biotech companies working on treatments for neurodegenerative diseases. It has a pipeline with seven drug candidates in clinical development. It’s developing small molecules and large molecules against a range of neurodegenerative diseases that includes rare diseases such as Hunter Syndrome and ALS, as well as more common maladies such as Alzheimer’s disease and Parkinson’s.

Ryan is a scientist by training. He did his PhD at Stanford University and spent the first part of his career running labs at Genentech. He joined with former Genentech colleagues Alex Schuth and Marc Tessier-Lavigne to co-found Denali in 2015. The company secured a Series A financing of $217 million – which was big then, and is still big now. The company doesn’t yet have any products on the market, but it has amassed $1.34 billion in cash as of the end of 2022, and has established a broad base of support for its R&D through partnerships with Sanofi, Biogen, and Takeda Pharmaceuticals.

This is a wide-ranging conversation that includes Ryan’s path into biotech and neuroscience, some of the classic challenges of the field, and reasons why he’s optimistic that significant progress is coming to neuroscience R&D.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters, your groundbreaking ideas.

Learn more at:

Scientist.com/LongRun

Now, please join me and Ryan Watts on The Long Run.

 

6 Mar 2023

Remote/Hybrid Work is Here to Stay. Biotech Should Embrace It

Chris Garabedian, chairman and CEO, Xontogeny

There has been much debate about the biotech workplace in the aftermath of pandemic disruptions.

Employers and employees are all thinking about how and to what extent companies should enable and support remote-based work. People are discussing the advantages and disadvantages of remote work, especially in terms of the effect on productivity and creativity. 

This matters in biotech. The goal of the industry is to discover, develop and manufacture products for patients. Two of those three activities — wet-lab discovery and manufacturing — require a specialized physical space. Much of this work has been outsourced and takes place far from headquarters. Many biotech office workers rarely, if ever, set foot in the wet labs or manufacturing facilities, even if those functions are in the same building.

The nostalgia for pre-Covid office culture must die. Some biotech leaders, including John Maraganore (TR, Jan. 24, 2023), argue that having management and employees working together in person is essential to a company’s ability to be productive, creative, and competitive.

I disagree. This is a notion that should fade away.

The Fourth Industrial Revolution has arrived. It blurs the physical and digital worlds with ever smarter, more connected technologies that allow us to communicate intimately with an expanding network of people, however geographically distant. Those clinging to work practices established in the Third Industrial Revolution are swimming against the current of an inevitable change in corporate culture and talent management. Companies that embrace this change will have a sustainable competitive advantage.

It is important to take a long view of how technology has profoundly changed our behaviors in how we live, learn and work over the last 200 years, especially over the past 50 years.

No one questions that we are designed to be social creatures and we long for interaction with others from the moment we leave our mother’s womb. Throughout most of history, we have established strong relationships with our immediate and extended families and forged friendships and work relationships with people nearby. 

Transportation and Communications

The last two centuries brought technological change to transportation and communications. These developments changed the nature of commerce, how and where we worked, and our personal and professional identities and lifestyles. 

Although the Egyptians and Mesopotamians used sea vessels thousands of years ago, most people relied on horses and camels to connect with others for socialization and commerce. It was not until the 19th and 20th centuries that we saw the invention of the steam locomotive (1812), the first automobile (1886) and the first airplane (1903), which became available to the masses, at least in wealthier developed countries, decades later.

While the invention of the Gutenberg printing press in the 15th Century is credited as one of the greatest inventions in history, it was not until the last 175 years that we saw the widespread proliferation of interpersonal communication tools. It started with the telegraph (1844) and telephone (1876), and continued with the mobile telephone (1973). These communication tools enabled us to reach and interact with others who lived great distances away. 

The more profound impact on our socialization, culture, identity and sense of community came with the advent of radio and television. Although these were one-way, passive forms of communication, these media allowed us to become familiar with people who were not in close physical contact with us.

This passive form of communications technology has exploded into an interactive lollapalooza over the last 30 years with the internet, social media, VOIP, texting and messaging apps. When Covid hit, it brought another change, forcing everyone in business to use Zoom and similar video conference platforms. 

We discovered some things. It was now possible to have a productive meeting, with internal colleagues or external collaborators, that went well beyond the audio-only conference calls of the past.

Before COVID, the arguments for remote-based and hybrid work over traditional office culture were largely theoretical. But now we have run the experiment.

Consider the following advantages:

  • Flexibility and Improved Work-Life Balance: Remote work allows a more flexible schedule, letting employees better balance their work and personal lives. Work-life conflict can be minimized, or eliminated, when employees are empowered;
  • Increased Productivity: Studies have shown that remote workers are often more productive because of fewer distractions and a quieter work environment. Employees can also structure their day and work environment based on what’s best for them;
  • Cost Savings: Remote work eliminates the need for commuting, saving time and money on transportation. It also reduces office space and other overhead expenses;
  • Access to a wider pool of talent: Companies can hire the best employees, regardless of where they live. It may also provide an easier path to achieve a more diverse workforce;
  • Improved mental and physical health: Remote work can reduce stress from the daily commute. It can allow more time for exercise that improves physical health;
  • Improved morale and satisfaction: Remote workers have reported higher levels of job satisfaction and morale;
  • Environmentally friendly: Remote work can reduce carbon emissions by reducing commuting.

While no one is suggesting the choice between old office culture and remote-based work is black and white, as each has its pros and cons, I believe the balance is more favorably weighted toward decentralizing work that can be done anywhere.

Let’s Stop (or considerably slow) the Travel

My job requires leading investments across dozens of companies, serving on Boards of Directors, and having a presence at conferences and industry events.

It might sound shocking, but it is no exaggeration to say that the move to video meetings allowed me to be 2 to 3 times more productive than in the pre-Covid era. 

In 2018 and 2019, I spent over 200 nights in hotels. That translated into countless hours on trains and planes (often with spotty or unworkable WiFi) and the often unproductive time Uber-ing to the airport, waiting at the gate, Uber-ing to a hotel, and waiting in line to check in. At the end of this typical slog, I’d realize I wasted an entire day, often to attend a 90-minute in-person meeting or to be part of a 60-minute panel at a conference.

While I often would be able to stay on top of my emails or dial into a few critical conference calls, it would have been easier to manage if I were consistently in front of a computer, on video, with no concerns about a good WiFi signal. 

On prolonged trips in the pre-Covid era, it was not uncommon for me to have no, or limited, meaningful interactions with my employees. Since Zoom became a mainstay, I now have more routine daily interactions with my employees. I have experienced more team engagement, not less.

Several years before Covid lockdowns forced the new way of working, I founded my company, Xontogeny, with a simple concept: good science and entrepreneurs were found in institutions and geographies all over the country – not just in the top biotech hubs of Boston/Cambridge and the San Francisco Bay Area. Our industry needed a better way to assist these companies by providing operational and strategic support remotely. 

The model has worked. We have successfully supported more than a dozen seed investments, which often require weekly interactions. Almost none of those meetings take place in person. Our seed investment companies are located in Philadelphia, Chicago, San Diego, and Research Triangle Park, NC, and our first collaboration was with a company in Blacksburg, Virginia. The Xontogeny team covers even more territory through investments out of our Perceptive Xontogeny Venture Funds (an investment vehicle of Perceptive Advisors).

As investors, we keep tabs on our companies largely through quarterly board meetings. With over 20 investments, simply participating in board meetings translates to a big time commitment. If we were required to attend four board meetings per year in-person for every company, it would be almost impossible because of the required travel. 

In-person board meetings can be especially inefficient. For example, to justify flying 6 to 8 board members from various distances, the meeting agendas are often extended to 6 to 7 hours (e.g., 8:00am-2:00pm). Many directors have to depart early to catch their Uber back to the airport to get home so they are not forced to take a red-eye back to the East Coast. 

Contrast this with the experience of dialing into a Zoom link for 3 to 4 hours. It’s now possible to fit two or three board meetings into a day. In the last three years, no board meeting I’ve attended virtually has needed more than four hours. More often than not, they end early, without anyone ever feeling there was insufficient time to cover the necessary topics. This increased efficiency frees up time for me to do other valuable things, like meet with employees.

Requests for in-person attendance at conferences are back in full swing, but I already long for the Covid era, when I was able to attend multiple conferences within a couple of weeks, or even in the same week.

At one Netherlands-focused virtual conference, I was able to engage with dozens of scientists and entrepreneurs. The following week, I attended a similar virtual conference focused on Copenhagen. Each of these conferences took up less than a couple of hours on my calendar, but led to numerous follow-ups on email, many pitch decks shared, and subsequent one-on-one Zoom meetings.

The results were just as good, if not better, than if I had spent a lot of extra time and money to attend in person. The same could be said for the JP Morgan Healthcare Conference. I stopped attending in 2020, and have found ways to be just as productive, if not more so, by working in the virtual office.

Office Culture is Overrated and Outdated

Biotech has historically taken root in geographic hubs because they have an abundance of drug development talent. San Francisco, Boston, New York/New Jersey have been traditional leaders, thanks to their many excellent academic institutions. But it’s also true that many outstanding scientific entrepreneurs are scattered across the United States (and throughout the world). An increasing pool of experienced industry talent prefers to live outside the main biotech hubs, and many can’t be enticed to move back. 

Employers, myself included, have had to choose between losing star employees or adopting a more flexible remote-based model. I choose to keep star employees by offering a more flexible, remote workplace. Young employees are especially interested in flexible, remote work offerings. Any company that wants to recruit and retain this group of workers should be paying close attention.

The argument for how the younger generation will suffer in their careers if they don’t experience the same in-office facetime with their managers or CEOs strikes me as empty. They are getting more facetime, virtually, and more opportunities to contribute, display their creativity and convey the fruits of their work to managers. In this new virtual, geographically-agnostic biotech community, I’ve met more people, and established more meaningful relationships, than during any three-year period of my career.

Biotech Can Benefit

These last 18 months have been a challenging time for the vast majority of early-stage biotech companies. Management teams are finding it difficult to close financings, and are being asked to shelve programs and focus on one or two core activities to conserve cash. Layoffs are a weekly occurrence.

Our industry is constantly under pressure to be smarter about R&D productivity and careful with investor dollars. Cutting back on unnecessary leases, and embracing the remote workplace, is one way to be a good steward of capital and extend the company runway.

Of course, there are still good reasons to maintain some physical space, even for the office workers. There will remain employees who prefer to go into the office, and management teams that prefer office culture and will demand all employees report to the office 3 or 4 days per week. I don’t begrudge those that choose to offer this alternative working environment. There are many workers who prefer to get more of their social needs and personal identity through daily in-person work interactions. Many of us have formed our closest friendships, met romantic partners or found our spouses through the workplace.

In speaking to many colleagues, it seems that for every individual who prefers an in-person office setting, there are as many or more who prefer to spend more time at home with their partner or spouse, to see their children more often, or to visit aging parents. Some prefer to use their extra time to indulge their desire to travel. All this can be done while being a productive employee with just as much opportunity to impress the boss, take on new assignments, and advance one’s career.

It is time to embrace the future of remote work or ‘very’ flexible hybrid models as we embark on the Fourth Industrial Revolution. We can work together more efficiently, productively and, yes, creatively.

These new communication tools bring us all closer together. Rather than go back to an old way of working, let’s put more energy into determining how to optimize this new hybrid/remote model so we can get better at our fundamental work – discovery, development and manufacturing of new products for patients.

If all goes well, we’ll see benefits extend far beyond what we’ve seen in these last three years.

28 Feb 2023

A Life in Biotech & the Cell Therapy Wave: David Hallal on The Long Run

Today’s guest on The Long Run is David Hallal.

David is the CEO of Waltham, Mass.-based ElevateBio.

David Hallal, chairman and CEO, ElevateBio

ElevateBio describes itself as a technology-driven company for cell therapies. It has pulled together gene editing tools, induced pluripotent stem cells, and various viral vectors necessary to modify cells to fight cancer or treat other diseases.

David co-founded ElevateBio in 2017 with Mitchell Finer, the president of R&D, and Vikas Sinha, the chief financial officer. They saw a big bang moment in cell therapy, as hundreds of companies were being formed around the time of FDA approval of CD19-directed CAR-T therapies for cancer from Novartis and Kite Pharma. They saw many of these companies weren’t fully formed, and had a piece of technology here or there, but not the whole toolkit. Many of these companies were going to struggle to raise the cash needed to invest in needed facilities, and they were likely to need help from partners to refine their processes if they were ever going to do complex manufacturing at scale.

ElevateBio raised $150 million in a Series A financing in May 2019. It has used the money, and more that came later, to invest a lot in facilities and people with know-how to run them. The business is something of a hybrid animal. It uses its technology, people and facilities to make cell therapies under contract for other companies. You could call that traditional contract manufacturing. But this isn’t exactly a ho-hum service provider with relatively flat profit margins. It seeks to further leverage its technology and entrepreneurial people by investing in companies with upside potential, such as AlloVir, Abata Therapeutics, Life Edit Therapeutics and a startup from the lab of George Daley, the prominent stem cell researcher at Boston Children’s Hospital and Dean of Harvard Medical School.

David came to this moment – the beginning of a cell and gene therapy wave – after a long career in more traditional biotech. He was CEO of Alexion Pharmaceuticals, the rare disease company that was eventually acquired by AstraZeneca. He came up on the commercial side of the business, including key early career stops at Amgen, Biogen and OSI Eyetech.

This episode was recorded in person at the JP Morgan Healthcare Conference in San Francisco. Biotech history buffs will especially enjoy the first half, where he talks about what pharmaceutical sales was like and what it was like to work at Amgen in the early days.

Now, please join me and David Hallal on The Long Run.

26 Feb 2023

The Success of Your Tech Deployment Depends On A Role You’ve Probably Never Heard Of 

David Shaywitz

The success or failure of many technology platforms — including in particular health tech platforms — rests with a largely obscure role of outsized importance: the “solutions engineer.” 

The role itself goes by many names. Back when I was at DNAnexus in the mid-2010s, this role was called “Solutions Scientist.” Others call it “Forward-Deployed Engineer” or “Embedded Analyst.”

Whatever the title, the solutions engineer (SE) is a technology or data expert who embeds within an organization that’s trying to figure out how to make a new technology platform work. The SE functions as an on-site super tech-support specialist who helps the customer use the technology and get as much value from it as possible. While the SE doesn’t need to reside in the customer’s organization, the SE must inhabit the customer’s challenges and workflow — and having a desk (real or virtual) within the customer’s team can help.

Richard Daly, CEO, DNAnexus

“In a high velocity health information technology environment such as we are in,” says DNAnexus CEO Richard Daly, the SE is the “key person” for both selling and service, because “they get inside the customer’s skin, co-own the problem, and fit the solution to the customer need.”

At the same time – and arguably, this is the most critical aspect – the SE also develops a richly nuanced understanding of the needs of the customer and is able to provide this intelligence back to the product manager and the rest of the engineering team, so that the platform can be evolved to more effectively meet the needs (and future needs) of the customer, and presumably other future customers.

In serving as a “key transmission point” for both customer and tech company, Daly notes, the SE “improves product development and fit-to-market.”

Moreover, Daly adds, since “customers in the health tech market are scaling,” which introduces its own set of challenges, a well-functioning team of SE’s can “draw on industry and other customer experiences, in a way syndicating industry-wide knowledge, and ensuring continuing fit as the customer evolves.”

SEs come in different flavors, says Dr. Amy Abernethy, President of Product Development and Chief Medical Officer of Verily. 

“The type of SE depends on the type of tech itself. Sometimes the tech is purely software, in which case you need someone fluent in the intersection of software development and customer needs. Sometimes the tech developer is supporting mostly data products, in which case the SE looks more like an embedded analyst who has data empathy, an understanding of customer analytic needs, and a sense of the possible from a data perspective. In pharma, the tech product is becoming increasingly more a combo of software and data, and the SE needs to be able to blend both of these skills.”

Your Initial Solution May Not Be Your Customer’s Exact Problem

A lot of the thinking here comes back to the advice offered by Lean Startup author and entrepreneurship guru Steve Blank, who emphasized the need for the technology developers to spend as much time as possible with actual customers, to ensure that the needs the developers have prioritized are actually the same needs customers face. 

In this context, SE roles provide the opportunity for essential fine tuning; after all, if a tech platform wasn’t in the ballpark, the SE would never have been given the opportunity to engage with a customer in the first place, and certainly wouldn’t be in the privileged position to go work on a customer’s team. 

The most successful SEs exhibit an authentic interest in the customer’s business – they are innately, intensely curious about the work the customer is doing, and the problems the customer is trying to solve. A high-EQ SE can become an enormously valuable member of the customer’s team, a critical resource that enables the team to function better and achieve more.

At the same time, it is absolutely critical that the SE is not regarded simply as an implementation specialist, someone who gets a team up and running on a new system. Rather, the most significant opportunity the SE affords is to provide an inquisitive and adaptable technology team with a sense of what is and isn’t working well, and how the technology could more effectively meet customer needs. If the technology team isn’t thirsty for and responsive to this feedback, a valuable opportunity is lost.

Significant Challenge And Opportunity for Biopharmas

The need for SEs applies not only to technology companies who are developing platforms, but also, in biopharmas, to internal technology teams as well. Indeed, because of the abiding, largely unbridged differences between the seemingly immiscible cultures of life-science trained drug developers, and engineering-oriented technologists, effective communication and shared understanding can be a challenge, as I’ve discussed here and here.

Amy Abernethy, president of product development and chief medical officer, Verily

In this context, Dr. Abernethy points out, SE’s can “help with ‘lingua franca,’ translating terms between customers and developers.”

Adds Dr. Abernethy, “This task is also becoming more important within the tech and pharma companies themselves as we see a confluence of actions/capabilities across teams within the companies. Building lingua franca will be a key accelerant across the industry and the SE can help.”

In large biopharmas, it’s common for technology teams, after a highly structured, well-intentioned needs-gathering exercise, to put their heads down and set about developing and deploying a technology solution.

Afterwards, there’s often little rejoicing. Tech teams tend to grumble about how their brilliant technology is not being efficiently utilized, inevitably calling for more “change management,” while customers routinely complain that their most important needs are (still) not being met. It’s not uncommon to hear technology teams push for mandates that compel customers to use the new technology – an effort that’s received about as well as you might expect. What’s worse, this pattern seems to repeat all the time – it feels like the rule, not the exception.

In many cases, a critical missing link is an SE role – a person with “amphipathic” qualities embedded inside the customer teams to help apply and troubleshoot the technology, and (critically!) to provide ongoing granular feedback to the tech teams about how the tech could evolve to meet customer needs more effectively.

Success critically requires both curiosity and a commitment to continuous iteration on the part of tech teams. These teams must want to deeply understand customer challenges, and ideally begin to viscerally understand what the customers are trying to do, the problems the customers are trying to solve. At least as importantly, the tech teams must also see the relationship with their customers as a constant, ongoing dialog. This can be particularly challenging in biopharma, where technology teams are often more accustomed to obtaining and then building towards a set of fixed specifications.

There’s another complicating factor: many customers — especially within biopharmas — may not be all that clear on what they initially want from technology, or understand what might (or might not) be possible. This understanding can evolve and sharpen over time, particularly when catalyzed by a skilled SE, and enabled by a tech organization that’s driven to constantly refine the product on offer.

An Evolving Role

As technology plays a more central role in healthcare and biopharma, the role of the SE is likely to evolve accordingly. As Dr. Abernethy observes, “The technologies that are being developed are more and more informed by our understanding of underlying biology and/or data/implementation elements from clinical care. I wouldn’t be surprised if the phenotype of the solutions engineer of the future has a bit more science or clinical knowledge built in, and the job descriptions will similarly become more sophisticated.”

Perhaps reflecting her previous experience as Deputy Commissioner of the FDA, Dr. Abernethy adds:

“The SE may also need to generally understand the implications and requirements of many different regulatory paradigms. We see this right now, because sometimes the SE looks like a person who understands computational biology and software engineering, sometimes looks like a person who understands clinical and EHR data coupled with the needs of outcomes researchers delivering a dataset for a regulatory filing, and sometimes looks like a person who understands when a product crosses over the line to being regulated clinical decision support.” 

Phrased differently, the SE role reflects and embodies the need for constant dialog and evolutionary refinement between those developing digital, data, and technology offerings, and those who hope to leverage these powerful but still unformed or unfinished capabilities.

Bottom Line

A key gap in technology deployment, particularly in biopharmas, is the space between what technology teams develop and what customers actually want and need. A skilled solutions engineer (SE) can bridge this gap and serve as a bidirectional translator, helping customers more effectively utilize the technology, and guiding technology teams to create improved solutions. Success requires not only a skilled SE, but also a curious and adaptable technology team driven to elicit, understand, and respond to customer needs – even (especially) when these needs are difficult for the customer to articulate.

14 Feb 2023

Engineered B-Cell Therapies for Cancer & Rare Diseases: Joanne Smith-Farrell on The Long Run

Today’s guest on The Long Run is Joanne Smith-Farrell.

Joanne is the CEO of Cambridge, Mass.-based Be Biopharma.

Joanne Smith-Farrell, CEO, Be Biopharma

Many listeners of this show are familiar with the explosion of activity in cell therapy. Engineered T cell therapies have delivered extraordinary results for people with certain types of cancer. The success of these personalized T cell therapies, which are modified outside the body and re-infused, has inspired all kinds of academic and industrial work on engineering other cell types as cancer fighters, such as NK cells. Many others are seeking ways to make off-the-shelf, or so-called allogeneic, cell therapies that can be administered to patients much more cheaply and easily in clinics around the world.

What you don’t hear as much about is engineered B-cell therapies. This other arm of the adaptive immune system has been challenging for scientists to work with. This is the work Be Biopharma is setting out to do. It seeks to create engineered B cell therapies for cancer and rare diseases that can be given off-the-shelf to any patient, dosed repeatedly over time, without the toxic preconditioning regimens required by today’s cell therapies.

Joanne came to lead this startup in 2021 from Bluebird Bio, where she was chief operating officer and head of the company’s oncology business unit.

Joanne’s passion for biopharmaceutical R&D shines through in this conversation. She has a personal story here that reveals a lot about her outlook on life.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters, your groundbreaking ideas.

Learn more at:

Scientist.com/LongRun

9 Feb 2023

First, We Need to Generate the Right Data. Then AI Will Shine

Alice Zhang, CEO, Verge Genomics

ChatGPT is a hot topic across many industries. Some say the technology underpinning it – called generative AI – has created an “A.I. arms race.” However, relatively little attention is given to what is needed to fully leverage the promise of generative AI in healthcare, and specifically how it may help accelerate drug discovery and development.

That’s a mistake.

Recently, David Shaywitz offered a thoughtful opinion on why he sees generative AI as a profound technology with implications across the entire value chain. We agree with many of David’s views but want to offer additional perspective.

Victor Hanson-Smith, head of computational biology, Verge Genomics

In short, our belief is that AI will identify better targets, thus reducing clinical failures in drug development and leading to new medicines. Generative AI will play a role. However, the fundamental challenge in making better medicines a reality comes down to closing the massive data gaps that remain in drug development today.

And when it comes to the most complex diseases that still lack meaningful medicines, where the data comes from is essential. Today, the source of data that powers generative AI has substantial gaps. Over the long term, generative AI will enable the creation of meaningful medicines, but it will not offer a panacea for what ails all of drug discovery.

A Primer on M.L. Classification and Generative AI

To start, it’s necessary to have a grounding in machine learning (ML) classification. As the name implies, ML classification predicts whether things are or are not in a class.

Email spam filters are a great example. They ask, “Is this spam, or is it not?”

They work because they’ve been “trained” on thousands of previous data points (i.e., emails and the text within). Generative AI, by contrast, uses a class of algorithms that includes autoencoders, among other approaches, to generate new data that look like the input training data. It’s why a tool like ChatGPT is great at writing a birthday card. There are thousands, maybe even millions, of examples of birthday cards that it can pull from.
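
To make the distinction concrete, here is a minimal, hypothetical sketch of the spam-filter style of ML classification described above: train on labeled examples, then predict the class of new ones. The toy emails, labels, and model choice are invented for illustration; a real filter would learn from thousands of messages.

```python
# Minimal sketch of ML classification, spam-filter style.
# The tiny dataset below is a hypothetical stand-in for the
# thousands of labeled emails a production filter trains on.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "WIN a FREE prize now, click here",
    "Limited offer: cheap meds, act fast",
    "Agenda for tomorrow's lab meeting",
    "Draft manuscript attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Bag-of-words features + naive Bayes: a classic baseline for
# the "is this spam, or is it not?" question.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here to claim your free prize"]))    # likely [1]
print(model.predict(["Notes from the project review meeting"]))  # likely [0]
```

Generative AI inverts this setup: instead of mapping examples to labels, it learns the distribution of the training data well enough to produce new examples that resemble it.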

There are limitations though.

Ask ChatGPT to summarize a novel that was published this week, and it will give you the wrong answer, or maybe no answer. That’s because the book isn’t yet in the training data.

What does this have to do with drug discovery?

The above example illustrates a foundational point in drug discovery: input data – especially its provenance and quality – is essential for training models. Input data is the biggest bottleneck in drug development, especially for complex diseases where few or no therapies exist. Our worldview is that the sophistication of the AI/ML approach is irrelevant if the training data underpinning it is insufficient in the first place.

So, what kind of input biological data does generative AI need? It depends on the task. For optimizing chemical structures, generative AI mainly relies on vast databases of publicly available protein structures and sequences. This is powerful. We expect generative AI will have a massive impact on small molecule drug design when there is already a target in mind, a known mechanism of action, and the goal is to optimize the structure of a chemical. The wealth of available protein structure and chemistry data means a model can be well trained to craft an optimized small molecule candidate.

But a different problem – finding new therapeutic drug targets – requires different types of input data. This includes genomic, transcriptomic, and epigenomic sequence data from human tissue. What happens when this type of training data is unavailable? That’s what we’re solving for at Verge. We first fill a fundamental gap by generating the right kind of training data, and then use ML classification to ask and answer the question, “Is this a good target or a bad target?”
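
The target-triage question posed here (“good target or bad target?”) is, at bottom, the same binary-classification pattern as the spam filter above. What follows is a minimal, hypothetical sketch: the synthetic feature matrix stands in for per-gene, omics-derived features, and nothing in it reflects Verge’s actual models or data.

```python
# Hypothetical sketch: "is this a good target or a bad target?"
# framed as binary classification. X stands in for per-gene features
# derived from omics data (e.g., expression change in patient tissue,
# strength of genetic association); y stands in for known labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genes, n_features = 500, 12

X = rng.normal(size=(n_genes, n_features))  # per-gene feature vectors
# Synthetic labels driven by the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_genes) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# Rank unlabeled candidate genes by predicted probability of being
# a "good" target -- the shortlist a triage pipeline would hand off.
clf.fit(X, y)
candidates = rng.normal(size=(5, n_features))
print(clf.predict_proba(candidates)[:, 1])
```

The point stands regardless of the classifier chosen: its output is only as good as the human-tissue training data behind X and y.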

Building a bridge from genetic drivers to disease symptoms

Take amyotrophic lateral sclerosis (ALS) as an example. At least 56 genes drive the development of ALS. Looking at one of those genes in isolation will tell you something about certain people with ALS, but nothing about the shared mechanisms that impact ALS in all patients. This genetic association data, or GWAS data, alone is insufficient to find treatments that are widely applicable to broad ALS populations. That theme repeats itself for other complex diseases we’ve evaluated, including neurodegeneration, neuropsychiatry, and peripheral inflammation.

The existing drug therapies for ALS treat symptoms of the disease, rather than the underlying causes. It is likely that if a generative AI approach were applied to ALS, it could predict more symptom-modifying treatments, but it would fail to identify fundamentally new disease-modifying treatments. Although AI can be excellent at pattern-matching to create additional examples of a thing, AI can struggle to create the first example of a thing.

This is precisely the problem the field of biotech faces for a wide range of diseases with no effective drug treatments. We don’t know what causes the disease, and haven’t collected the right kind of underlying data to even begin to lead us to the right answers.

Our approach in ALS is to use layers of “Omics” data – sourced from human, not animal, tissue – to fill gaps in available training data. This enables us to discover molecular mechanisms that cause ALS. When these human omics data form the input for a training set, the output is insight into disease-modifying therapies for what we believe will be a wide range of ALS patients. Using this approach, we build a bridge from diverse genetic drivers to shared disease symptoms; from genotype to phenotype. For Verge, this approach has been pivotal in identifying a new target for ALS and starting clinical trials with a small molecule drug candidate against that target in just 4.5 years.

Back to the Value Chain

AI could affect the entire biopharmaceutical and healthcare value chains, but studies like this one have shown that “a striking contrast” has run through R&D in the last 60 years. The authors write that while “huge scientific and technological gains” should have improved R&D efficiency, “inflation-adjusted industrial R&D costs per novel drug increased nearly 100-fold between 1950 and 2010.” Worse, “drugs are more likely to fail in clinical development today than in the 1970s.”

AI today is being used to test more drugs faster, but it hasn’t fundamentally changed the probability of success. The biggest driver of rising R&D costs is the cost of failure. While using AI to optimize design is appealing, it won’t mean much until it can better predict the effectiveness of targets or drugs in humans. Today’s disease models (cells and animal models) are not great predictors of whether drugs work, so increases in efficiency in these models just provide larger quantities of poor-quality data. When models are poor, the outcomes will be, too. As the old saying goes — garbage in, garbage out.

Concluding Thoughts

No single type of training data will solve the complexities of discovering and developing new medicines. It will take multiple data types. But a relentless focus on finding the best types of data for the scientific problem, and generating lots of that data in a high-quality manner, will be what truly paves the way for AI to fulfill its potential in drug discovery.

4 Feb 2023

Grand Défi Ou Goulot D’étranglement Ultime: A French Pharma Tackles Data Science

David Shaywitz

Most biopharma companies have started down the path of digital transformation – a fundamental overhaul of everything they do for the digital age.

It’s not clear yet that anyone has arrived at the desired destination.

Even so, there have been some early wins, generally related to operations, as the CEOs of both Novartis and Lilly have described. Arguably, the most significant R&D success has been the organizational alignment and focus afforded by accurate, up-to-date digital dashboards, reflecting, for example, the status of the COVID clinical trials that Pfizer was running, as discussed here.

Behind the scenes, many biopharma R&D organizations have been exceptionally busy trying to apply emerging digital and data technologies to improve every aspect of how impactful new medicines are discovered, developed, and delivered. This strategic focus – and my current job description – is an industry preoccupation.

R&D represents such a vast opportunity space for emerging digital and data technologies that it can be difficult to keep track of all the activity across this expansive frontier. But a recent podcast delivers. A January 2023 episode of BIOS, hosted by Chas Pulido and Chris Ghadban of Alix Ventures and Brian Fiske of Mythic Therapeutics, features Sanofi’s CSO and head of research, Frank Nestle. He provides a comprehensive, fairly representative introduction to the many ways biopharmas are approaching digital and data. He also shares insights into key underlying challenges (spoiler alert: data wrangling).

Frank Nestle, chief scientific officer, Sanofi

Nestle is a physician-scientist and immunologist by training; he has been with Sanofi since 2016, and in his current role for about two years.

Below, I discuss the key points Nestle makes about digital and data across R&D, and then offer additional perspective on these opportunities and challenges.

Vision: The Great Convergence

Nestle envisions that we’re heading towards a “great convergence between life sciences, engineering, and data science.” He adds that the “classical scientific foundations of physics, chemistry, and biology” have each “had their heyday.” Now, he says, “it’s data sciences and A.I.”

AI/ML Impact on Research: Optimizing the Assembly Line

Early drug development, Nestle argues, can be understood as an assembly line, where a new molecule is designed and then serially optimized. At each stage, “we are optimizing drug-like properties, like absorption, biodistribution in the body,” he explains. Historically, decisions along the way were made by people – often with extensive experience — sitting around a table and reviewing the data. Now, Nestle is trying to collect the rich data associated with each step in a more systematic way, so that A.I. can contribute. 

At the moment, Nestle says, the focus is on using data science to optimize each individual step, but he allows that eventually a “grand model” might be possible.

Nestle notes that both the early focus and the early successes involve small molecules. For example, the number of potential molecules that must be synthesized and evaluated in the course of making a potential small molecule drug has been reduced, he says, from 5,000 to “several hundred.” He asserts Sanofi remains interested in applying A.I. to the optimization of biologics. The challenge is “complexity.” Nevertheless, he suggests that using A.I.-based approaches to optimize biologics holds “at least as much promise, if not more promise, than in the small molecule space.”

To accelerate their efforts in the early drug discovery and design space, Nestle says, Sanofi has partnered with a number of tech bio companies including Exscientia, Atomwise, and Insilico Medicine.

Translational Research: Organizing Multimodal Data

The process of understanding the molecular basis of disease, Nestle says, begins with “molecule disease maps.” The best description of these maps that I found was actually from ChatGPT (a technology I recently discussed), which reports that the term:

“refers to a visual representation or diagram of the molecular interactions and processes that are associated with a particular disease. This map can include information about the genes, proteins, and signaling pathways involved in the disease, as well as how these elements interact with each other. The goal of creating a molecular disease map is to gain a better understanding of the underlying causes of a disease and to identify potential targets for therapeutic intervention. By mapping the molecular interactions involved in a disease, researchers can gain insights into the complex biological mechanisms that drive disease progression and develop more effective treatments.”

According to Nestle, “these molecular disease maps are becoming more complex and more rich in data sets by the day.” He continues, “It started with mainly genetic datasets, but now we have expression data sets at every single level from RNA to proteins to metabolites. And we look at these molecular disease maps actually at a single-cell level.” 

The upshot, he says, is that these “provide us with an incredible space of data to interrogate.”

The algorithms used to analyze these data are often simple cluster analyses, Nestle says, but the goal is always to “reduce dimensionality” and find a “signal in these sometimes very noisy datasets.”
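
As a concrete illustration of that workflow, here is a minimal sketch of “reduce dimensionality, then cluster” on a synthetic stand-in for a single-cell expression matrix. Real pipelines add normalization, QC, and more specialized methods; this only shows the pattern Nestle describes, and every number in it is invented.

```python
# Minimal sketch of the "reduce dimensionality, then cluster" pattern
# on a synthetic (cells x genes) expression matrix. All values are
# simulated; real single-cell analysis adds normalization and QC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Fake data: 300 cells drawn from three "cell states", 2,000 noisy genes.
centers = rng.normal(scale=3.0, size=(3, 2000))
X = np.vstack([c + rng.normal(size=(100, 2000)) for c in centers])

# Step 1: reduce dimensionality so signal isn't drowned in noise.
X_reduced = PCA(n_components=10, random_state=0).fit_transform(X)

# Step 2: a simple cluster analysis on the reduced representation.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_reduced)
print(np.bincount(labels))  # roughly 100 cells per recovered cluster
```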

These molecular disease maps not only inform biomarker identification, but also assist with the identification of patient populations who might be particularly well (or poorly) suited for specific medications. 

Also contributing to the translational work, according to Nestle: a partnership with the French-American company Owkin. He cites their expertise in both federated machine learning (focused on clinical data associated with various medical centers) and digital pathology. 

Emerging Trial Technologies and Patient-Centricity

Nestle describes digital biomarkers as “absolutely ready for prime time.” He cites the use of actigraphy (see this helpful explainer from Koneksa) in the assessment of Parkinson’s Disease as a promising example in an area – neurology – where such biomarkers are critically needed because “trials just take too long.” He also mentions an approach under development, pioneered by MIT professor Dina Katabi, that repurposes a typical wireless router to monitor activities such as itching and scratching in some skin conditions.

Sanofi, like all biopharmas, is interested in decentralized trials; Nestle highlights a partnership (initiated pre-pandemic) with the company Science37. He also sees a “clear future not only for (the use of technology) in patient recruitment but also remote patient monitoring.” The adoption of technology to enhance trial recruitment and patient monitoring, he says, has been accelerated, dramatically and irreversibly, by the pandemic.

Nestle also emphasizes the role and importance of real world data (RWD) as a tool to better understand patients and their journeys. Insights from RWD can be used to improve “study feasibility or sample size optimization or endpoint modeling,” he says, and points to a “journey mapper” Sanofi has used to integrate and interpret RWD. This approach helped identify additional indications for Sanofi and Regeneron’s IL-4/IL-13 inhibitor dupilumab (Dupixent). That work has translated into benefits for a broad range of patients — and more revenue for the company.

Finally, Nestle highlights internal work on “integrated platform data solutions.” That sounds like efforts focused on supporting a drug after it’s launched through the provision of enabling technologies connecting patients, physicians, and data.  

Limitations: Analyse Sexy, Données Difficiles

Perhaps Nestle’s most important comments concern the limitations of advancing A.I. and other emerging technologies. And the most difficult challenge – “the ultimate bottleneck,” he asserts – is “the data.”

He continues:

“Right now, data often exists in silos, they’re fraught with missing values – those zeros, as we call them — they’re not labeled correctly, they’re difficult to find. And that’s probably one of the biggest hurdles … that whole effort of generating, aggregating, normalizing, processing the data sometimes outweighs the actual analysis effort.”

He points out that “building foundations” – required for thoughtful data management – “is not necessarily a KPI [key performance indicator]” for large pharmas (who tend to be more focused on near-term measurements of performance). Hence you can only accomplish this, he says, with strong strategic support from “very senior leaders.” (This support may prove both more elusive and more essential in the context of Sanofi’s disappointing 2023 outlook – see here.)

A second problem Nestle points out is the well-intentioned, country-specific regulations governing data protection. These policies tend to be quite fragmented across nations and healthcare systems. That impedes the flow of data, and complicates opportunities to learn from the aggregated experiences of patients. The federated approach used by Owkin represents one way of managing some of these challenges.

Additional Reflections

Nestle offers a comprehensive and generally upbeat assessment of the opportunities before us in the application of emerging digital and data technology to R&D. 

Additional opportunities I’m particularly excited about include imagining what may be possible (more accurately, anticipating what seems likely to be possible) through current and future large language models like GPT-3 and (soon?) GPT-4. One example: effortlessly matching clinical trial protocols to the patient populations already present at certain medical centers. 

There are additional significant challenges that are easy to lose sight of, particularly in the context of such a compelling vision. For instance, I’m impressed by the challenge of timely digital biomarker development and validation. In a regulatory environment where teams struggle to validate electronic administration of well-established paper scales, the challenge of validating wearable parameters is often substantial. Doing this at the same time you’re developing the molecule you hope to use the digital tool to assess is exceptionally, often prohibitively, ambitious. The many levels of complexity, and relatively constricted timelines, can be overwhelming.

On the clinical trial front, the universal embrace of decentralized trials – an obviously patient-centric concept that is endorsed and pursued by most everyone – belies the many challenges in pulling these off at all, to say nothing of doing them efficiently and reliably, and using available technology platforms (PR promises aside). The complexity and expense of actually executing meaningfully decentralized trials (versus, for example, conducting a single check-in remotely and calling it a win) is the elephant in the room (one of them) that isn’t discussed in polite company, but which preoccupies many of us who believe deeply in the concept of decentralized clinical trials and are eager to see the promise fulfilled.

Also not to be underestimated: the challenge of organizing multimodal data on a platform where the data can be accessed and analyzed. I appreciate the value of giving names (like “digital molecular map” or “journey map”) to important problems and significant data missions. Ultimately, though, success depends on the quality of the underlying data and the utility of the platform on which these data are housed and analyzed. This is arguably yet another area where vendor slideware seems far ahead of actual user experience.

Some of the most significant challenges digital and data efforts face within R&D are (remain) organizational. Traditional drug developers – those in the trenches, doing the work – are still trying to figure out what to make of data science and data scientists in what is still a largely traditional drug development world. This challenge is compounded by a massive gap between the hype of technology and contemporary reality. 

On the other hand, when there is compelling, readily implementable technology, it’s enthusiastically adopted. Every structural biology group, for instance, routinely uses AlphaFold, the program that predicts a protein’s folded structure from its underlying amino acid sequence.

An area of real opportunity (and acknowledging my own bias) is translational medicine, where practitioners struggle to parse actionable insights from a motley collection of multi-modal data. (The need is particularly great given that the industry’s most costly problem — see here — is the absence of good translational models.) The concept of integrating diverse data to generate biological and clinical insights is universally celebrated, of course, but the data wrangling challenges are exceptional. Perhaps because of this, translational medicine, arguably, still hasn’t quite lived up to its potential. As a discipline, it offers particular promise to thoughtful data scientists, with profound opportunity for outsized impact. 

Bottom Line

R&D organizations recognize that emerging digital and data technologies represent important, perhaps essential, enabling tools to advance their mission. As Sanofi’s Frank Nestle explains, digital and data technologies are increasingly deployed across a range of R&D activities. So far, our reach far exceeds our grasp, but there’s been real progress, along with a high probability of more success. Our greatest challenge is not the sexy analytics and data visualizations that everyone covets, but rather the far less glamorous work of establishing the underlying data flows upon which everything depends. This is a lesson familiar to accomplished data scientists like Recursion’s Imran Haque and Verily’s Amy Abernethy, among others – see here.

The data challenges are amplified in large biopharmas because these companies:

  • operate internationally, necessitating adherence to a wide array of data restrictions;
  • are the result of decades of mergers and acquisitions, further complicating data management;
  • inevitably involve complex organizational politics that must be navigated, as Stanford’s Jeffrey Pfeffer has compellingly described.

Data science continues to hold extraordinary promise for biopharma; translational medicine represents a particularly compelling opportunity. Our collective challenge is figuring out how to work through the considerable data wrangling challenges and deliver palpable progress.

2 Feb 2023

When Life-Saving Medicines Are Ammunition in a Trade War

Dr. Jingyi Liu, clinical fellow in medicine, Brigham & Women’s Hospital

In early January, I held a telemedicine visit with a patient who reported a positive at-home COVID test and mild shortness of breath.

During the visit, I looked through her health conditions and medications and decided that it was appropriate to prescribe Paxlovid. Later that day, she received the prescription from her local pharmacy, free of charge.

Afterwards, I called my elderly grandparents. They live in Wuhan, China, where the COVID-19 pandemic began. They have been confined to their neighborhood for the last three years.

Shortly after China started to relax its zero-COVID policy in December, my grandparents both were infected with COVID. Between weak breaths, my grandfather described to me over WeChat how he and my grandmother spent hours visiting different pharmacies just to find a bottle of acetaminophen, the over-the-counter pain reliever. He told me that the only way to get a prescription of Paxlovid was on the black market, at a cost of 50,000 RMB, or more than $7,000. My family’s prior attempts to send a care package with personal protective equipment and over-the-counter anti-fever medications had been confiscated at border control.

I work as a doctor at one of the most well-resourced healthcare systems in the United States. I have never had a patient who needed a COVID drug but was unable to obtain it. While I remember what it was like to treat patients before we had COVID vaccines and therapeutics, that memory is fading.

That is, until my loved ones contracted COVID half a world away.

China lifted its zero-COVID policy after civil discontent rocked major cities from Beijing to Shanghai. It’s no surprise that the virus spread like wildfire after the policy was lifted, given China’s average population density is four times that of the United States. For a country that has zero approved mRNA vaccines, only one foreign antiviral (Paxlovid) and little herd immunity, a surge led to more than 900 million cases of COVID over a few weeks, as well as a coffin shortage.

While domestic COVID vaccines are available and the government reports that its COVID vaccination rate is around 90%, it’s widely believed that these domestic vaccines are less efficacious than the foreign-produced mRNA vaccines. In addition to having less efficacious vaccines, patients in China also have fewer treatment options once they’ve been infected with COVID. Paxlovid is approved but its availability is highly controlled. Corroborating my family’s description of the challenges of finding life-saving drugs, WeChat, a Chinese social media platform, exploded with dealers selling questionable supplies of “Paxlovid”.  

At a price of $7,000 per course, Paxlovid was hopelessly inaccessible for ordinary people like my family.

COVID exposed and deepened healthcare inequities not only domestically, but also internationally. If the pandemic has taught us anything, it’s that illness has no borders. Yet, it is also apparent that access to life-saving medicines like COVID vaccines and therapeutics depends in large part on where you live.

If China continues to favor home-grown medications over foreign-produced but more efficacious treatments and vaccines, I am concerned that this will have a negative public health impact in China. Given the current tense state of US-China trade relations, the supply of effective treatments will continue to be controlled in the name of nationalism. Drug approvals will continue to be weaponized in a race to become the next global superpower.

This is the current dynamic in the COVID pandemic, but the same dynamics could easily restrict access to other important medicines.

When life-saving medicines are used as ammunition in a trade war, ordinary people get caught in the crossfire. This is a kind of war that no one wins.

1 Feb 2023

A Biotech Journalism Outlet Built to Last: Rick Berke on The Long Run

Today’s guest on The Long Run is Rick Berke.

Rick is the co-founder and executive editor of STAT.

Rick Berke, co-founder and executive editor, STAT

Just about everyone who listens to The Long Run probably already reads STAT. If you don’t, you should. It’s become a go-to publication for breaking news, features, and in-depth investigative reporting across the world of biotech and healthcare.

John Henry, the billionaire investor and owner of the Boston Red Sox, bankrolled STAT from the beginning in 2015. He had taken an interest in media through his acquisition of Boston Globe Media. Boston clearly had a thriving life sciences and healthcare economy, and he noticed that it wasn’t being covered by media in the kind of breadth and depth it deserved.

There’s a reason for that. The media industry was in crisis. The online business models of the early 21st century were failing. Newspapers were the beating heart of the journalistic enterprise, and they have shed about 60-70 percent of their workforce in the past 20 years. The challenge for STAT, in the online era, was to create not only a quality outlet for independent journalism about life sciences, but to do so with a sustainable business model that could support it.

Rick, a veteran of The New York Times, had just come off a stint at Politico. He was new to life sciences. But he quickly discovered there were a lot of amazing stories to tell. This was a challenge he could sink his teeth into.

Nearly eight years later, STAT has established a reputation for journalistic excellence. Equally important, it has created a sustainable business model. STAT is now in position to hire more journalists, and extend its ambitions into new coverage areas and geographies. Rick deserves a lot of credit for this.

The biotech industry needs quality reporting to help people make good decisions, and to hold people accountable when necessary. This conversation is a rare peek behind the curtain of biotech journalism, a business that many people in biotech seldom think about but will find illuminating.

And now for a word from the sponsor of The Long Run – the BIO CEO & Investor Conference.

Now in its 25th year, the BIO CEO & Investor Conference is a premier event connecting biotech leaders from established and emerging public and private companies with the investor and banking communities.

You can expect limitless networking, on-point sessions crafted by impressive industry experts, polished company presentations, and important connections powered by BIO One-on-One Partnering™.

We look forward to seeing you February 6-9 in New York and virtually.

Register now

31
Jan
2023

Rebounding From a Setback With a Transformative Partnership

Ankit Mahadevia, CEO, Spero Therapeutics

One of the well-known rituals at many companies is the holiday party. It’s a way to unwind after a long and demanding year and get to know our colleagues better.

The January gathering at Spero Therapeutics was extra special. It was our first in-person holiday celebration in three years. We met at a hotel in Cambridge, not far from the office, and thanked employees and their families for their support of our mission to fight hard-to-treat bacterial infections.

The room was decorated in silver. The mood was festive. The team presented an “Up and Comer” award to a star employee and toasted a long-time employee’s seventh anniversary. Some introverted scientists even made it out to the dance floor.

The celebration was extra sweet, given some memories of the venue. This was the same hotel where, last year, our executive team learned from the FDA that the data package for tebipenem, an oral carbapenem for patients with urinary tract infections, might be insufficient to support approval in that review cycle. It’s also the same hotel where the senior team made the difficult but necessary decision to restructure the company to focus on our other medicines. In May, we had to lay off about 110 employees – 75 percent of our staff.

What a difference a few months can make. By September, we had turned things around. We announced a transformative partnership with GSK that puts tebipenem on a path to potential FDA approval. The collaboration’s $66 million upfront payment extends Spero’s runway to advance the rest of our late-stage pipeline beyond 2024, and the deal includes more than $600 million in potential milestone payments.

Our stock was depressed, along with most others in the biotech sector. We had to manage through major changes. Even so, we secured this partnership while working in a therapeutic area that has fallen out of favor with many in large pharma.  

As we think back on the past year, we had to ask ourselves: How did we do it?

This doesn’t happen in antibiotics – but fundamentals and strategic fit matter

The popular view on antibiotics is that any drug that kills a microorganism is cursed from birth. Some high-profile failures several years ago have informed the narrative that antibiotics will never sell, and will never get credit from Pharma or investors. So the initial reaction to our collaboration was surprise – a transformative collaboration for an antibiotic just didn’t square with most preconceived notions.

Readers of this column know that there’s more to the story – the fundamentals around a medicine matter more than broad, category-level pronouncements. Medicines that work on a true unmet need, and that are commercially viable, have driven momentum for the companies that develop them.

The conviction that we and our partners have in tebipenem comes from doing the work. We, and our partners, spoke to clinicians treating UTIs. We engaged in payor research. We did a thorough analysis of the UTI treatment landscape.

New oral, broader-spectrum UTI agents serve a major clinical need. Drug resistance impacts millions of patients. There hasn’t been a new oral therapy for these infections in over 30 years. We heard from patients (even a few on the Spero team and their families) suffering from difficult-to-treat UTIs.

Importantly, we also learned a new oral UTI therapy has commercial potential. It could reduce unnecessary hospitalizations, provide relief to frustrated patients, and enable distribution of therapy outside the hospital (see here for more on why that matters).

Further, there was a strong strategic fit. GSK is developing gepotidacin, a complementary medicine to tebipenem for patients with uncomplicated urinary tract infections.  

Securing the perimeter quickly created room for creative options

Before we could think about a new regulatory path for tebipenem, we needed the time, resources, and credible alternatives necessary to let collaboration opportunities blossom. Time was of the essence – the longer we waited last spring to restructure, the less time we’d have to operate and deliver on key data milestones. We couldn’t afford to delay the inevitable.

This meant pivoting quickly to focus on delivering near-term data on our orphan disease program SPR720, extending our capital runway, and ensuring the core team stayed.    

The most critical task was focusing on the team we needed for this new chapter. We spent some emotional hours last summer honoring the contributions of departing Sperobes, and our efforts to help them find their next roles helped us process the moment and focus on the future.

I will always be grateful to the Sperobes who stuck with us to deliver on the mission in those most uncertain months following our restructuring. Despite having to reorient themselves on the fly, they stayed focused on the mission and delivered.

Leading a biotech is lonely, and even lonelier when things haven’t gone to plan; having a committed team that didn’t quit when it got hard inspired us to keep finding new ways to move forward.  

Long-term relationships, strengthened by transparency

Successful business partnerships are about relationships. The roots of our partnership go back multiple years. GSK has been a longtime investor in Spero. They know what we do. Over time, as we delivered on what we promised on tebipenem, and GSK continued to advance its own infectious disease portfolio, these discussions broadened and deepened. 

Leading up to the September partnership announcement, there was ongoing dialogue at multiple levels of each organization about what could be possible to bring tebipenem to patients. In a world emerging post-pandemic, this went beyond polite Zoom conversations. Several dinners, and a shared love of single malt scotch and world history, paved the way for a shared vision.

Our commitment to transparency deepened the discussions. This isn’t the norm. The tendency following a setback is to close ranks. Instead of doing that, we invited GSK to join us on key regulatory interactions.

They could see for themselves what the FDA was saying, and help us think about what it would take to win an approval for tebipenem.

We built shared conviction on the importance of the program and the viability of the path forward. This conviction served us well when deal discussions hit inevitable sticking points.

Moving forward

Along with the chance to spend time with our colleagues, our holiday party was special because we could make some positive new memories in the venue and focus on the future.

The urgency of this mission hit close to home recently as I helped my mother navigate a couple of months of cycling through ineffective oral antibiotics. It was a reminder of the importance of what we do. We all know patients are waiting.

30
Jan
2023

Generative AI: No Humbug

David Shaywitz

In 1845, dentist Horace Wells stood before Harvard medical students and faculty, eager to demonstrate the utility of nitrous oxide – laughing gas – as a general anesthetic. 

Wells tried it out on a patient who needed a tooth extraction. The dose, it turned out, wasn’t enough. The patient screamed in agony.

As described by Paul Offit in You Bet Your Life (my 2021 WSJ review here), the demonstration elicited “peals of laughter from the audience, some of whom shouted, ‘Humbug!’ Wells left the building in disgrace.”

About a year and a half later, another dentist, William Morton, conducted a similar demonstration, using ether as the anesthetic instead. In front of an audience at an auditorium at the Massachusetts General Hospital (MGH), Morton anesthetized a 20-year-old housepainter named Gilbert Abbott while a large tumor was excised from his jaw. Abbott slept through the entire procedure.

When the operation was complete, John Collins Warren, a professor of surgery at MGH who had hosted both demonstrations, “looked at the audience and declared, ‘Gentlemen – this is no humbug.’”

Today, the smartest and most skeptical academic experts I know are floored by a different emerging technology: generative AI. There seems to be a rush among healthtech investors to back startups applying AI to solve specific problems in biomedicine and healthcare. Meanwhile, incumbent biopharma and healthcare stakeholders are (or soon will be) urgently contemplating how and where to leverage generative AI – and where their own troves of unique data might be utilized to fine-tune AI models and generate distinctive insight.

It seems time to declare, “Generative AI is no humbug.” 

Hope Beyond The Hype

I’ve started with this 19th-century story to remind us that physicians and scientists have always struggled to assess the promise of emerging technologies.

Today, the hype around generative AI is off the charts. “A New Area of A.I. Booms, Even Amid the Tech Gloom,” reads a recent New York Times headline. It continues, “An investment frenzy over ‘generative artificial intelligence’ has gripped Silicon Valley, as tools that generate text, images and sounds in response to short prompts seize the imagination.”

It’s reasonable to wonder whether this is just the latest shiny tech object that arrives with dazzling promise only to fizzle out, never meaningfully impacting the way care is delivered and the way drugs are discovered and developed.

So far, AI hasn’t really moved the needle in healthcare, as a remarkably blunt recent post from the Stanford Institute for Human-Centered AI (HAI) acknowledges. 

But I believe generative AI offers something different, and profound — a perspective shared by the Stanford HAI authors. Generative AI is an area with which we should (and arguably, must) engage deeply, rather than merely follow with detached, bemused interest.

Generative AI and ChatGPT

What is generative AI? You can ask the AI itself. According to ChatGPT, OpenAI’s wildly popular demonstration model of the technology, generative AI “refers to a type of artificial intelligence that generates new data, such as text, images, or sound, based on a set of training data.”

That explanation is all well and good, but if you want to viscerally appreciate the power of the technology, you owe it to yourself to experience it. Go to chat.openai.com, sign up for free, and try ChatGPT yourself. The specific examples always seem trivial, but the range and fluidity of the responses is extraordinary in a way you have to engage with to really understand.

For example, I asked it to write a commentary about climate change from the perspective of Bernie Sanders, and then another one from the perspective of Donald Trump – the results were uncanny. One of my teenage daughters, not easily impressed, was blown away when I asked the technology to “write a 200-word essay from perspective of teenage daughter asking dad to approve [a particular app],” a highly topical subject in our household. The result was fantastic, even persuasive.
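For readers who would rather experiment programmatically, here is a minimal sketch of sending the same kind of prompt through OpenAI’s API instead of the chat interface. This is illustrative only: it assumes the openai Python package, an API key stored in the OPENAI_API_KEY environment variable, and a GPT-3.5-era text model name, none of which are endorsements.

    # Illustrative only: a prompt like the ones above, sent via OpenAI's API.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set;
    # the model name is a GPT-3.5-era example, not a recommendation.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a commentary about climate change from the perspective of Bernie Sanders.",
        max_tokens=300,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())

The programmatic route produces the same kind of uncanny output; the difference is that you can script thousands of prompts rather than typing them one at a time.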

Of course, the technology isn’t perfect, and certainly not infallible. For example, when I asked it about the line “Is it safe yet?” in a Dustin Hoffman movie, it correctly identified both the film (“Marathon Man”) and Hoffman’s character, but incorrectly attributed the line to Hoffman rather than to his interrogator, portrayed by Laurence Olivier.

Such errors are not unusual and reflect a well-described challenge known as “hallucinations,” where the model confidently provides inaccurate information, often in the context of other information that’s accurate. 

In another example, discussed by Ben Thompson at Stratechery, the model is asked about the views of Thomas Hobbes. It generates a response that Thompson describes as “a confident answer, complete with supporting evidence and a citation to Hobbes work, and it is completely wrong,” confusing the arguments of Hobbes with those of John Locke.

Not surprisingly, healthcare AI experts tend to emphasize the role of “human in the loop” systems for high stakes situations like providing diagnoses. One framing I’ve heard a lot from AI enthusiasts is “you’re not going to be replaced by a computer – you’re going to be replaced by a person with a computer.”

Large Language Models and Emergence

The capabilities behind ChatGPT are driven by a category of model known as “Large Language Models,” or LLMs. The models are trained on as much coherent text as their developers can hoover up, and learn the statistical patterns of which words tend to appear near one another – enough to predict what plausibly comes next.
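To make that intuition concrete, here is a deliberately toy sketch of my own (not from any real LLM codebase): it merely counts which word follows which in a tiny corpus and “predicts” the most frequent successor. Real LLMs replace these counts with neural networks holding billions of parameters trained on vast corpora, but the underlying task, modeling which words tend to follow others, is the same.

    # A toy next-word "model" (counting, not a neural network).
    # Real LLMs learn these statistics with billions of parameters;
    # this just tallies adjacent word pairs in a tiny corpus.
    from collections import Counter, defaultdict

    corpus = ("the patient slept through the procedure "
              "and the audience declared it no humbug").split()

    followers = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        followers[prev_word][next_word] += 1

    # The most frequent word observed after "the" becomes the "prediction."
    print(followers["the"].most_common(1))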

A remarkable property of LLMs and other generative AI models is emergence: an ability that isn’t present in smaller models, but is present (and often arises seemingly abruptly) in larger models.

As two authors of a recent paper on emergence in the context of LLMs explain,

“This new paradigm represents a shift from task-specific models, trained to do a single task, to task-general models, which can perform many tasks. Task-general models can even perform new tasks that were not explicitly included in their training data. For instance, GPT-3 showed that language models could successfully multiply two-digit numbers, even though they were not explicitly trained to do so. However, this ability to perform new tasks only occurred for models that had a certain number of parameters and were trained on a large-enough dataset.”

(If your first thought is of Skynet becoming sentient, I’m with you.)

Models in this category are often termed “foundation models,” since they may be adapted to many applications (see this exceptional write-up in The Economist, and this associated podcast episode). While the training of the underlying model is generally both time-consuming and expensive, the adaptation of the model to a range of specific applications can be done with relative ease, requiring only modest additional tuning.
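What that “modest additional tuning” looks like in practice varies, but a hypothetical sketch using Hugging Face’s transformers library (my example, not one cited in any of the references above) gives the flavor: start from a general-purpose pretrained checkpoint, then fine-tune it on a small, task-specific labeled dataset. The checkpoint name, the two toy examples, and the “drug resistance” labeling task below are all placeholders for a real project’s choices.

    # A hedged sketch of adapting a pretrained model to one narrow task
    # with Hugging Face transformers. The two-example "dataset" is a
    # placeholder; a real adaptation needs a proper labeled corpus.
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    checkpoint = "distilbert-base-uncased"  # a small general-purpose model
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    texts = ["culture positive for resistant organism", "routine annual physical"]
    labels = [1, 0]  # hypothetical task: flag notes suggesting drug resistance
    enc = tokenizer(texts, truncation=True, padding=True)
    train_dataset = [{"input_ids": ids, "attention_mask": mask, "labels": label}
                     for ids, mask, label in zip(enc["input_ids"],
                                                 enc["attention_mask"], labels)]

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="adapted-model", num_train_epochs=3),
        train_dataset=train_dataset,
    )
    trainer.train()  # far cheaper than pretraining the foundation model itself

The point of the sketch is the asymmetry described above: the expensive pretraining happens once, while the adaptation step needs only a modest labeled dataset and modest compute.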

Implications for Healthcare and Biopharma

Foundation models represent a particularly attractive opportunity in healthcare, where there’s a “need to retrain every model for the specific patient population and hospital where it will be used,” which “creates cost, complexity, and personnel barriers to using AI,” as the Stanford HAI authors observe.

They continue:

“This is where foundation models can provide a mechanism for rapidly and inexpensively adapting models for local use. Rather than specializing in a single task, foundation models capture a wide breadth of knowledge from unlabeled data. Then, instead of training models from scratch, practitioners can adapt an existing foundation model, a process that requires substantially less labeled training data.”

Foundation models also offer the ability to combine multiple modalities during training. As Eric Topol writes in a recent, essential review (see also the many excellent references within): “Foundation models for medicine provide the potential for a diverse integration of medical data that includes electronic health records, images, lab values, biologic layers such as the genome and gut microbiome, and social determinants of health.”

At the same time, Topol acknowledges that the path forward is “not exactly clear or rapid.” Even so, he says, the opportunity to apply generative AI to a range of tasks in healthcare “would come in handy (an understatement).” (Readers interested in keeping up with advances in healthcare-related AI should consider subscribing to “Doctor Penguin,” a weekly update produced by Topol and colleagues.)

The question, of course, is how to get from here to there — not to mention envisioning and describing the “there.” 

The journey won’t be easy. The allure of applying tech to healthcare and drug discovery has been repeatedly, maddeningly thwarted by a range of challenges, particularly involving data: comparatively limited data volume (vs text on the internet, say), inconsistent data quality, data accessibility, and data privacy. Other obstacles include healthcare’s notorious perverse incentives and the perennial difficulty of reinventing processes in legacy organizations (how’s your latest digital transformation working out?).

As the seasoned tech experts at the “All In” podcast recently discussed, it’s not yet clear how the enormous models underlying generative AI will find impactful expression in startups – though the interest in figuring this out is intense. One of the hosts suggested that the underlying AI itself was likely to become commoditized, or nearly commoditized; hence,

“the real advantage will come from applications that are able to get a hold of proprietary data sets and then use those proprietary data sets to generate insights, and then layering on … reinforcement learning.  If you can be the first out there in a given vertical with a proprietary data set, then you get the advantage, the moat of reinforcement learning. That would be the way to create, I think, a sustainable business.”

When you think about promising proprietary data sets, those that are owned or managed by healthcare organizations and biopharmaceutical companies certainly come to mind.

Healthtech Investors See An Opportunity

Perhaps not surprisingly, many healthtech experts are keen to jump on these emerging opportunities through investments in AI-driven startups.

Dimension partners (L to R) Zavain Dar, Adam Goulburn, Nan Li

A new VC, Dimension, was recently launched with $350M in the bank, led by Nan Li (formerly a healthtech investor at Obvious Ventures), Adam Goulburn and Zavain Dar (both experienced healthtech investors joining from Lux Capital). They’re focused on companies at the “interface of technology and the life sciences,” and are looking “for platform technologies that marry elements of biotech with computing.” (TR coverage).

Healthtech and the promise of AI have also captured the attention of established biotech investors (it’s a key thesis of Noubar Afeyan’s Flagship Pioneering) and of prominent tech VCs, like Andreessen Horowitz. Generative AI informs the thinking of Vijay Pande, who leads Andreessen’s Bio Fund.

Also focused on this interface: five emerging VC investors who collaborate on a thoughtful Substack devoted to the evidence-based evaluation of advances (or putative advances) in Tech Bio, with a particular emphasis on AI. The contributors include Amee Kapadia, a biomedical engineer (Cantos Ventures); Morgan Cheatham, a data scientist and physician-in-training (Bessemer Venture Partners); Pablo Lubroth, a biochemical engineer and neuropharmacologist (Hummingbird Ventures); Patrick Malone, a physician-scientist (KdT Ventures); and Ketan Yerneni, a physician (also KdT Ventures).

Meanwhile, physician Ronny Hashmonay recently announced on LinkedIn that he “is leaving Novartis, after 11.5 years,” and “is founding a new VC fund to continue working and leading the tech revolution in healthcare.”

Concluding Thoughts

It’s enormously exciting, if frequently disorienting, to participate in the installation phase of a new technology: the stage where the promise is recognized but the path to realization is less clear. Our challenge and opportunity is to help figure out how to translate, responsibly, the power and possibility of generative AI into tangible, meaningful benefit for patients and for science.

One final note: despite the ignominious demonstration by Horace Wells, both ether and nitrous oxide ultimately found widespread use as general anesthetics, along with chloroform. Significantly improved agents and processes were developed, often incrementally, through the first half of the 20th century and beyond. The progress in anesthetics over the last 150 years has been nothing short of remarkable.

And yet, as Offit reminds us more than 175 years later, “the exact mechanism by which they work remains unknown.”

Sounds familiar.