29
Mar
2023

Timmerman Traverse for Life Science Cares is Back

Luke Timmerman, founder & editor, Timmerman Report

The next biotech team adventure is here.

I’m thrilled to announce the Timmerman Traverse for Life Science Cares 2023.

We’re on a mission to raise $1 million. We’re giving it all back to the most vulnerable people in the communities where we live and work.

This trip involves shared sacrifice. We will cover 20+ miles and gain 8,000 feet of net elevation on the Presidential Traverse in New Hampshire. The expedition is set for Aug. 20-23, 2023.

Sweat will be involved. Legs will be tired. The weather may get rough. Teammates will need to lend each other a hand.

The difficulty is what makes it so meaningful. Over the past two years, this initiative has raised a combined $1.7 million to fight poverty.

Awareness of Life Science Cares and its outstanding network of community nonprofits is on the rise. Relationships among scientists, executives and investors have been forged on the trails. Sometimes these relationships translate into new professional and business opportunities.

Who’s on the Team?
  • Luke Timmerman, founder & editor, Timmerman Report
  • Art Krieg, founder and former chief scientific officer, Checkmate Pharmaceuticals
  • Samantha Truex, CEO, Upstream Bio; board of directors, Life Science Cares
  • Lydia Meyer-Turkson, senior director, early innovation partnering, Johnson & Johnson
  • Ted Love, entrepreneur, former CEO, Global Blood Therapeutics
  • Tim Springer, Professor of Biological Chemistry and Molecular Pharmacology, Professor of Medicine, Harvard Medical School; founder, Morphic Therapeutic, Scholar Rock
  • Andrea van Elsas, partner, Third Rock Ventures
  • Lalo Flores, CEO, Century Therapeutics
  • David Schenkein, general partner, GV
  • June Lee, venture partner, 5AM Ventures
  • Katherine Andersen, Head of Life Science & Healthcare Corporate Banking and Relationship Management, SVB
  • Ben Portney, investment partner, Andreessen Horowitz
  • Kaja Wasik, co-founder and chief scientific officer, Variant Bio
  • Yael Weiss, CEO, Mahzi Therapeutics
  • Heath Lukatch, managing partner, Red Tree Venture Capital
  • Raju Prasad, CFO, CRISPR Therapeutics
  • Kaye Foster, senior advisor, executive coach, BCG; venture partner, ARCH Venture Partners
  • Sandra Glucksmann, biotech entrepreneur, former CEO, Cedilla Therapeutics
  • Uciane Scarlett, principal, MPM Capital
  • Sheila Gujrathi, board chair, ImmPACT Bio
  • Hamid Ghanadan, founder and CEO, The Linus Group

The Life Science Cares network is brimming with excellent nonprofits that are entrepreneurial problem-solvers. Some provide basics like food and shelter. Others concentrate on long-term on-ramps to a better life, through science education and job training. The people behind them are smart, caring, inspirational, and resilient.

Our resources are concentrated in biotech industry clusters — Boston, San Francisco, San Diego, Philadelphia, and New York.

Everyone who goes on this expedition must raise a minimum of $25,000. Many raise more. Each hiker has his or her personal reasons for stepping up to the challenge. I encourage you to look over their personal statements and donate to their campaigns on JustGiving.org.

Corporate sponsorship opportunities for this year’s Timmerman Traverse for Life Science Cares are available. Contact Mathias Vialva (mathias@lifesciencecares.org) for more information. If you are interested in joining the hike, contact me at luke@timmermanreport.com.

These trips are a wonderful way to enjoy nature, get fit, give back, and make new friends.

Biotech entrepreneurs can do amazing things when focused on a tough challenge.

Let’s roll up our sleeves and fight poverty together.

Luke

See what alumni of the Timmerman Traverse have to say:

“We started as industry colleagues with an aligned philanthropic goal. We finished as friends, deeply connected through an experience none of us will ever forget and all of us will work to rekindle in our lives.” — Reid Huber, partner, Third Rock Ventures

“On the top of Mt Clay, there are no hierarchies, job titles, favored sons, or favored daughters. You are driven by your personal grit, your fellow climbers, and the beauty and challenges in front of you. We climbed for ourselves, we climbed for each other, but most importantly we climbed for the community being served by Life Science Cares. It was a bond we will never lose.” — Dave Melville, founder and CEO, The Bowdoin Group

“It was amazing to see leaders across our biotech industry come together to raise funds to help bridge the unfortunately real gap between the medicines we develop, and the patients and communities who need access to them.” — Vineeta Agarwala, general partner, Andreessen Horowitz

 

Members of the Timmerman Traverse for Life Science Cares 2022. At Crawford Notch.

 

28
Mar
2023

Treating a Common, Underappreciated Disease: Eric Dobmeier on The Long Run

Today’s guest on The Long Run is Eric Dobmeier.

Eric is the CEO of Seattle-based Chinook Therapeutics.

Eric Dobmeier, CEO, Chinook Therapeutics

Chinook is seeking to develop drugs for kidney diseases. About 10 percent of people on Earth are estimated to have some degree of chronic kidney disease. It ranges from mild forms all the way through end-stage renal disease that requires dialysis. America spends $130 billion a year on managing and treating kidney disease.

The therapeutic options are pretty limited. Chinook is developing atrasentan, a small molecule in Phase III for IgA nephropathy – a disease of localized inflammation in the kidneys. It also has an antibody drug candidate for IgA nephropathy in Phase II, aimed against a target called APRIL.

The founding thesis of the company was to use some of the tools of precision medicine – which have successfully changed the way many types of cancer are treated. That vision hasn’t exactly materialized yet. But the FDA has shown some willingness to consider surrogate endpoints, biomarkers, that make clinical trials a bit more practical to run. Kidney disease has become more attractive for drug developers who have to look at the time and expense required, and probability of success of various disease categories, before deciding where to invest.

During a biotech financial downturn, Chinook finds itself in the fortunate position of having late-stage clinical assets that will deliver meaningful data readouts within the next year.

Eric comes to this opportunity after a long career on the business side of biotech at Seagen. A lawyer by training, he made his way from basic contracts to more strategic business development, and eventually other functions – investor relations, communications, manufacturing, and corporate strategy. He was there 15 years, as the company grew into the leading maker of antibody-drug conjugates for cancer. Seagen has now agreed to be acquired by Pfizer for $43 billion.

Eric’s had quite a career already, and he’s now in an area with a lot of patient need, and a lot of potential for biotech to help people live better lives.

And now for a word from the sponsor of The Long Run.

 

 

Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with Occam

Now, please join me and Eric Dobmeier on The Long Run.

 

23
Mar
2023

Welcome to the AI Irruption

David Shaywitz

Biopharma, like the rest of the world, appears to be on the threshold of profound, technology-induced change. Incredible advances in artificial intelligence, manifested most recently in GPT-4, are here. 

This technology, Ezra Klein explains in the New York Times, “changes everything.”  Bill Gates describes it as “the most important advance in technology since the graphical user interface,” and declares, “the age of AI has begun.” Similarly, Times columnist Thomas Friedman argues:

“This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.”

At an entrepreneurship salon at Harvard this week, I discussed GPT-4 with Dr. Zak Kohane, Chair of the Department of Biomedical Informatics at Harvard University (disclosure: I’m a lecturer in the department), and Editor-in-Chief of the soon-to-be-launched NEJM-AI.  

Kohane received early access to GPT-4. He has just completed a book, The AI Revolution in Medicine: GPT-4 and Beyond, to be published in mid-April, about the impact of emerging AI technology on healthcare. I read an advance draft copy of the book. Kohane’s co-authors are Peter Lee, Corporate Vice President and Head of Microsoft Research, and Carey Goldberg, a distinguished journalist.

From both the book and the salon, the three most striking features of GPT-4 seem to be:

  1. Its ability to reason;
  2. Its ability to communicate and engage with people in natural language;
  3. The fact that no one really understands how it works.

How did GPT-4 impress Kohane? For starters, it performs spectacularly on standardized exams like the medical boards, and seems to be able to reason thoughtfully, Kohane says.

Zak Kohane

For example, we discussed the ability of GPT-4 to respond to an apparent paradox that Kohane says stumps the vast majority of non-physician data scientists. The question: why is a low white blood cell count between midnight and 8 am associated with far worse outcomes than a low count between 8 am and 4 pm?

GPT-4’s top suggestion, Kohane says, was the correct answer: the issue isn’t so much the low blood count but rather the existence of a blood draw in the middle of the night. That signals the patient is experiencing some sort of medical crisis. 

GPT-4 can also provide sophisticated differential diagnoses, Kohane says, and suggest relevant next steps. 

He posed GPT-4 a question from his own specialty, pediatric endocrinology:

“I gave it a very complicated case of ambiguous genitalia that I was actually called for once back in my training. And it’s able to go through everything from the clinical presentation to the molecular biology. It had a disagreement with me and was able to cogently disagree with me, and it was also able to articulate concerns for the parents of this child and for the future engagement of the child in that discussion. So, on the surface it’s acting like one of the most sensitive, socially aware doctors I’ve ever met. But we have no guarantee that it is such.”

The ability of GPT-4 to engage in such a human-like fashion is one of the most striking, and disarming, characteristics of the technology. Many who engage with GPT-4 over time describe the sense of developing a close relationship with it, in a fashion that can feel dislocating. Kohane, Lee, Friedman and others all describe losing sleep after spending time with GPT-4. They are overwhelmed, it seems, by the power and possibility of what they’ve experienced.

“GPT-4’s abilities to do math, engage in conversation, write computer programs, tell jokes, and more were not programmed by humans,” wrote Lee, of Microsoft Research. These capacities emerged unexpectedly, he wrote, “as its neural network grew.”

Peter Lee, Corporate Vice President,
Microsoft Research

This creates what Lee calls a “very big problem.” He writes: “Because we don’t understand where GPT-4’s capabilities in math, programming, and reasoning come from, we don’t have a good way of understanding when, why and how it makes mistakes or fails….”

The implications are dizzying. As Kohane writes, “I realized we had met an alien agent and it seemed to know a lot about us, but at the moment, I could not decide if it should be given the keys to our planet or sealed in a bunker until we figured it out.”

Implications for Healthcare

Some AI experts, like University of Toronto professor and author Avi Goldfarb, say AI technologies will serve as a democratizing force in healthcare. He suggests, in a podcast interview with Patrick O’Shaughnessy, that AI will be able to “automate diagnosis,” with the consequence of “upskilling” the “millions of medical professionals” like nurses and pharmacists. The result, he suggests, is that:

“There’s hundreds of thousands of doctors in the U.S. and their special skill in diagnosis is going to go away. They’ll have to retool and figure out how to deal with that. But there’s millions of other medical professionals who are now going to be able to do their jobs much better, be more productive. And that upskilling provides a lot of what we see as the hope and opportunity for AI.”

I asked Kohane about this, and (to my surprise) he seemed to largely agree, albeit with a slightly different framing. He notes that we have a crisis resulting from a shortage of primary care doctors. Massachusetts (as the Boston Globe has recently reported) is being hit particularly hard. While we may have an idealized view of how the best primary care doctors can treat patients, Kohane argues, this is generally not the lived reality. He suggests that a nurse practitioner or physician assistant, coupled with AI, could generally offer a higher level of care than a typical primary care doctor without AI. Given the shortage of trained medical professionals, AI can help improve the quality and quantity of available care.

It’s also clear that patients will have access – and, through Bing (which works with GPT-4 when accessed from Microsoft’s Edge browser), already have access – to this knowledge and information. 

Many patients and caregivers are eager for this capability, as Goldberg writes in the book. The problem is that GPT-4 (like surgeons and Harvard grads) tends to be frequently correct but rarely in doubt. It still suffers from the problem of “hallucinations,” making up information that sounds plausible but isn’t accurate. Like humans suffering from the Dunning-Kruger Effect, it can insist that it’s correct when it’s wrong. I saw that when GPT-4 tried to persuade me that Goose, not Merlin, uttered the line “That MIG really screwed him up,” in the original Top Gun.

For now, everyone seems to acknowledge the hallucination problem, and call for “human in the loop” approaches. But what happens as we gain more confidence in GPT-4 and, motivated by both convenience and cost, are increasingly tempted to take the human out of the loop?

Stepping Back

What seems clear is that we are truly experiencing what economist Carlota Perez has described (see here) as the “irruption phase” of emerging technology. We recognize that there’s something promising and incredibly exciting, and now everyone is trying to figure out what to make of it, and how to apply it.

These days, it feels like every healthtech person I know who hasn’t started their own VC fund (as noted here) is either starting their own health+AI company or writing a book about health+AI, and in some cases both. Microsoft Office programs are about to be boosted by GPT-4. Other companies, such as the regulatory intelligence company Vivpro, are already offering GPT-powered tools. Every consultancy is offering executives frameworks and navigation guides to the new technology.

The truth, of course, is that no one has any idea how things are going to evolve. The pharma company of the future, the healthcare system of the future, the payor of the future, perhaps even the FDA of the future: all are likely to be profoundly changed by technologies like GPT-4. 

The technology will likely first arrive as incremental, point solutions, says Goldfarb, of the University of Toronto. Eventually, however, the real productivity gains arise from more fundamental change.  The classic example here, as I’ve discussed, and as Goldfarb also cites, is that dropping electric generators into factories built around steam power didn’t have much impact. But reconceptualizing the structure of factories from the ground up, in a fashion enabled by electricity, was transformative.

As Microsoft AI expert Sebastien Bubeck observes in Kohane’s book, “GPT-4 has randomized the future.  There is now a thick fog even just one year into the future.”

What an amazing, terrifying, thrilling, and hopeful time to be alive.  For those of us in medicine and biomedical science: what an opportunity, and profound responsibility, to be in the arena, actively shaping the future we hope to create and aspire to inhabit.

14
Mar
2023

The Future of Neuroscience Drug R&D: Ryan Watts on The Long Run

Today’s guest on The Long Run is Ryan Watts.

Ryan is the co-founder and CEO of South San Francisco-based Denali Therapeutics.

Ryan Watts, co-founder and CEO, Denali Therapeutics

Denali is one of the prominent development-stage biotech companies working on treatments for neurodegenerative diseases. It has a pipeline with seven drug candidates in clinical development. It’s developing small molecules and large molecules against a range of neurodegenerative diseases that includes rare diseases such as Hunter Syndrome and ALS, as well as more common maladies such as Alzheimer’s disease and Parkinson’s.

Ryan is a scientist by training. He did his PhD at Stanford University and spent the first part of his career running labs at Genentech. He joined with former Genentech colleagues Alex Schuth and Marc Tessier-Lavigne to co-found Denali in 2015. The company secured a Series A financing of $217 million – which was big then, and is still big now. The company doesn’t yet have any products on the market, but it has amassed $1.34 billion in cash as of the end of 2022, and has established a broad base of support for its R&D through partnerships with Sanofi, Biogen, and Takeda Pharmaceuticals.

This is a wide-ranging conversation that includes Ryan’s path into biotech and neuroscience, some of the classic challenges of the field, and reasons why he’s optimistic that significant progress is coming to neuroscience R&D.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories.  Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters, your groundbreaking ideas.

Learn more at:

Scientist.com/LongRun

Now, please join me and Ryan Watts on The Long Run.

 

6
Mar
2023

Remote/Hybrid Work is Here to Stay. Biotech Should Embrace It

Chris Garabedian, chairman and CEO, Xontogeny

There has been much debate about the biotech workplace in the aftermath of pandemic disruptions.

Employers and employees are all thinking about how and to what extent companies should enable and support remote-based work. People are discussing the advantages and disadvantages of remote work, especially in terms of the effect on productivity and creativity. 

This matters in biotech. The goal of the industry is to discover, develop and manufacture products for patients. Two of those three activities — wet-lab discovery and manufacturing — require specialized physical space. Much of this work has been outsourced and takes place far from headquarters. Many biotech office workers rarely, if ever, set foot in the wet labs or manufacturing facilities, even if those functions are in the same building.

The nostalgia for pre-Covid office culture must die. Some biotech leaders, including John Maraganore (TR, Jan. 24, 2023), argue that having management and employees working together in person is essential to a company’s ability to be productive, creative, and competitive.

I disagree. This is a notion that should fade away.

The Fourth Industrial Revolution has arrived, blurring the physical and digital worlds with increasingly smart, connected technologies that allow us to communicate intimately with an expanded network of people ever farther away from us geographically. Those clinging to work practices established in the Third Industrial Revolution are swimming against the current of an inevitable change in corporate culture and talent management. Companies that embrace this change will have a sustainable competitive advantage.

It is important to take a long view of how technology has profoundly changed our behaviors in how we live, learn and work over the last 200 years, especially over the past 50 years.

No one questions that we are designed to be social creatures and we long for interaction with others from the moment we leave our mother’s womb. Throughout most of history, we have established strong relationships with our immediate and extended families and forged friendships and work relationships with people nearby. 

Transportation and Communications

The last two centuries brought technological change to transportation and communications. These developments changed the nature of commerce, how and where we worked, and our personal and professional identities and lifestyles. 

Although the Egyptians and Mesopotamians used sea vessels thousands of years ago, most people relied on horses and camels to connect with others for socialization and commerce. It was not until the 19th and 20th centuries that we saw the invention of the steam locomotive (1812), the first automobile (1886) and the first airplane (1903), which became available to the masses, at least in wealthier developed countries, decades later.

While the invention of the Gutenberg printing press in the 15th Century is credited as one of the greatest inventions in history, it was not until the last 175 years that we saw the widespread proliferation of interpersonal communication tools. It started with the telegraph (1844) and telephone (1876), and continued with the mobile telephone (1973). These communication tools enabled us to reach and interact with others who lived great distances away. 

The more profound impact on our socialization, culture, identity and sense of community came with the advent of radio and television. Although these were one-way, passive forms of communication, these media allowed us to become familiar with people who were not in close physical contact with us.

This passive form of communications technology has exploded into an interactive lollapalooza over the last 30 years with the internet, social media, VOIP, texting and messaging apps. When Covid hit, it brought another change, forcing everyone in business to use Zoom and similar video conference platforms. 

We discovered some things. It was now possible to have a productive meeting, with internal colleagues or external collaborators, that went well beyond the audio-only conference calls of the past.

Before COVID, the arguments for remote-based and hybrid work over traditional office culture were largely theoretical. But now we have run the experiment.

Consider the following advantages:

  • Flexibility and Improved Work-Life Balance: Remote work allows for a more flexible schedule, letting employees better balance their work and personal lives; work-life conflict can be minimized or eliminated when employees are empowered to set their own schedules;
  • Increased Productivity: Studies have shown that remote workers are often more productive because of fewer distractions and a quieter work environment. Employees can also structure their day and work environment based on what’s best for them;
  • Cost Savings: Remote work eliminates the need for commuting, saving time and money on transportation. It also reduces office space and other overhead expenses;
  • Access to a wider pool of talent: Companies can hire the best employees, regardless of where they live. It may also provide an easier path to achieve a more diverse workforce;
  • Improved mental and physical health: Remote work can reduce stress from the daily commute. It can allow more time for exercise that improves physical health;
  • Improved morale and satisfaction: Remote workers have reported higher levels of job satisfaction and morale;
  • Environmentally friendly: Remote work can reduce carbon emissions by reducing commuting.

While no one is suggesting the choice between old office culture and remote-based work is black and white, as each has its pros and cons, I believe the balance is more favorably weighted toward decentralizing work that can be done anywhere.

Let’s Stop (or Considerably Slow) the Travel

My job requires leading investments across dozens of companies, serving on Boards of Directors, and having a presence at conferences and industry events.

It might sound shocking, but it is no exaggeration to say that the move to video meetings allowed me to be 2 to 3 times more productive than in the pre-Covid era. 

In 2018 and 2019, I spent over 200 nights in hotels. That translated into countless hours on trains and planes (often with spotty or unworkable WiFi) and the often unproductive time Uber-ing to the airport, waiting at the gate, Uber-ing to a hotel, and waiting in line to check in. At the end of this typical slog, I’d realize I wasted an entire day, often to attend a 90-minute in-person meeting or to be part of a 60-minute panel at a conference.

While I often would be able to stay on top of my emails or dial into a few critical conference calls, it would have been easier to manage if I were consistently in front of a computer, on video, with no concerns about a good WiFi signal. 

On prolonged trips in the pre-Covid era, it was not uncommon for me to have limited or no meaningful interactions with my employees. Since Zoom became a mainstay, I now have more routine daily interactions with my employees. I have experienced more team engagement, not less.

Several years before Covid lockdowns forced the new way of working, I founded my company, Xontogeny, with a simple concept: good science and entrepreneurs were found in institutions and geographies all over the country – not just in the top biotech hubs of Boston/Cambridge and the San Francisco Bay Area. Our industry needed a better way to assist these companies by providing operational and strategic support remotely. 

The model has worked. We have successfully supported more than a dozen seed investments, which often require weekly interactions. Almost none of those meetings take place in person. Our seed investment companies are located in Philadelphia, Chicago, San Diego, and Research Triangle Park, NC, and our first collaboration was with a company in Blacksburg, Virginia. The Xontogeny team covers even more territory through investments out of our Perceptive Xontogeny Venture Funds (an investment vehicle of Perceptive Advisors).

As investors, we keep tabs on our companies largely through quarterly board meetings. With over 20 investments, simply participating in board meetings translates to a big time commitment. If we were required to attend four board meetings per year in-person for every company, it would be almost impossible because of the required travel. 

In-person board meetings can be especially inefficient. For example, to justify flying 6 to 8 board members from various distances, the meeting agendas are often extended to 6 to 7 hours (e.g., 8:00am-2:00pm). Many directors have to depart early to catch their Uber back to the airport to get home so they are not forced to take a red-eye back to the East Coast. 

Contrast this with the experience of dialing into a Zoom link for 3 to 4 hours. It’s now possible to fit two or three board meetings into a day. In the last three years, no board meeting I’ve attended virtually has needed more than four hours. More often than not, they end early, without anyone ever feeling there was insufficient time to cover the necessary topics. This increased efficiency frees up time for me to do other valuable things, like meet with employees.

Requests for in-person attendance at conferences are back in full swing, but I already long for the Covid era, when I was able to attend multiple conferences within a couple of weeks, or even in the same week.

At one Netherlands-focused virtual conference, I was able to engage with dozens of scientists and entrepreneurs. The following week, I attended a similar virtual conference focused on Copenhagen. Each of these conferences took up less than a couple of hours on my calendar, but led to numerous follow-ups on email, many pitch decks shared, and subsequent one-on-one Zoom meetings.

The results were just as good, if not better, than if I had spent a lot of extra time and money to attend in person. The same could be said for the JP Morgan Healthcare Conference. I stopped attending in 2020, and have found ways to be just as productive, if not more so, by working in the virtual office.

Office Culture is Overrated and Outdated

Biotech has historically taken root in geographic hubs because they have an abundance of drug development talent. San Francisco, Boston, New York/New Jersey have been traditional leaders, thanks to their many excellent academic institutions. But it’s also true that many outstanding scientific entrepreneurs are scattered across the United States (and throughout the world). An increasing pool of experienced industry talent prefers to live outside the main biotech hubs, and many can’t be enticed to move back. 

Employers, myself included, have had to choose between losing star employees or adopting a more flexible remote-based model. I prefer to choose keeping star employees, and offering a more flexible, remote workplace. Young employees are especially interested in the flexible, remote work offerings. Any company that wants to recruit and retain this group of workers should be paying close attention.

The argument for how the younger generation will suffer in their careers if they don’t experience the same in-office facetime with their managers or CEOs strikes me as empty. They are getting more facetime, virtually, and more opportunities to contribute, display their creativity and convey the fruits of their work to managers. In this new virtual, geographically-agnostic biotech community, I’ve met more people, and established more meaningful relationships, than during any three-year period of my career.

Biotech Can Benefit

These last 18 months have been a challenging time for the vast majority of early-stage biotech companies. Management teams are finding it difficult to close financings, and are being asked to shelve programs and focus on one or two core activities to conserve cash. Layoffs are a weekly occurrence.

Our industry is constantly under pressure to be smarter about R&D productivity and careful with investor dollars. Cutting back on unnecessary leases, and embracing the remote workplace, is one way to be a good steward of capital and extend the company runway.

Of course, there are still good reasons to maintain some physical space, even for office workers. Some employees prefer to go into the office, and some management teams prefer office culture and will demand all employees report to the office 3 or 4 days per week. I don’t begrudge those who choose to offer this alternative working environment. Many workers prefer to get more of their social needs and personal identity through daily in-person work interactions. Many of us have formed our closest friendships, met romantic partners or found our spouses through the workplace.

In speaking with many colleagues, I’ve found that for every individual who prefers an in-person office setting, there are as many or more who prefer to spend more time at home with a partner or spouse, see their children more often, or visit aging parents. Some prefer to use the extra time to indulge a desire to travel. All of this can be done while remaining a productive employee with just as much opportunity to impress the boss, take on new assignments, and advance one’s career.

It is time to embrace the future of remote work or ‘very’ flexible hybrid models as we embark on the Fourth Industrial Revolution. We can work together more efficiently, productively and, yes, creatively.

These new communication tools bring us all closer together. Rather than go back to an old way of working, let’s put more energy into determining how to optimize this new hybrid/remote model so we can get better at our fundamental work – discovery, development and manufacturing of new products for patients.

If all goes well, we’ll see benefits extend far beyond what we’ve seen in these last three years.

28
Feb
2023

A Life in Biotech & the Cell Therapy Wave: David Hallal on The Long Run

Today’s guest on The Long Run is David Hallal.

David is the CEO of Waltham, Mass.-based ElevateBio.

David Hallal, chairman and CEO, ElevateBio

ElevateBio describes itself as a technology-driven company for cell therapies. It has pulled together gene editing tools, induced pluripotent stem cells, and various viral vectors necessary to modify cells to fight cancer or treat other diseases.

David co-founded ElevateBio in 2017 with Mitchell Finer, the president of R&D, and Vikas Sinha, the chief financial officer. They saw a big bang moment in cell therapy, as hundreds of companies were being formed around the time of FDA approval of CD19-directed CAR-T therapies for cancer from Novartis and Kite Pharma. They saw that many of these companies weren’t fully formed; they had a piece of technology here or there, but not the whole toolkit. Many of these companies were going to struggle to raise the cash needed to invest in facilities, and were likely to need help from partners to refine their processes if they were ever going to do complex manufacturing at scale.

ElevateBio raised $150 million in a Series A financing in May 2019. It has used the money, and more that came later, to invest a lot in facilities and people with know-how to run them. The business is something of a hybrid animal. It uses its technology, people and facilities to make cell therapies under contract for other companies. You could call that traditional contract manufacturing. But this isn’t exactly a ho-hum service provider with relatively flat profit margins. It seeks to further leverage its technology and entrepreneurial people by investing in companies with upside potential, such as AlloVir, Abata Therapeutics, Life Edit Therapeutics and a startup from the lab of George Daley, the prominent stem cell researcher at Boston Children’s Hospital and Dean of Harvard Medical School.

David came to this moment – the beginning of a cell and gene therapy wave – after a long career in more traditional biotech. He was CEO of Alexion Pharmaceuticals, the rare disease company that was eventually acquired by AstraZeneca. He came up on the commercial side of the business, including key early career stops at Amgen, Biogen and OSI Eyetech.

This episode was recorded in person at the JP Morgan Healthcare Conference in San Francisco. Biotech history buffs will especially enjoy the first half, where David talks about his days in pharmaceutical sales and what it was like to work at Amgen in the early days.

Now, please join me and David Hallal on The Long Run.

26
Feb
2023

The Success of Your Tech Deployment Depends On A Role You’ve Probably Never Heard Of 

David Shaywitz

The success or failure of many technology platforms — including in particular health tech platforms — rests with a largely obscure role of outsized importance: the “solutions engineer.” 

The role itself goes by many names. Back when I was at DNAnexus in the mid-2010s, this role was called “Solutions Scientist.” Others call it “Forward-Deployed Engineer” or “Embedded Analyst.”

Whatever the title, the solutions engineer (SE) is a technology or data expert who embeds within an organization that’s trying to figure out how to make a new technology platform work. The SE functions as an on-site super tech-support specialist who helps the customer use the technology and get as much value from it as possible. While the SE doesn’t need to reside in the customer’s organization, the SE must inhabit the customer’s challenges and workflow — and having a desk (real or virtual) within the customer’s team can help.

Richard Daly, CEO, DNAnexus

“In a high velocity health information technology environment such as we are in,” says DNAnexus CEO Richard Daly, the SE is the “key person” for both selling and service, because “they get inside the customer’s skin, co-own the problem, and fit the solution to the customer need.”

At the same time – and arguably, this is the most critical aspect – the SE also develops a richly nuanced understanding of the needs of the customer and is able to provide this intelligence back to the product manager and the rest of the engineering team, so that the platform can be evolved to more effectively meet the needs (and future needs) of the customer, and presumably other future customers.

In serving as a “key transmission point” for both customer and tech company, Daly notes, the SE “improves product development and fit-to-market.”

Moreover, Daly adds, since “customers in the health tech market are scaling,” which introduces its own set of challenges, a well-functioning team of SEs can “draw on industry and other customer experiences, in a way syndicating industry-wide knowledge, and ensuring continuing fit as the customer evolves.”

SEs come in different flavors, says Dr. Amy Abernethy, President of Product Development and Chief Medical Officer of Verily. 

“The type of SE depends on the type of tech itself. Sometimes the tech is purely software, in which case you need someone fluent in the intersection of software development and customer needs. Sometimes the tech developer is supporting mostly data products, in which case the SE looks more like an embedded analyst who has data empathy, an understanding of customer analytic needs, and a sense of the possible from a data perspective. In pharma, the tech product is becoming increasingly more a combo of software and data, and the SE needs to be able to blend both of these skills.”

Your Initial Solution May Not Be Your Customer’s Exact Problem

A lot of the thinking here comes back to the advice of entrepreneurship guru Steve Blank, whose customer development approach helped launch the lean startup movement. Blank emphasized the need for technology developers to spend as much time as possible with actual customers, to ensure that the needs the developers have prioritized are actually the needs customers face.

In this context, SE roles provide the opportunity for essential fine tuning; after all, if a tech platform wasn’t in the ballpark, the SE would never have been given the opportunity to engage with a customer in the first place, and certainly wouldn’t be in the privileged position to go work on a customer’s team. 

The most successful SEs exhibit an authentic interest in the customer’s business: they are innately, intensely curious about the work the customer is doing and the problems the customer is trying to solve. A high-EQ SE can become an enormously valuable member of the customer’s team, a critical resource that enables the team to function better and achieve more.

At the same time, it is absolutely critical that the SE not be regarded simply as an implementation specialist, someone who gets a team up and running on a new system. Rather, the most significant opportunity the SE affords is to provide an inquisitive and adaptable technology team with a sense of what is and isn’t working well, and how the technology could more effectively meet customer needs. If the technology team isn’t thirsty for and responsive to this feedback, a valuable opportunity is lost.

Significant Challenge And Opportunity for Biopharmas

The need for SEs applies not only to technology companies developing platforms, but also to internal technology teams within biopharmas. Indeed, because of the abiding, largely unbridged differences between the seemingly immiscible cultures of life-science-trained drug developers and engineering-oriented technologists, effective communication and shared understanding can be a challenge, as I’ve discussed here and here.

Amy Abernethy, president of product development and chief medical officer, Verily

In this context, Dr. Abernethy points out, SEs can “help with ‘lingua franca,’ translating terms between customers and developers.”

Adds Dr. Abernethy, “This task is also becoming more important within the tech and pharma companies themselves as we see a confluence of actions/capabilities across teams within the companies. Building lingua franca will be a key accelerant across the industry and the SE can help.”

In large biopharmas, it’s common for technology teams, after a highly structured, well-intentioned needs-gathering exercise, to put their heads down and set about developing and deploying a technology solution.

Afterwards, there’s often little rejoicing. Tech teams tend to grumble about how their brilliant technology is not being efficiently utilized, inevitably calling for more “change management,” while customers routinely complain that their most important needs are (still) not being met. It’s not uncommon to hear technology teams push for mandates that compel customers to use the new technology – an effort that’s received about as well as you might expect. What’s worse, this pattern seems to repeat all the time – it feels like the rule, not the exception.

In many cases, a critical missing link is an SE role – a person with “amphipathic” qualities embedded inside the customer teams to help apply and troubleshoot the technology, and (critically!) to provide ongoing granular feedback to the tech teams about how the tech could evolve to meet customer needs more effectively.

Success critically requires both curiosity and a commitment to continuous iteration on the part of tech teams. These teams must want to deeply understand customer challenges, and ideally begin to viscerally grasp what the customers are trying to do and the problems they are trying to solve. At least as importantly, the tech teams must see the relationship with their customers as a constant, ongoing dialog. This can be particularly challenging in biopharma, where technology teams are often more accustomed to obtaining and then building toward a set of fixed specifications.

There’s another complicating factor: many customers — especially within biopharmas — may not be all that clear on what they initially want from technology, or understand what might (or might not) be possible. This understanding can evolve and sharpen over time, particularly when catalyzed by a skilled SE, and enabled by a tech organization that’s driven to constantly refine the product on offer.

An Evolving Role

As technology plays a more central role in healthcare and biopharma, the role of the SE is likely to evolve accordingly. As Dr. Abernethy observes, “The technologies that are being developed are more and more informed by our understanding of underlying biology and/or data/implementation elements from clinical care. I wouldn’t be surprised if the phenotype of the solutions engineer of the future has a bit more science or clinical knowledge built in, and the job descriptions will similarly become more sophisticated.”

Perhaps reflecting her previous experience as Deputy Commissioner of the FDA, Dr. Abernethy adds:

“The SE may also need to generally understand the implications and requirements of many different regulatory paradigms. We see this right now, because sometimes the SE looks like a person who understands computational biology and software engineering, sometimes looks like a person who understands clinical and EHR data coupled with the needs of outcomes researchers delivering a dataset for a regulatory filing, and sometimes looks like a person who understands when a product crosses over the line to being regulated clinical decision support.” 

Phrased differently, the SE role reflects and embodies the need for constant dialog and evolutionary refinement between those developing digital, data, and technology offerings, and those who hope to leverage these powerful but still unformed or unfinished capabilities.

Bottom Line

A key gap in technology deployment, particularly in biopharmas, is the space between what technology teams develop and what customers actually want and need. A skilled solutions engineer (SE) can bridge this gap and serve as a bidirectional translator, helping customers more effectively utilize the technology, and guiding technology teams to create improved solutions. Success requires not only a skilled SE, but also a curious and adaptable technology team driven to elicit, understand, and respond to customer needs – even (especially) when these needs are difficult for the customer to articulate.

14
Feb
2023

Engineered B-Cell Therapies for Cancer & Rare Diseases: Joanne Smith-Farrell on The Long Run

Today’s guest on The Long Run is Joanne Smith-Farrell.

Joanne is the CEO of Cambridge, Mass.-based Be Biopharma.

Joanne Smith-Farrell, CEO, Be Biopharma

Many listeners of this show are familiar with the explosion of activity in cell therapy. Engineered T cell therapies have delivered extraordinary results for people with certain types of cancer. The success of these personalized T cell therapies, which get modified outside the body and re-infused, has inspired all kinds of academic and industrial work on engineering other cell types as cancer fighters, such as NK cells. Many others are seeking ways to make off-the-shelf, or so-called allogeneic, cell therapies that can be administered to patients much more cheaply and easily in clinics around the world.

What you don’t hear as much about is engineered B-cell therapies. This other arm of the adaptive immune system has been challenging for scientists to work with. This is the work Be Biopharma is setting out to do. It seeks to create engineered B-cell therapies for cancer and rare diseases that can be given off-the-shelf to any patient, dosed repeatedly over time, without the toxic preconditioning regimens required by today’s cell therapies.

Joanne came to lead this startup in 2021 from Bluebird Bio, where she was chief operating officer and head of the company’s oncology business unit.

Joanne’s passion for biopharmaceutical R&D shines through in this conversation. She has a personal story here that reveals a lot about her outlook on life.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters, your groundbreaking ideas.

Learn more at:

Scientist.com/LongRun

9
Feb
2023

First, We Need to Generate the Right Data. Then AI Will Shine

Alice Zhang, CEO, Verge Genomics

ChatGPT is a hot topic across many industries. Some say the technology underpinning it – called generative AI – has created an “A.I. arms race.” However, relatively little attention is given to what is needed to fully leverage the promise of generative AI in healthcare, and specifically how it may help accelerate drug discovery and development.

That’s a mistake.

Recently, David Shaywitz offered a thoughtful opinion on why he sees generative AI as a profound technology with implications across the entire value chain. We agree with many of David’s views but want to offer additional perspective.

Victor Hanson-Smith, head of computational biology, Verge Genomics

In short, our belief is that AI will identify better targets, thus reducing clinical failures in drug development and leading to new medicines. Generative AI will play a role. However, the fundamental challenge in making better medicines a reality comes down to closing the massive data gaps that remain in drug development today.

And when it comes to the most complex diseases that still lack meaningful medicines, where the data comes from is essential. Today, the source of data that powers generative AI has substantial gaps. Over the long term, generative AI will enable the creation of meaningful medicines, but it will not offer a panacea for what ails all of drug discovery.

A Primer on ML Classification and Generative AI

To start, it’s necessary to have a grounding in machine learning (ML) classification. As the name implies, ML classification predicts whether things are or are not in a class.

Email spam filters are a great example. They ask, “Is this spam, or is it not?”

They work because they’ve been “trained” on thousands of previous data points (i.e., emails and the text within them). Generative AI, by contrast, uses a class of algorithms called autoencoders, among other approaches, to generate new data that look like the input training data. It’s why a tool like ChatGPT is great at writing a birthday card. There are thousands, maybe even millions, of examples of birthday cards it can pull from.
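To make the classification idea concrete, here is a minimal naive Bayes spam filter in the spirit of the example above. The tiny training “corpus” and word probabilities are invented for illustration; real filters train on thousands of labeled emails:

```python
from collections import Counter
import math

# Toy training "corpus": (text, label) pairs, invented for illustration.
TRAIN = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("cheap meds free offer", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch with the team", "ham"),
    ("quarterly report attached", "ham"),
]

def train(examples):
    """Count per-class word frequencies, class sizes, and the vocabulary."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Naive Bayes: score each class in log space, return the higher one."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)  # class prior
        n_words = sum(word_counts[label].values())
        for word in text.split():
            # Add-one (Laplace) smoothing so unseen words don't zero the score.
            p = (word_counts[label][word] + 1) / (n_words + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, cc, vocab = train(TRAIN)
print(classify("claim your free money", wc, cc, vocab))  # spam
print(classify("team meeting tomorrow", wc, cc, vocab))  # ham
```

A generative model, by contrast, would have to learn enough about the training texts to produce a plausible new email, which is why it depends so heavily on the volume and coverage of its training data.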

There are limitations though.

Ask ChatGPT to summarize a novel that was published this week, and it will give you the wrong answer, or maybe no answer. That’s because the book isn’t yet in the training data.

What does this have to do with drug discovery?

The above example illustrates a foundational point in drug discovery: input data – especially its provenance and quality – is essential for training models. Input data is the biggest bottleneck in drug development, especially for complex diseases where few or no therapies exist. Our worldview is that the sophistication of the AI/ML approach is irrelevant if the training data underpinning it is insufficient in the first place.

So, what kind of input biological data does generative AI need? It depends on the task. For optimizing chemical structures, generative AI mainly relies on vast databases of publicly available protein structures and sequences. This is powerful. We expect generative AI will have a massive impact on small molecule drug design when there is already a target in mind, a known mechanism of action, and the goal is to optimize the structure of a chemical. The wealth of available protein structure and chemistry data means a model can be well trained to craft an optimized small molecule candidate.

But a different problem – finding new therapeutic drug targets – requires different types of input data. This includes genomic, transcriptomic, and epigenomic sequence data from human tissue. What happens when this type of training data is unavailable? That’s what we’re solving for at Verge. We first fill a fundamental gap by generating the right kind of training data, and then use ML classification to ask and answer the question, “Is this a good target or a bad target?”
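Under the hood, that “good target or bad target” question is a standard binary classification problem. As a minimal sketch only – the features, numbers, and model here are hypothetical, not Verge’s actual approach – a classifier trained on omics-derived features of previously validated and failed targets might look like a simple logistic regression:

```python
import math

# Hypothetical per-gene feature vectors, invented for illustration.
# Columns: [differential expression in patient tissue, genetic association
# score, pathway centrality]. Label 1 = validated target, 0 = failed target.
TRAIN = [
    ([2.1, 0.9, 0.8], 1),
    ([1.8, 0.7, 0.9], 1),
    ([1.5, 0.8, 0.6], 1),
    ([0.2, 0.1, 0.3], 0),
    ([0.4, 0.2, 0.1], 0),
    ([0.3, 0.3, 0.2], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Fit logistic regression weights by plain stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(TRAIN)
# Score a new candidate gene whose omics profile resembles the validated ones.
candidate = [1.9, 0.8, 0.7]
p = sigmoid(sum(wi * xi for wi, xi in zip(w, candidate)) + b)
print(f"P(good target) = {p:.2f}")
```

The point of the sketch is the dependency it makes explicit: the classifier is only as good as the labeled, disease-relevant feature data fed into it, which is exactly the gap described above.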

Building a Bridge from Genetic Drivers to Disease Symptoms

Take amyotrophic lateral sclerosis (ALS) as an example. At least 56 genes drive the development of ALS. Looking at one of those genes in isolation will tell you something about certain people with ALS, but nothing about the shared mechanisms that impact ALS in all patients. This genetic association data, or GWAS data, alone is insufficient to find treatments that are widely applicable to broad ALS populations. That theme repeats itself for other complex diseases we’ve evaluated, including neurodegeneration, neuropsychiatry, and peripheral inflammation.

The existing drug therapies for ALS treat symptoms of the disease, rather than the underlying causes. If a generative AI approach were applied to ALS, it could likely predict more symptom-modifying treatments, but it would fail to identify fundamentally new disease-modifying treatments. Although AI can be excellent at pattern-matching to create additional examples of a thing, it can struggle to create the first example of a thing.

This is precisely the problem the field of biotech faces for a wide range of diseases with no effective drug treatments. We don’t know what causes the disease, and haven’t collected the right kind of underlying data to even begin to lead us to the right answers.

Our approach in ALS is to use layers of “Omics” data – sourced from human, not animal, tissue – to fill gaps in available training data. This enables us to discover molecular mechanisms that cause ALS. When these human omics data form the input for a training set, the output is insight into disease-modifying therapies for what we believe will be a wide range of ALS patients. Using this approach, we build a bridge from diverse genetic drivers to shared disease symptoms; from genotype to phenotype. For Verge, this approach has been pivotal in identifying a new target for ALS and starting clinical trials with a small molecule drug candidate against that target in just 4.5 years.

Back to the Value Chain

AI could affect the entire biopharmaceutical and healthcare value chains, but studies like this one have shown that “a striking contrast” has run through R&D in the last 60 years. The authors write that while “huge scientific and technological gains” should have improved R&D efficiency, “inflation-adjusted industrial R&D costs per novel drug increased nearly 100-fold between 1950 and 2010.” Worse, “drugs are more likely to fail in clinical development today than in the 1970s.”

AI today is being used to test more drugs faster, but it hasn’t fundamentally changed the probability of success. The biggest driver of rising R&D costs is the cost of failure. While using AI to optimize drug design is appealing, it won’t mean much until it can better predict the effectiveness of targets or drugs in humans. Today’s disease models (cells and animal models) are not great predictors of whether drugs will work, so increases in efficiency with these models just produce larger quantities of poor-quality data. When models are poor, the outcomes will be, too. As the old saying goes: garbage in, garbage out.

Concluding Thoughts

No single type of training data will solve the complexities of discovering and developing new medicines. It will take multiple data types. But a relentless focus on finding the best types of data for the scientific problem, and generating lots of that data in a high-quality manner, will be what truly paves the way for AI to fulfill its potential in drug discovery.

4
Feb
2023

Grand Défi Ou Goulot D’étranglement Ultime: A French Pharma Tackles Data Science

David Shaywitz

Most biopharma companies have started down the path of digital transformation – a fundamental overhaul of everything they do for the digital age.

It’s not clear yet that anyone has arrived at the desired destination.

Even so, there have been some early wins, generally related to operations, as the CEOs of both Novartis and Lilly have described. Arguably, the most significant R&D success has been the organizational alignment and focus afforded by accurate, up-to-date digital dashboards, reflecting, for example, the status of the COVID clinical trials that Pfizer was running, as discussed here.

Behind the scenes, many biopharma R&D organizations have been exceptionally busy trying to apply emerging digital and data technologies to improve every aspect of how impactful new medicines are discovered, developed, and delivered. This strategic focus – and my current job description – is an industry preoccupation.

R&D represents such a vast opportunity space for emerging digital and data technologies that it can be difficult to keep track of all the activity across this expansive frontier. But a recent podcast delivers. A January 2023 episode of BIOS, hosted by Chas Pulido and Chris Ghadban of Alix Ventures and Brian Fiske of Mythic Therapeutics, features Sanofi’s CSO and head of research, Frank Nestle. He provides a comprehensive, fairly representative introduction to the many ways biopharmas are approaching digital and data. He also shares insights into key underlying challenges (spoiler alert: data wrangling).

Frank Nestle, chief scientific officer, Sanofi

Nestle is a physician-scientist and immunologist by training; he has been with Sanofi since 2016, and in his current role for about two years.

Below, I discuss the key points Nestle makes about digital and data across R&D, and then offer additional perspective on these opportunities and challenges.

Vision: The Great Convergence

Nestle envisions that we’re heading towards a “great convergence between life sciences, engineering, and data science.” He adds that the “classical scientific foundations of physics, chemistry, and biology” have each “had their heyday.” Now, he says, “it’s data sciences and A.I.”

AI/ML Impact on Research: Optimizing the Assembly Line

Early drug development, Nestle argues, can be understood as an assembly line, where a new molecule is designed and then serially optimized. At each stage, “we are optimizing drug-like properties, like absorption, biodistribution in the body,” he explains. Historically, decisions along the way were made by people – often with extensive experience — sitting around a table and reviewing the data. Now, Nestle is trying to collect the rich data associated with each step in a more systematic way, so that A.I. can contribute. 

At the moment, Nestle says, the focus is on using data science to optimize each individual step, but allows that eventually, a “grand model” might be possible. 

Nestle notes that both the early focus and the early successes involve small molecules. For example, the number of potential molecules that must be synthesized and evaluated in the course of making a potential small molecule drug has been reduced, he says, from 5,000 to “several hundred.” He asserts Sanofi remains interested in applying A.I. to the optimization of biologics. The challenge is “complexity.” Nevertheless, he suggests that using A.I.-based approaches to optimize biologics holds “at least as much promise, if not more promise, than in the small molecule space.”

To accelerate its efforts in the early drug discovery and design space, Nestle says, Sanofi has partnered with a number of tech bio companies including Exscientia, Atomwise, and Insilico Medicine.

Translational Research: Organizing Multimodal Data

The process of understanding the molecular basis of disease, Nestle says, begins with “molecular disease maps.” The best description of these maps that I found was actually from ChatGPT (a technology I recently discussed), which reports that the term:

“refers to a visual representation or diagram of the molecular interactions and processes that are associated with a particular disease. This map can include information about the genes, proteins, and signaling pathways involved in the disease, as well as how these elements interact with each other. The goal of creating a molecular disease map is to gain a better understanding of the underlying causes of a disease and to identify potential targets for therapeutic intervention. By mapping the molecular interactions involved in a disease, researchers can gain insights into the complex biological mechanisms that drive disease progression and develop more effective treatments.”

According to Nestle, “these molecular disease maps are becoming more complex and more rich in data sets by the day.” He continues, “It started with mainly genetic datasets, but now we have expression data sets at every single level from RNA to proteins to metabolites. And we look at these molecular disease maps actually at a single-cell level.” 

The upshot, he says, is that these “provide us with an incredible space of data to interrogate.”

The algorithms used to analyze these data are often simple cluster analyses, Nestle says, but the goal is always to “reduce dimensionality” and find a “signal in these sometimes very noisy datasets.”

These molecular disease maps not only inform biomarker identification, but also assist with the identification of patient populations who might be particularly well (or poorly) suited for specific medications. 

Also contributing to the translational work, according to Nestle: a partnership with the French-American company Owkin. He cites their expertise in both federated machine learning (focused on clinical data associated with various medical centers) and digital pathology. 

Emerging Trial Technologies and Patient-Centricity

Nestle describes digital biomarkers as “absolutely ready for prime time.” He cites the use of actigraphy (see this helpful explainer from Koneksa) in the assessment of Parkinson’s Disease as a promising example in an area – neurology – where such biomarkers are critically needed because “trials just take too long.” He also mentions an approach under development, pioneered by MIT professor Dina Katabi, that repurposes a typical wireless router to monitor activities such as itching and scratching in some skin conditions.

Sanofi, like all biopharmas, is interested in decentralized trials; Nestle highlights a partnership (initiated pre-pandemic) with the company Science37. He also sees a “clear future not only for (the use of technology) in patient recruitment but also remote patient monitoring.” The adoption of technology to enhance trial recruitment and patient monitoring, he says, has been accelerated, dramatically and irreversibly, by the pandemic.

Nestle also emphasizes the role and importance of real world data (RWD) as a tool to better understand patients and their journeys. Insights from RWD can be used to improve “study feasibility or sample size optimization or endpoint modeling,” he says, and points to a “journey mapper” Sanofi has used to integrate and interpret RWD. This approach helped identify additional indications for Sanofi and Regeneron’s IL-4/IL-13 inhibitor dupilumab (Dupixent). That work has translated into benefits for a broad range of patients — and more revenue for the company.

Finally, Nestle highlights internal work on “integrated platform data solutions.” That sounds like efforts focused on supporting a drug after it’s launched through the provision of enabling technologies connecting patients, physicians, and data.  

Limitations: Analyse Sexy, Données Difficiles

Perhaps Nestle’s most important comments concern the limitations of advancing A.I. and other emerging technologies. And the most difficult challenge – “the ultimate bottleneck,” he asserts – is “the data.”

He continues:

“Right now, data often exists in silos, they’re fraught with missing values – those zeros, as we call them — they’re not labeled correctly, they’re difficult to find. And that’s probably one of the biggest hurdles … that whole effort of generating, aggregating, normalizing, processing the data sometimes outweighs the actual analysis effort.”

He points out that “building foundations” – required for thoughtful data management – “is not necessarily a KPI [key performance indicator]” for large pharmas (who tend to be more focused on near-term measurements of performance). Hence you can only accomplish this, he says, with strong strategic support from “very senior leaders.” (This support may prove both more elusive and more essential in the context of Sanofi’s disappointing 2023 outlook – see here.)

A second problem Nestle points out is the set of well-intentioned, country-specific regulations governing data protection. These policies tend to be quite fragmented across nations and healthcare systems. That impedes the flow of data, and complicates opportunities to learn from the aggregated experiences of patients. The federated approach used by Owkin represents one way of managing some of these challenges.

Additional Reflections

Nestle offers a comprehensive and generally upbeat assessment of the opportunities before us in the application of emerging digital and data technology to R&D. 

Additional opportunities I’m particularly excited about include imagining what may be possible (more accurately, anticipating what seems likely to be possible) through current and future large language models like GPT-3 and (soon?) GPT-4. One example: effortlessly matching clinical trial protocols to the patient populations already present at certain medical centers. 

There are additional significant challenges that are easy to lose sight of, particularly in the context of such a compelling vision. For instance, I’m struck by the difficulty of timely digital biomarker development and validation. In a regulatory environment where teams struggle to validate even electronic versions of well-established paper scales, validating wearable-derived parameters is a substantial challenge; doing so while simultaneously developing the molecule the digital tool is meant to assess is exceptionally, often prohibitively, ambitious. The many levels of complexity, and the relatively constricted timelines, can be overwhelming.

On the clinical trial front, the universal embrace of decentralized trials – an obviously patient-centric concept endorsed and pursued by nearly everyone – belies how hard these trials are to pull off at all, let alone efficiently and reliably with available technology platforms (PR promises aside). The complexity and expense of executing meaningfully decentralized trials (versus, say, conducting a single check-in remotely and calling it a win) is one of the elephants in the room: rarely discussed in polite company, but a preoccupation for many of us who believe deeply in the concept of decentralized clinical trials and are eager to see the promise fulfilled.

Also not to be underestimated: the challenge of organizing multimodal data on a platform where the data can be accessed and analyzed. I appreciate the value of giving names (like “digital molecular map” or “journey map”) to important problems and significant data missions. Ultimately, though, success depends on the quality of the underlying data and the utility of the platform on which these data are housed and analyzed. This is arguably yet another area where vendor slideware seems far ahead of actual user experience.

Some of the most significant challenges digital and data efforts face within R&D are, and remain, organizational. Traditional drug developers – those in the trenches, doing the work – are still trying to figure out what to make of data science and data scientists in what remains a largely conventional drug development world. This challenge is compounded by the massive gap between technology hype and contemporary reality.

On the other hand, when there is compelling, readily implementable technology, it’s enthusiastically adopted. Every structural biology group, for instance, now routinely uses AlphaFold, the program that predicts a protein’s three-dimensional structure from its underlying amino acid sequence.

An area of real opportunity (and acknowledging my own bias) is translational medicine, where practitioners struggle to parse actionable insights from a motley collection of multi-modal data. (The need is particularly great given that the industry’s most costly problem — see here — is the absence of good translational models.) The concept of integrating diverse data to generate biological and clinical insights is universally celebrated, of course, but the data wrangling challenges are exceptional. Perhaps because of this, translational medicine arguably hasn’t quite lived up to its potential. As a discipline, it offers particular promise to thoughtful data scientists, with profound opportunity for outsized impact.

Bottom Line

R&D organizations recognize that emerging digital and data technologies represent important, perhaps essential, enabling tools to advance their mission. As Sanofi’s Frank Nestle explains, digital and data technologies are increasingly deployed across a range of R&D activities. So far, our reach far exceeds our grasp, but there’s been real progress, and a good chance of more to come. Our greatest challenge is navigating not the sexy analytics and data visualizations that everyone covets, but rather the far less glamorous work of establishing the underlying data flows upon which everything depends. This is a lesson familiar to accomplished data scientists like Recursion’s Imran Haque and Verily’s Amy Abernethy, among others – see here.

The data challenges are amplified in large biopharmas because these companies:

  • operate internationally, necessitating adherence to a wide array of data restrictions;
  • are the result of decades of mergers and acquisitions, further complicating data management;
  • inevitably involve complex organizational politics that must be navigated, as Stanford’s Jeffrey Pfeffer has compellingly described.

Data science continues to hold extraordinary promise for biopharma; translational medicine represents a particularly compelling opportunity. Our collective challenge is figuring out how to work through the considerable data wrangling challenges and deliver palpable progress.