31 Jul 2023

Lessons For Biopharma from a Healthcare AI Pioneer

David Shaywitz

As drug developers consider how to leverage AI and other emerging digital and data technologies, they look to related businesses, such as healthcare systems, for lessons and insights.

We would be hard-pressed to find a better guide to AI in healthcare than Ziad Obermeyer, an emergency room physician and health science researcher at the University of California-Berkeley. His research focuses on decision-making in medicine and on the equitable use of AI in healthcare. 

Obermeyer was recently interviewed on the NEJM-AI podcast by co-hosts Andrew Beam and Raj Manrai (both faculty in the Department of Biomedical Informatics at Harvard).  The entire conversation — reflective, nuanced, and chock-full of insights — should be required listening for anyone interested in potential applications of AI in biopharma.  The most relevant highlights for biotech are summarized below.

AI in Medicine: The Opportunity

Obermeyer didn’t set out to become a doctor. He initially studied history and philosophy, then did a stint as a management consultant before eventually applying to med school. He hit his stride once he began his residency in emergency medicine.

Ziad Obermeyer, associate professor and Blue Cross of California Distinguished Professor at the Berkeley School of Public Health

“I really, really liked being a doctor,” he says, “and I think there’s something about that exposure to the real world and the problems of patients that I think it’s shaped the problems that I work on in my research as well.”

Obermeyer began to see medicine as “a series of prediction problems,” and saw artificial intelligence (specifically, machine learning) as a tool that could help make doctors better by assisting them with challenges like establishing a diagnosis, assessing risk, or providing an accurate prognosis.

If you walk into a doctor’s office today, he readily acknowledges, you’re not overwhelmed by a sense of futuristic technology as you fill out paperwork and listen to the fax machine hum. 

However, he notes, AI is already widely used in medicine – it’s just operating at the back end, behind the scenes.  “On the population health management side, on a lot of other operational sides, like clinic bookings, things that have a direct impact on health, these tools are already in very, very wide use.”

Medicine today is still quite artisanal, guided by rules of thumb and local traditions, Obermeyer says. Much of the problem, he suggests, is that it’s hard for doctors to wrap their heads around the volume and variety of healthcare data, which are “high-dimensional” and “super complicated.” To make the best possible predictions given the number of variables, he argues, requires assistance through approaches such as machine learning.

The question isn’t how we think about AI plus medicine; rather, he says, “that is medicine.  That is the thing that medicine will be as a science.” This perspective is shared by others in the field including Harvard’s Zak Kohane, who often asserts “medicine is at its core an information- and knowledge-processing discipline,” and progress requires “tools and methods in data science.”

Casting his eye towards the future of AI in medicine, Obermeyer can envision both bear and bull scenarios.

His fear is that AI tools, in addition to harboring biases (see below), will be used for “local optimization of a system that sucks and that isn’t proactive, that’s very oriented towards billing and coding.” He can envision “a very unappealing path where we just get a hyper-optimized version of our current [suboptimal] system.”

“There is a certain lack of ambition in how people are applying AI today,” he said.

More hopefully, he can imagine a future where AI helps solve some of healthcare’s most vexing problems. One opportunity area he sees is addressing conspicuous “misallocation of resources” – essentially, improving our ability to provide the right treatment for the right patient at the right time.

For example, Obermeyer points out that many patients die of sudden cardiac arrest, while at the same time, the majority of defibrillators implanted to prevent sudden cardiac deaths are never triggered.  It would be far better medicine, he observes, if more defibrillators were implanted in the patients who would ultimately need them.

He also envisions how AI might enable new discoveries around the pathophysiology of disease by linking biological understanding, biomarkers, and outcomes.  A squiggle on an ECG isn’t just an image that an AI can recognize, like a cat on the internet.  “We actually know a lot about how the heart produces the ECG,” he explains.  “We know what part of the heart leads to what part of the wave form. We have simulation models of the heart that we can get to produce waveforms.” 

Consequently, he views the idea of “tying together that pipeline of biological understanding of the heart and how the heart generates data,” and connecting it to data about patient outcomes, as “super-promising,” and suggests it may eventually lead to new drug discoveries.  “There are a lot of things you can do once you get the data talking to the biology,” he says.

AI in Medicine: Tactical Considerations

The exceptional promise of applying AI in medicine seemed to be matched only by the challenge of implementing it. 

Obermeyer described hurdles in four key areas: data, talent, bias, and execution.

Data.  AI depends on data as its foundation.  This can be a particular problem in healthcare, Obermeyer says, noting that “getting the data that you need to do your research is a huge, huge preoccupation of any researcher in this area.”  The problem, he continues, “is that the data are essentially locked up inside of the health systems that produced the data. And it can be really perverse… it’s Byzantine and it’s very frustrating, and I think it’s really holding back this space.”

Obermeyer established an open-science platform (Nightingale) to make it easier for researchers to get access to datasets from healthcare systems.  One example: the team digitized breast cancer biopsy slides that “were literally collecting dust on a shelf in the basement” of a hospital and linked these data to EHR information and cancer registry data.

Getting started wasn’t easy. He approached 200 healthcare systems, he said, and only five agreed to participate: several large non-academic health systems and a few small county hospital systems.

Obermeyer has also set up a for-profit company, Dandelion Health, that aspires to serve as a trusted data broker, making it easier for healthcare AI tool developers to focus on their creative applications rather than wrestling for access to the data in the first place. “There are so many insights and products that could directly benefit patients that are not getting developed today because it’s so hard to access those data,” he says.

Talent. Obermeyer sees healthcare systems as operating at a disadvantage in the digital and data world. “Hospitals can’t hire all the computer scientists that they would need to do the necessary data science,” he says, “and they’re not going to win the war for talent against Google or Facebook or even just computer science departments of different universities.”

Obermeyer also doesn’t feel that it’s feasible to pair a healthcare expert and an AI expert; he believes it far better to have a “single brain,” even though he acknowledges this “seems ridiculously inefficient” because of the time and effort required to gain this kind of medical and data science expertise. (See here for a contrasting perspective from Dr. Amy Abernethy, championing the collaboration approach.)

The good news, though, is that Obermeyer shares the optimism of venture capitalist Bill Gurley that there’s a huge amount of useful, free information available online, and motivated individuals can find a lot of the training they need; this seems to be how Obermeyer himself became proficient in artificial intelligence.

Obermeyer suggests two conceptual paths for healthcare experts interested in mastering AI.  One, he says, starts with statistics, since he (somewhat controversially) regards AI as “an applied version of statistics with real datasets.” In his view, there’s “no substitute for learning the basic statistical stuff. And I think as a starting point, that is an amazing place to start to get a handle on thinking about how AI works, where to apply it, where it can go wrong.”

The second route into AI, Obermeyer says, and the one he took, begins with the microeconomics “toolkit,” which he argues was designed “for dealing with data that’s produced by humans and is messy and error prone and driven by incentives. That seems a lot like medicine to me.”

Obermeyer sees the ultimate goal of data science training as learning how to formulate problems effectively – “how to take an abstract question and then think about what is the data frame that would answer this question.”

Obermeyer also points to how helpful AI itself can be to trainees. ChatGPT is particularly helpful in writing code, he says, and approvingly cites AI expert Andrej Karpathy’s quip, “The hottest new programming language is English.”

Bias. Obermeyer’s research is focused on bias and AI; he seeks to root out hidden bias, as well as to use AI to reduce bias. 

Obermeyer is especially well known for a 2019 Science paper that identified an unexpected bias in a population health algorithm.  The tool he studied looked at health data from a population and tried to predict which patients were most likely to get sick in the upcoming year, so they could receive extra attention, pre-emptively, and thus stay healthier. 

When Obermeyer’s team looked at how this worked in practice, they found that Black patients predicted to be at the same health risk as White patients were far more likely to get sick.

As the authors explain, “The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients.”  In other words, by using health care costs as a proxy for health care needs – a common assumption of convenience — the algorithm developers inadvertently overlooked, and ultimately propagated, an underlying bias.
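The mechanism can be illustrated with a toy simulation (hypothetical numbers, not the study’s actual data or model): if two groups have identical underlying illness, but one group’s observed costs run systematically lower because of unequal access to care, then ranking patients by cost will under-select that group for extra attention.

```python
import random

random.seed(0)

# Hypothetical data: two groups with identical underlying illness,
# but Group B's *costs* run lower because of unequal access to care.
def simulate(n=10000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        illness = random.gauss(50, 10)           # same distribution for both groups
        access = 1.0 if group == "A" else 0.6    # unequal access to care
        cost = illness * access + random.gauss(0, 2)
        rows.append((group, illness, cost))
    return rows

rows = simulate()

# A cost-based "algorithm": select the top 20% highest-cost patients
# for extra preventive attention.
rows.sort(key=lambda r: r[2], reverse=True)
selected = rows[: len(rows) // 5]

share_b_overall = sum(r[0] == "B" for r in rows) / len(rows)
share_b_selected = sum(r[0] == "B" for r in selected) / len(selected)
print(f"Group B share of population:            {share_b_overall:.2f}")
print(f"Group B share of selected (by cost):    {share_b_selected:.2f}")
# Group B is drastically under-selected despite equal illness: the label
# (cost) encodes the access gap, and ranking on it propagates the bias.
```

The point of the sketch is that nothing in the model itself is “biased” — the bias enters entirely through the choice of label, which is why it went unnoticed until the predictions were compared against actual illness.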

David Cutler, Otto Eckstein Professor of Applied Economics, Harvard University

Obermeyer has also explored how AI might be used to reduce healthcare disparities.  He explains that the project was inspired by a talk from a friend and colleague, Harvard healthcare economist David Cutler, on the consistent observation that “black patients have more pain than white patients…even when you control for severity of disease.”

For example, if you consider patients with equally severe knee arthritis, based on standard X-ray scoring, Black patients on average will report more pain than White patients. Cutler attributed this gap, Obermeyer says, to “stuff that’s going on outside [the] knee” – psychosocial stressors, for instance. But Obermeyer thought the issue was something in the knee, and together they decided to study the problem.

Obermeyer’s team trained a deep learning algorithm to predict patients’ pain levels – rather than a radiologist’s arthritis score – from the X-rays.  “This approach,” the authors report, “dramatically reduces unexplained racial disparities in pain.”

According to Obermeyer, “the algorithm is doing a much, much better job of explaining pain overall, but it’s doing a particularly good job of explaining the particular pain that radiologists miss and that black patients report, but that can be traced back to some pixels in the knee.”

Execution.  Motivated by both his sense of purpose and innate curiosity, Obermeyer was clearly frustrated by the slow pace of some of the research in academia, in contrast to the urgency he noticed from colleagues in industry.

“One of the things that I’ve really come to appreciate about the private sector and basically my new non-academic friends and acquaintances,” he said, “is, boy, do they get [stuff] done.”

“They don’t have projects like I have that have gone on for eight years. If it goes on for eight days, it’s like, what’s going on? What’s taking so long? So there’s an impatience and a raw competence that I’ve been trying to learn from that world.”

Bottom Line and Implications for Biopharma

Obermeyer’s experience can’t help but resonate with drug developers.  Ours, too, is a business focused on “a series of prediction problems.” Our work tends to leverage digital and data technology far less than many other industries, yet (as I’ve discussed) there are pockets (such as manufacturing and supply chain management) where there is a remarkable level of sophistication.  There would seem to be a profound opportunity for drug developers to make better use of multi-dimensional data.

There is already a strong focus on applying emerging technology to improve near-term efficiencies, and an earnest hope that these technologies can also be used to identify and elucidate profound scientific opportunities to improve human health.  Access to high-quality data remains a crippling problem for industry data scientists focused on R&D.  Great talent is always in demand, and upskilling employees is an industry priority, while figuring out how to most effectively integrate talented drug developers with skilled data scientists remains a work in progress.

Bias is, of course, an area of exceptional concern and focus, and the notion of using technology to promote equity is particularly appealing.  Finally, the ability of industry to execute when inspired reminds us of what we can achieve, while the relatively limited impact of AI and data science in R&D across the industry to date, particularly when contrasted with the outsized potential, reminds us of how far we still have to go.

24 Jul 2023

Betting on Bold and Brave Ideas for Cancer: Yung Lie on The Long Run

Today’s guest on The Long Run is Yung Lie.

Yung Lie, president and CEO, Damon Runyon Cancer Research Foundation

Yung is the president and CEO of the Damon Runyon Cancer Research Foundation. The New York-based foundation supports some of the best young scientists around the US and gives them funds to pursue their bold and brave ideas.

To give just one example, it bet on cancer immunotherapy research when it was considered a fringe concept, years before it became a mainstay of everyday treatment.

Over its more than 75-year history, Damon Runyon has invested more than $430 million in almost 4,000 scientists. Thirteen have gone on to win the Nobel Prize, and 97 have been elected by their peers to the National Academy of Sciences.

Yung is a scientist by training herself and was a recipient of one of those prestigious Damon Runyon Fellowships when she was a postdoc. She eventually joined the organization full-time and worked her way up, becoming president and CEO in 2018.

I am particularly interested in Damon Runyon, as I have started doing volunteer work for the organization this summer. I’m recruiting a team of biotech executives and investors for the Timmerman Traverse for Damon Runyon in February 2024 on Mt. Kilimanjaro. Our goal is to raise $1 million.

I’m committing because I believe in the organization’s mission and am impressed with its ability to execute on a national scale. You’ll hear more about this expedition in the months ahead on Timmerman Report, but if you are interested in joining the Kilimanjaro team, or sponsoring the team, email me at luke@timmermanreport.com.

In this conversation, Yung talks about the philosophy of the organization, how it got started, its accomplishments, and a few challenges it sees for young scientists.

And now for a word from the sponsor of The Long Run

 

Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with them at

www.occam-global.com/longrun

Now, please join me and Yung Lie on The Long Run.

12 Jul 2023

Timmerman Traverse for Life Science Cares Hits $1M Goal to Fight Poverty

I’m delighted to share some good news.

The Timmerman Traverse for Life Science Cares has hit its goal in 2023. Together, we have raised more than $1 million to fight poverty in five biotech hubs around the US.

The funds will help fulfill basic human needs like food and shelter. They will also go toward education and job training to help people get on a path to fulfill their dreams.

Summit of Mt. Washington, Timmerman Traverse for Life Science Cares 2022

I’d like to thank the 20 biotech executives and investors who committed to this cause early in 2023. They are training to hike the Presidential Traverse in New Hampshire in August.

I also want to thank our 50 corporate sponsors. You can see the list at lifesciencecares.org. A shout out goes to top sponsors HSBC and Fenwick & West, and to Jeb and Sonia Keiper for an exceptionally generous donation.

This is a special milestone.

Staying fit. Making friends. Enjoying nature. Giving back. Impact in our communities.

That’s what these campaigns are all about.

Want to be a part of it? Email me at luke@timmermanreport.com.

DONATE HERE

10 Jul 2023

Detecting Cancer Early When It’s Most Treatable: Kevin Conroy on The Long Run

Today’s guest on The Long Run is Kevin Conroy.

Kevin is the chairman and CEO of Madison, Wis.-based Exact Sciences.

Kevin Conroy, chairman and CEO, Exact Sciences

Exact Sciences has grown over the past decade into a success story for cancer screening and diagnosis. It’s best known for marketing the noninvasive Cologuard test that screens people for colorectal cancer.

It also markets the Oncotype DX test that’s used to predict the likelihood that a patient with breast cancer will have a recurrence, and whether a preventive round of chemotherapy is likely to be beneficial. Exact is also developing a blood-based screening test that it hopes will be able to detect early signs of many, many types of cancer that aren’t routinely detected until the disease has already caused a lot of damage.

The Cologuard test has now been run 12 million times.

Kevin has a fascinating personal story, starting with the environment where he grew up — Flint, Michigan. He joined Exact Sciences as CEO in 2009 when the company was on the ropes. Kevin and his colleagues set audacious goals, and persevered to build a company that now has a market value of more than $16 billion.

In this conversation, Kevin shares this company story, along with some of his insights into building a company in the Upper Midwest, the importance of partnerships, and where cancer screening and diagnosis is heading.

And now for a word from the sponsor of The Long Run.

Tired of spending hours searching for the exact research products and services you need? Scientist.com is here to help. Their award-winning digital platform makes it easy to find and purchase life science reagents, lab supplies and custom research services from thousands of global laboratories. Scientist.com helps you outsource everything but the genius!

Save time and money and focus on what really matters, your groundbreaking ideas. 

Learn more at

Scientist.com/LongRun

It’s summertime, which means it’s time to plan for the next biotech team expedition. The Timmerman Traverse for Damon Runyon Cancer Research Foundation is scheduled for Feb. 7-18, 2024. I’m assembling a team of biotech executives and investors to hike to the summit of Mt. Kilimanjaro. If you are up for this trip of a lifetime and want to be part of a team that raises $1 million to support bright young cancer researchers all over the US, send me a note: luke@timmermanreport.com

Now, please join me and Kevin Conroy on The Long Run.

10 Jul 2023

Lessons Learned from the Intense Back-and-Forth Over a $1B Acquisition

Ron Cooper, former CEO, Albireo; board member, Generation Bio

Since 2018, about 90 biotech companies have been acquired for more than $500 million. That’s only about 15 to 20 companies per year, a tiny fraction of the approximately 10,000 biotechnology companies around the world. 

I was privileged to be part of one of those transactions, the sale of Albireo Pharma to Ipsen in January for $952 million upfront, plus contingent value rights that could push the value to well over $1 billion.  

The acquisition will help more patients around the world gain faster access to the innovative medicines Albireo developed. It’s a win for those patients – mostly children with rare diseases – as well as their families, our investors, and the people at both companies.

That’s the triumphant narrative common in M&A press releases. But a lot of things happened behind the scenes to get there. Here are a few lessons I picked up along the way.

Brief Background

Albireo Pharma started as a spinoff from AstraZeneca in Sweden. When I joined the company in 2015, we were privately held with 10 employees, we had great science and an ambitious goal: to design breakthrough drugs for children and bring hope to families.

But like many other biotechs, the development cycle was ahead of the funding cycle. That’s one way of saying our product candidates were advancing rapidly in the clinic, gathering the evidence needed to succeed in the marketplace, but investors weren’t yet convinced to provide the support we needed.

Out of necessity, we decided to divest or shelve some of our R&D programs and put most of our remaining resources behind odevixibat for rare pediatric liver diseases.

In the early days in the mid-2010s, there were some weeks where we nearly ran out of cash. At times, some people questioned our unconventional strategies, such as seeking feedback on our Phase III trial from FDA and EMA before the Phase II study was completed, and conducting a single Phase III study with two different primary endpoints. And even though the biotech financial markets were generally trending upward most of these years, the markets always pressured us to be at our absolute best to secure every dollar of funding.

By 2022, we’d cleared many of the hurdles inherent to development-stage biotech. We’d grown to ~200 employees globally. We created four different medications for liver diseases. The first to receive regulatory clearance in the US and Europe was odevixibat (brand name Bylvay), a bile acid transport inhibitor. It was approved and marketed in the U.S. and Europe for a rare pediatric liver disease for which there was no other FDA-approved medical treatment.

That was a publicly visible triumph, but behind the scenes, 2022 was a tumultuous year. We were focused on launching Bylvay globally and advancing two new clinical stage assets when we were first approached to sell the company in February.

We rejected multiple initial offers, either because the price was too low or we didn’t think the timing was right for the company. We finally agreed to undergo a period of diligence and finalized a counterproposal. We thought this was a good match and were hopeful the deal would go through. But in May, the potential acquiror’s board decided to walk away.

It was a shock and big disappointment on many levels.

After this exhausting process which ultimately led nowhere, we decided to get back to focusing on what we do best — developing and delivering important medicines for liver disease that would increase value for Albireo shareholders.  

Surprisingly, it didn’t take long for multiple suitors to come calling again. We went through intensive due diligence and negotiations with three of these companies up until just before the JP Morgan Healthcare Conference in January 2023.

Our board ultimately determined Ipsen’s global R&D and commercial capabilities, together with the terms Ipsen provided, presented the best option for all stakeholders. We announced the merger agreement and offer on the opening morning of the JP Morgan event in San Francisco. The sale was formally completed on March 3, 2023.

It was a tough decision to sell the company. It meant breaking up a highly committed A-Team and changing the nature of the close ties we had developed among ourselves and with patients, families and their clinicians. But we knew that Ipsen could accelerate access to Bylvay and ensure that the potential of the three new product candidates would be maximized.

A Wild Ride

Biotech is not for the faint of heart. Ditto for commercializing medicines globally, taking a company public and navigating a complicated acquisition process. It takes extreme dedication and grit, as a company, to get through these challenges. Culture gets tested by these normal events in corporate life – a strong culture stays together, while weaker ones can come unglued.

We had a few things going for us that helped us through the hard times. We believed in the science. We believed in our team. And we believed in our unrelenting commitment to bring hope to the patients and families we served. 

Did we do everything right? Absolutely not. In the first potential deal, I engaged too many people too early and failed to adequately discern alignment with the acquiring firm’s philosophy. (More on that later.)  

Everyone makes mistakes, but I wanted our team to learn from ours and not to repeat the same ones over and over. When something was clearly working well, we sought to turn it into a standard procedure, a hallmark of our culture.

While no transaction is the same, here are seven strategies I’d repeat if I sold another company:

Lessons Learned from Albireo’s Sale
  • Balance Stakeholder Impact – Selling a company requires satisfying multiple constituencies who might, at times, have conflicting goals. An acquisition won’t work unless each stakeholder sees and understands the benefit. For Albireo, that meant taking into consideration the desires of patients, investors, employees and board members. At the right time in the process, I engaged with each stakeholder directly; listening, explaining, listening some more, and making course corrections along the way. Another stakeholder to consider, which often doesn’t get as much attention, is the local community. I wanted our transaction to benefit our local community of Boston, which supported our success in many ways. While it is difficult to benefit a local community within the context of a standard merger deal, I used the Life Science Cares Shares Program to donate a portion of my personal proceeds to outstanding local nonprofits that make Boston a better place to live and work. Thinking carefully about each constituency resulted in a more detailed gameplan for negotiations, and ensured strong support from each party who had a stake in the outcome. Result: A better gameplan + bolstered buy-in.

 

  • Seek and Heed Advice – We ensured our banking, legal and financial modeling advisors were top-notch, and engaged consultants for expertise in targeted areas. Their advice was invaluable every step of the way. Result: Better decisions, fewer mistakes.

 

  • Keep Your Circle Small – As the original suitor sized us up, and we considered a sale to them, I can see in hindsight that I involved too many players too early. This created a large distraction while we needed to focus on our patients and business. Our team members were focused on what was the right thing to do for Albireo, but they are human beings, and it was challenging not to think about the potential personal impact. When the first deal did not work out, there was a significant toll from the emotional rollercoaster. I learned this lesson the second time around, and kept a tighter circle on who was privy to information. When Ipsen and others approached us, I only informed a handful of key leaders. It was hard for these senior executives to manage a dual workload – doing their normal day jobs to advance the work of Albireo, while also doing all the required work to prepare for a potential acquisition in secret — but it allowed the majority of employees to focus on executing their personal deliverables. We kept our eye on the ball the second time around. Result: More patients got access to our important medicines, even as we navigated the sale.

 

  • Play Your Position – The initial small circle of internal people handling the competitive bidding process for Albireo included me, the CEO, and our C-level executives overseeing finance, legal, business and science. We agreed on our roles, both formal and informal. And we stuck to them. Result: We moved fast, collaborated well, and didn’t get in each other’s way.

 

  • Ensure Alignment – With the first potential transaction in the first quarter of 2022, I didn’t engage deeply with leaders high enough in the potential acquiror’s organization to discern alignment in business philosophy. But I learned from that mistake. In the second process, I got to know the top leaders at the bidders, including Ipsen, with a focus on connecting business beliefs and practices to help ensure a good fit. Result: A sale that benefits both organizations, patients, employees and investors.

 

  • Focus the Team – During the due diligence period, the potential acquirer will bombard your company with literally thousands of information requests, sometimes haphazardly with emails at all hours of the day and night. We protected the team by creating a process to manage incoming requests, and push back on unreasonable inquiries, timelines and duplication. Result: Minimized burnout and effective information flow.

 

  • Have Fun Along the Way – The biotech business is intense – especially when your team cares so much about patients. Long days, long weeks, long months. The added work required to navigate a sale only increases the pressure. In this context, I learned the added value of personal relationships, humor and even a bit of silliness. Examples: I sent each team member personal cards for birthdays and for seemingly small, but important, achievements. I regularly reached out to each diligence team member just to check in. And our town halls became more than just performance updates – we had crazy costumes, fun music videos and lots of shout-outs to bring a little levity to taxing times. Result: Extreme focus, loyalty and talent retention, despite the challenging work.

Grateful for the Experience

If you’re exploring or navigating a biotech sale, I hope these lessons learned can help. The journey isn’t easy, but when you find the right acquiring partner, it’s worth it. We know our drug will be able to reach the vast majority of the approximately 100,000 patients around the world who might benefit from it. Our shareholders, employees, and our community are sharing in the financial rewards.

I would not trade my Albireo experience for anything. It’s been a true gift to build an amazing team and work with wonderful parents, clinicians, investors and bankers to serve our patients. And with Ipsen’s acquisition, I’m confident that the work will continue. A large percentage of the Albireo employees plan to stay with Ipsen, while the ones who leave will have freedom to pursue other life interests and career opportunities on their own terms.

I look forward to my next leadership role and if I’m fortunate, I may get to apply these lessons learned again. In the meantime, I am taking a six-month “sabbatical” to enjoy my family, catch up on life and get ready for my next challenge.

27
Jun
2023

Becoming a Biotech CEO: Jodie Morrison on The Long Run

Today’s guest on The Long Run is Jodie Morrison.

Jodie is the acting CEO at Waltham, Mass.-based Q32 Bio. It’s a company developing treatments for autoimmune and inflammatory diseases. It has an antibody in development with Horizon Therapeutics aimed at IL-7 receptor alpha, in Phase II for the treatment of atopic dermatitis. It also has wholly-owned programs aimed at the complement system of the innate immune system, with the intent of making treatments that are tissue-targeted.

Jodie Morrison, acting CEO, Q32 Bio

She came to this position after a series of executive roles and board positions. Her first stint as a CEO, at Tokai Pharmaceuticals, didn’t end well. She dusted herself off and came back to play a role in back-to-back successful outcomes at Syntimmune, Keryx, and Cadent Therapeutics.

In this episode, we talk about how Jodie developed the confidence to lead from some of her early career experiences, how she thinks about hiring, and, at the end of the conversation, she provides some advice to young women seeking to grow and advance in the biotech industry.

And now for a word from the sponsor of The Long Run.

 

 

Occam Global is an international professional services firm focusing on executive recruitment, organizational development and board construction. The firm’s clientele emphasize intensely purposeful and broadly accomplished entrepreneurs and visionary investors in the Life Sciences. Occam Global augments such extraordinary and committed individuals in building high performing executive teams and assembling appropriate governance structures. Occam serves such opportune sectors as gene/cell therapy, neuroscience, gene editing, the intersection of AI and Machine Learning, and drug discovery and development.

Connect with Occam:

www.occam-global.com/longrun

Now, please join me and Jodie Morrison on The Long Run.

17
Jun
2023

Learning From History How to Think About the Technology of the Moment

David Shaywitz

Generative AI, the transformative technology of the moment, exploded onto the scene with the arrival in late 2022 of ChatGPT, an AI-powered chatbot developed by the company OpenAI. 

After only five days, a million users had tried the app; after two months: 100 million, the fastest growth ever seen for a consumer application. TikTok, the previous record holder, took nine months to reach 100 million users; Instagram had taken 2.5 years.

Optimists thrill to the potential AI offers humanity (“Why AI Will Save The World”), while doomers catastrophize (“The Only Way To Deal With The Threat From AI? Shut It Down”). Consultants and bankers offer frameworks and roadmaps and persuade anxious clients they are already behind. Just this week, McKinsey predicted that generative AI could add $4.4 trillion in value to the global economy. Morgan Stanley envisions a $6 trillion opportunity in AI as a whole, while Goldman Sachs says 7% of jobs in the U.S. could be replaced by AI.

The one thing everyone — from Ezra Klein at the New York Times to podcasters at the Harvard Business Review — seems to agree on is that generative AI “changes everything.”

But if we’ve learned anything from previous transformative technologies, it’s that at the outset, nobody has any real idea how these technologies will evolve, much less change the world.  When Edison invented the phonograph player, he thought it might be used to record wills. The internet arose from a government effort to enable decentralized communication in case of enemy attack.  

As we start to contemplate – and are thrust into – an uncertain future, we might take a moment to see what we can learn about technology from the past.

***

Yogi Berra, of course, observed that “it is difficult to make predictions, especially about the future,” and forecasts about the evolution of technology bear him out. 

In 1977, Ken Olsen, President of Digital Equipment Corporation (DEC), told attendees of the World Futures Conference in Boston that “there is no reason for any individual to have a computer in their home.” In 1980, the management consultants at McKinsey projected that by 2000, there might be 900,000 cell phone users in the U.S.; they were off by over 100-fold; the actual number was above 119 million. 

On the other hand, much-hyped technologies like 3D TV, Google Glass, and the Segway never really took off. For others, like cryptocurrency and virtual reality, the jury is still out. 

AI itself has been notoriously difficult to predict. For example, in 2016, AI expert Geoffrey Hinton declared:

“Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down, so doesn’t realize there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists because it’s going to get a lot more experience.  It might be 10 years, but we’ve got plenty of radiologists already.”

Writing five years after this prediction, in his book The New Goliaths (2022), Boston University economist Jim Bessen observes that “no radiology jobs have been lost” to AI, and in fact, “there’s a worldwide shortage of radiologists.”

James Bessen, Executive Director of the Technology & Policy Research Initiative, Boston University.

As Bessen notes, we tend to drastically overstate job losses due to new technology, especially in the near term. He calls this the “automation paradox,” and explains that new technologies (including AI) are “not so much replacing humans with machines as they are enhancing human labor, allowing workers to do more, provide better quality, and do new things.” 

Following the introduction of the ATM, the number of bank tellers employed actually increased, Bessen reports. Same for cashiers after the introduction of the bar code scanner, and for paralegals after the introduction of litigation-focused software products.

The explanation, Bessen argues, is that as workers become more productive, the cost of what they’re making tends to go down, which often unleashes greater consumer demand – at least up to a point. 

For instance, automation in textiles enabled customers to afford not just a single outfit, but an entire wardrobe. Consequently, from the mid-nineteenth century to the mid-twentieth century, “employment in the cotton textile industry grew alongside automation, even as automation was dramatically reshaping the industry,” Bessen writes. 

Yet after around 1940, automation continued to improve the efficiency of textile manufacturing, but consumer demand was largely sated; consequently, he says, employment in the U.S. cotton textile industry has decreased dramatically, from around 400,000 production workers in the 1940s to less than 20,000 today.

Innovation image created on DALL-E.

The point is that if historical precedent is a guide, the introduction of a new technology like generative AI will be accompanied by grave predictions of mass unemployment, as well as far more limited, but real, examples of job loss, as we’ve seen in recent reporting. In practice, generative AI is likely to alter far more jobs than it eliminates and will likely create entirely new categories of work.

For example, Children’s Hospital in Boston recently advertised for the role of “AI prompt engineer,” seeking a person skilled at effectively interacting with ChatGPT.

More generally, while it can be difficult to predict exactly how a new technology will evolve, we can learn from the trajectories previous technological revolutions have followed, as economist Carlota Perez classically described in her 2002 book, Technological Revolutions and Financial Capital.

Carlota Perez, Honorary Professor at the Institute for Innovation and Public Purpose (IIPP) at University College London.

Among Perez’s most important observations is how long it takes to realize “the full fruits of technological revolutions.” She notes that “two or three decades of turbulent adaptation and assimilation elapse from the moment when the set of new technologies, products, industries, and infrastructures make their first impact to the beginning of a ‘golden age’ or ‘era of good feeling’ based on them.” 

The Perez model describes two broad phases of technology revolutions: installation and deployment. 

The installation phase begins when a new technology “irrupts,” and the world tries to figure out what it means and what to do with it. She describes this as a time of “explosive growth and rapid innovation,” as well as what she calls “frenzy,” characterized by “flourishing of the new industries, technology systems, and infrastructures, with intensive investment and market growth.” There’s considerable high-risk investment into startups seeking to leverage the new technology; most of these companies fail, but some achieve outsized, durable success.

It isn’t until the deployment phase that the technology finally achieves wide adoption and use. This period is characterized by the continued growth of the technology, and “full expansion of innovation and market potential.” Ultimately, the technology enters the “maturity” stage, where the last bits of incremental improvement are extracted. 

As Perez explained to me, “A single technology, however powerful and versatile, is not a technological revolution.” While she describes AI as “an important revolutionary technology … likely to spawn a whole system of uses and innovations around it,” she’s not yet sure whether it will evolve into the sort of full-blown technology revolution she has previously described. 

One possibility, she says, is that AI initiates a new “major system” – AI and robotics – within an ongoing information and communication technology revolution.  

At this point, it seems plausible to imagine we’re early in the installation stage of AI (particularly generative AI), where there’s all sorts of exuberance, and an extraordinary amount of investing and startup activity. At the same time, we’re frenetically struggling to get our heads around this technology and figure out how to most effectively (and responsibly) use it.

This is normal. 

Technology, as I wrote in 2019, “rarely arrives on the scene fully formed—more often it is rough-hewn and finicky, offering attractive but elusive potential.”

As Bessen has pointed out, “invention is not implementation,” and it can take decades to work out how best to use something novel. “Major new technologies typically go through long periods of sequential innovation,” Bessen observes, adding, “Often the person who originally conceived a general invention idea is forgotten.”

The complex process associated with figuring out how to best utilize a new technology may account, at least in part, for what’s been termed the “productivity paradox” – the frequent failure of a new technology to impart significant productivity improvement. We think of this frequently in the context of digital technology; economist Robert Solow wryly observed in a 1987 New York Times book review that “You can see the computer age everywhere but in the productivity statistics.”

However, as Paul A. David, an economic historian at Stanford, noted in his classic 1990 paper, “The Dynamo and the Computer,” a remarkably similar gap was present a hundred years earlier, in the history of electrification. David writes that at the dawn of the 20th century, two decades after the invention of the incandescent light bulb (1879) and the installation of Edison central generating stations in New York and London (1881), there was very little economic productivity to show for it.

David goes on to demonstrate that the simple substitution of electric power for steam power in existing factories didn’t really improve productivity very much. Rather, it was the long subsequent process of iterative reimagination of factories, enabled by electricity, that allowed the potential of this emerging technology to be fully expressed.  

A similar point is made by Northwestern economic historian Robert Gordon in his 2016 treatise The Rise and Fall of American Growth. Describing the evolution of innovation in transportation, Gordon observes that “most of the benefits to individuals came not within a decade of the initial innovation, but over subsequent decades as subsidiary and complementary sub-inventions and incremental improvements became manifest.”

As Bessen documents in Learning by Doing (2015), using examples ranging from the power loom (where efficiency improved by a factor of twenty), to petroleum refinement, to the generation of energy from coal, remarkable improvements occurred during the often-lengthy process of implementation, as motivated users figured out how to do things better — “learning by doing.”

Eric von Hippel, professor, MIT Sloan School of Management

Many of these improvements (as I’ve noted) are driven by what Massachusetts Institute of Technology professor Eric von Hippel calls “field discovery,” involving frontline innovators motivated by a specific, practical problem they’re trying to solve.

Such innovative users—the sort of people who Judah Folkman had labeled “inquisitive physicians”—play a critical role in discovering and refining new products, including in medicine; a 2006 study led by von Hippel of new (off-label) applications for approved new molecular entities revealed that nearly 60% were originally discovered by practicing clinicians.

***

What does this history of innovation mean for the emerging technology of the moment, generative AI? 

First, we should take a deep breath, and recognize that we are in the earliest days of technology evolution, and nobody knows how it’s going to play out. Not the experts developing it, not the critics bemoaning it, not the consultants trying to sell work by stoking our collective anxiety around it.

Second, we should acknowledge that the full benefits of the technology will take some time to appear. Expectations of immediate productivity gains from simply plugging in AI seem naïve: the dynamo-for-steam substitution all over again. While there are clearly some immediate uses for generative AI, the more substantial benefits will likely require continued evolution of both technology and workflow processes.

Innovation image created on DALL-E.

Third, it’s unlikely that AI will replace most workers, but it will require many of us to change how we get our jobs done – an exciting opportunity for some, an unwelcome obligation for others.  AI will also create new categories of work, and introduce new challenges for governance, ethics, regulation, and privacy.

Fourth, and perhaps most importantly: As mind-blowing as generative AI is, the technology is not magic. It doesn’t descend from the heavens (or Silicon Valley), deus ex machina, with the ability to resolve sticky ethical challenges, untangle complex biological problems, and generally ease the woes of humanity. 

But while technology isn’t a magic answer, it has proved a historically valuable tool, driving profound improvements in the human condition, and enabling tremendous advances in science. The invention of the microscope, the telescope, and calculus all allowed us to better understand nature, and to develop more impactful solutions.

Technology changes the world in utterly unexpected and unpredictable ways. 

How exciting to live in this moment, and to have the opportunity — and responsibility — to observe and shape the evolution of a remarkable technology like generative AI. 

Yes, there are skeptics. I have a number of friends and colleagues who have decided to sit this one out, reflexively dismissing the technology because they’ve heard it hallucinates (it does), or because of privacy concerns (a real worry), or because they’re turned off by the relentless hype (I agree!).

But I would suggest that we owe it to ourselves to engage with this technology, familiarize ourselves, through practice, with its capabilities and limitations. 

We can be the lead users generative AI — like all powerful but immature transformative technologies — requires to evolve from promise to practice.

Recent Astounding HealthTech columns on Generative AI