23 Jan 2020

Incrementalism is the new Disruption, Trust is the New Black, and Positive Change (for now) at FDA: Takeaways from the 2020 Precision Medicine World Conference

David Shaywitz

I had the privilege of serving as emcee for the “Data Science and AI” track on the first day of this week’s Precision Medicine World Conference (PMWC) in Santa Clara, CA, as well as chairing a panel discussion on data mining and visualization. 

I came away with a sense of optimism, as well as a sense of what's still needed, organized around several key themes.

In Praise Of Incrementalism

In a day focused on technology, and featuring a number of startups, you might have expected to hear a lot about “disruption” and “disruptive innovation” – but I didn’t.  Instead, the watchword of the moment seems to be “incrementalism” – not in the dispirited sense of having minimal aspirations, but rather in the grounded (versus grandiose) sense of seeking to motivate buy-in from existing healthcare stakeholders by demonstrating a discrete and useful (if not super-sexy) benefit. 

Kaisa Helminen, the CEO of digital pathology company Aiforia Technologies (which I’ve written about here), emphasized the importance of first taking small steps, before attempting to make larger strides.  She amplified this point in a follow-up email:

“Labs should start with incremental steps in utilizing AI in digital pathology, e.g. starting with quality control (QC), workflow optimization or with a few applications that are painful for pathologists to count (e.g. counting mitosis) to get them used to the tech and to facilitate adoption.”

Similarly, Vineeta Agarwala, an impressive physician-scientist who recently joined Andreessen Horowitz from GV, and who was previously a product manager at Flatiron, emphatically and repeatedly stressed the importance of incrementalism, even in the context of AI.  For example, she noted that at Flatiron, which focused on deriving clinical trial-like data from EHR data (see here), a key use of AI at this tech-driven company was…to determine which patient charts were worth the time of manual data extraction!  It seems unsexy, but apparently it delivered immediate benefits in operational efficiency.


Grounded Health Tech Investors

A pleasant surprise at this conference was the number of VC firms represented that seemed genuinely interested in the nexus of tech and health and appeared to be approaching it in a grounded fashion, led by investors with relevant domain experience. Greg Yap from Menlo Ventures, and Vijay Pande and Agarwala from Andreessen Horowitz, particularly stood out.

Pande emphasized there’s “nothing magical about AI,” and acknowledged that developing new drugs is not a fast process, as even compounds designed with the help of AI require, in his words, “the usual stuff” such as a battery of preclinical assays and extensive clinical trials.

Similarly, Agarwala described AI as simply “technologies to better learn from data,” and emphasized that “progress is going to be incremental.” Yap was perhaps even more cautious about AI, worried that we seem to be “at the peak of the AI hype cycle.”

Many (but not all) of the VC firms gravitating towards the “AI and data science” opportunity in healthcare and biopharma seem to be tech firms (Menlo Ventures, Andreessen Horowitz, and DCVC stand out) that have added domain expertise on the healthcare side, rather than healthcare VCs that have added domain expertise on the tech side; one conspicuous exception, perhaps, is Jim Tananbaum’s Foresite Capital, a firm with deep healthcare roots that’s deliberately pursuing a technology dimension.

The Calcified Hairball Problem

The most dispiriting panel of the day, by far, was a discussion of interoperability led by Stan Huff of Intermountain, and featuring Michael Waters of the FDA and James Tcheng of Duke, describing (among other challenges) the excruciating ongoing effort required by the FDA SHIELD initiative to create a unifying schema for the representation of laboratory data.

Hurdles seemed to be everywhere, and the realized rewards appeared uncertain at best.  The problem seemed to me to reflect the “calcified hairball system of care” to which VC Esther Dyson has famously referred. Listening to the panel describe the extensive, painful work required by even the most basic efforts to extract meaningful information reinforced the sense that the existing system may be a virtually intractable mess; engaging with it seemed likely to result in a huge suck of time and money, with brutal political fights at every turn, and perhaps little ultimately to show for the effort – the little juice you extract may prove not to be worth the squeeze.

Who could blame investors like Pande, then, for emphasizing the value of startups that think from the outset about how to collect data that (in contrast) works well with AI, and that is designed from the ground up with that application in mind?  This seems to be the approach that prominent drug discovery startups like insitro (Andreessen Horowitz-backed) and Recursion are taking, for example.

While this doesn’t solve the problem of what to do with all the legacy data stuck in existing systems – which Tom Siebel, recall, describes as a (the?) competitive advantage of incumbent companies in an increasingly digital world – it feels like a contemporary example of what happened to factories after the arrival of electricity, as I described in this column last year. While most factories rapidly converted to electricity, established industries (due to sunk costs) were reluctant to extensively rework or reimagine their factories – they kept the design the same, and simply substituted electricity for steam power. The real beneficiaries were the emerging new industries, which had both the need and the opportunity to design workflows from the ground up, unencumbered by existing approaches. This led to the design of the modern factory.

Similar new opportunities – where entrepreneurs can freshly leverage the power of new technology while minimizing their dependence on legacy technology and its limitations – seem to represent the kind of investments that VCs like Pande are seeking out today.

Transparency and Trust

A thoughtful conversation between Atul Butte, a physician-scientist who oversees health data science for the entire University of California (UC) system (you can hear his Tech Tonics episode here), and Cora Han, UC Health’s newly-minted Chief Health Data Officer, explored why interactions between health systems and tech companies are now appearing so regularly in the news (see this WSJ, this WSJ, this WSJ, this FT, this JAMA commentary, and this JAMA commentary).

Health systems contracting with technology companies is hardly new or unusual, Butte noted, wryly adding that it seems like only when specific names are attached to the two (such as “Ascension and Google”) that this common type of relationship is suddenly portrayed as “sinister.” Han suggested that factors contributing to the apparently escalating concern include (a) the potential for staggering scale, and (b) the theoretical intersection of medical and consumer data, which “seems scary.” She emphasized the foundational importance of “trusting the entities with whom you interact.”


This connects with a related discussion of the role of transparency in increasing trust, a point several speakers emphasized. For example, Butte noted that if a company in stealth mode (meaning no information about it is publicly available) comes to him and asks to explore access to UC information, he tells them not to bother; if the company doesn’t even have a website and other basic information easily accessible, he’s not going to refer them to anyone in his organization.

Interestingly, two speakers on my panel – Helminen and Martin Stumpe (now SVP for data science at Tempus, and previously the founder and head of the cancer pathology initiative at Google) – emphasized the role data visualization can play in fostering trust in technologies, especially AI, that can often seem inscrutable.

At the same time, as Butte astutely suggested, there may be a bit of a double standard here in demanding this of technology since “physicians are also black box,” and can arrive at decisions of dubious quality via an uncertain and impenetrable process, as Atul Gawande and others have eloquently documented.

Regulation and Outlook

Michael Pellini, a VC at Section 32 (and former CEO of Foundation Medicine), expressed a strong sense of optimism regarding the near-term outlook both for the technology itself and for the approach to it he’s seen from regulators (more on this below). From a reimbursement perspective, he anticipated that the outlook for therapeutics is likely to get much worse (presumably a comment on the rising concerns around drug pricing), while diagnostics – where entrepreneurs have struggled for reimbursement for a long time, as Pellini presumably knows all too well – may see marked improvement in their future (presumably a comment on their increased ability to guide patients towards demonstrably better outcomes).


Similarly, Brook Byers (arguably the dean of life science VCs) effusively praised the commitment of the FDA to seek out improved technologies, citing two “heroes”: FDA Deputy Commissioner Amy Abernethy (see here, listen here for her Tech Tonics interview, and here on The Long Run) and FDA ophthalmology expert Malvina Eydelman.

His biggest worry, he said (a concern I share), is the sort of sentiment voiced in a recent NYT masthead editorial urging the FDA to “Slow down on drug and device approvals.”  The Times argued:

“The F.D.A. has made several compromises in recent years — such as accepting ‘real world’ or ‘surrogate’ evidence in lieu of traditional clinical trial data — that have enabled increasingly dubious medical products to seep into the marketplace. [New FDA Commissioner] Dr. Hahn ought to take a fresh look at some of these shifting standards and commit to abandoning the ones that don’t work. That will almost certainly mean that the approval process slows down — and that’s O.K.”

To be sure, regulators have an intrinsically difficult task: if they’re too strict, promising drugs take longer to reach patients (if the medicines reach patients, or are even developed, at all); if regulators are too permissive, then patients can be exposed to harmful products before the danger is recognized.  However, as appealing as it may be to lean into the adage “first do no harm” – as critics such as the NYT are wont to do, invoking this perversion of the precautionary principle as a justification for moving slowly – it’s critical to recognize the extensive harm that inaction can cause as well, as I’ve written here and elsewhere.  Regulators need to balance the totality of risk (including the harms of staunching innovation) and benefit; it’s an intrinsically difficult job given the inevitable uncertainty, and it requires nuance and customization – “precision regulation,” as I’ve called it.

What should be avoided, as Tierney and Baumeister argue in The Power of Bad (my WSJ review here), is encouraging regulators to stomp on the brakes reflexively, driven by an outsized fear of risk, as if informed by the credo, “never do anything for the first time.”

Ultimately, what matters most (as I’ve argued) is real-world performance; a randomized clinical trial, where feasible and ethical, is the ideal approach to demonstrate the potential benefit of an intervention, but the most important parameter is what happens to actual patients taking medicines after approval.  Much of the anxiety experienced by regulators reflects the challenges of gathering such data – once a medicine is released into the wild (even provisionally), it can be difficult to figure out whether it is working out as anticipated.

Here is an opportunity. An improved ability to comprehensively gather and continuously evaluate such data as part of routine care would not only improve patient care, but could also make regulatory approvals less fraught. Clearly, we are a long way from this, yet it’s where we ought to be headed – and the direction, I’m increasingly convinced, healthcare is (slowly) starting to go.
