30 Jan 2020

Coronavirus Fear Spreads, CVS Feels the Heat & Black Diamond’s Sizzling IPO

Luke Timmerman, founder & editor, Timmerman Report

Last week, I was dismissive of the coronavirus outbreak stories. The internet was teeming with what struck me as alarmism. I’m still not running for the hills. I’m still rolling my eyes at the speed and volume of misinformation about coronavirus on the web.

But more facts have since emerged to make this a more urgent and serious situation.

We now know that the virus not only spreads via human-to-human contact, but can be transmitted by asymptomatic people. Most cases appear to be mild, as with flu, but people are dying. That prompted the CDC on Jan. 27 to advise US travelers to avoid all non-essential travel to China. Within three days, by Jan. 30, the World Health Organization declared a global health emergency, something it rarely does. At last count, there were 7,800 confirmed cases, about 99 percent of them in China, and 170 deaths, all in China. With this kind of infectious profile, much like how flu spreads, the new coronavirus can move around the world in a hurry. Public health authorities around the world have been wrestling with how hard to crack down on travel, and how to set up effective quarantines. We still have only 5 confirmed cases in the U.S., however.

The only thing that can spread faster than a virus like this is misinformation on the web, especially on open platforms with no gatekeepers, like Twitter and Facebook. For those interested in absorbing facts as they become available, keeping them in context, and thinking rationally about how to respond, I recommend the following sources and some particular articles from the past week. To companies with technology capable of rapid diagnosis, especially at point of care, and to vaccine developers with a real shot to respond quickly (mRNA companies especially), Godspeed.

This Week in Drug Pricing

You know the political pressure of a Medicare-for-All movement has been ratcheted up to excruciating levels, heading into the Iowa caucuses and the New Hampshire primary. Everyone knows the stories of unjustifiable insulin price increases, and the heartbreaking results when people ration insulin and sometimes suffer terrible complications.

This week – just days before anger about drug pricing and overall healthcare costs was due to be translated into votes — CVS announced it will offer all types of diabetes medications to its members for zero out-of-pocket cost. Industry leaders — long frustrated with being perceived as the bad guy while being helpless to fix this particular dysfunction — applauded. “The co-pays are the most important thing that we’re trying to fix. What patients are concerned about with drug costs is what they will pay at the pharmacy. This is a good start and a step in the right direction,” said Paul Hastings, CEO of Nkarta Therapeutics, in an interview with BioCentury.

Some in industry might, privately, take some satisfaction in seeing other bad actors in U.S. healthcare get their fair share of the blame, their comeuppance for outrageous acts that harm patients and that have mostly gone unpunished.

It would be human nature to feel like spiking the football.   

But it would be petty and foolish to call this a victory, or to spend 10 seconds rolling around in the mud of schadenfreude.

The question is whether the public – left, right and center – has any patience for small bites of the apple to control healthcare costs. Will anyone care if diabetics get a break on co-pays? Is there an appetite for a 10,000-page technocratic solution that seeks to balance competing interests to expand access and control costs in a nuanced way? Or is there still so much dysfunction and gouging and flab in the current “system” that everyone keeps screaming for a blowtorch to burn the house down?

The clamor will continue for something much cheaper, more transparent, and very, very simple to understand.

For some, Medicare-for-All will sound right. For others, it might be something like a Public Option buy-in for Medicare, plus more free-market, consumer-friendly options a la Wal-Mart’s “Supercenter for Health.” (STAT coverage).

We’ll get a decent first read when the Iowa votes are counted.

Financings

Cambridge, Mass. and New York-based Black Diamond Therapeutics raised $201 million in an IPO priced at $19 for about 10.5 million shares. The stock jumped as high as $38 in its first day of trading on the NASDAQ. For background on this Versant Ventures-backed company, founded by OSI veterans looking at allosteric binding sites for rare genetic cancers, read this in-depth TR article from December 2018.

Boston-based PureTech Health sold 2.1 million shares — a portion of its founding stake in Karuna Therapeutics — for $200 million in cash to Goldman Sachs. The neuropsychiatry company, led by Steven Paul, was the best performing IPO of 2019. It sold IPO shares in June at $16. Karuna traded yesterday at around $94.

New Haven, Conn.-based Biohaven Pharmaceuticals, a neurology drug developer, raised $250 million in a stock offering.

South San Francisco-based Denali Therapeutics, the developer of drugs for neurodegenerative diseases, raised $180 million in a stock offering.

Lexington, Mass.-based Concert Pharmaceuticals, the maker of deuterium-based drugs, raised $75 million in a stock offering.

Cambridge, Mass.-based Quench Bio raised $50 million in a Series A financing to advance its programs against severe inflammatory diseases. The company was incubated by Atlas Venture. RA Capital led the Series A round, and was joined by new investor AbbVie Ventures.

Watertown, Mass.-based Lyra Therapeutics raised $30 million in a Series C financing. Perceptive Advisors led the investment in the company, which is working on ear/nose/throat disorders.

Cambridge, Mass.-based Ohana Biosciences, a Flagship Pioneering company focused on sperm biology as a route to treating infertility, came out of stealth mode. The company didn’t disclose how much has been invested.

Waltham, Mass.-based ImmunoGen, the developer of antibody-drug conjugates, raised $104 million in a stock offering.

San Francisco-based Twist Bioscience, the DNA synthesis company, raised $50 million in a stock offering.

Cambridge, Mass.-based Trillium Therapeutics closed a $117 million stock offering. Existing shareholder NEA participated. The company is working to block CD47, the ‘do not eat me’ signal that cancer cells use to escape the immune system.

Microsoft pumped $40 million into the AI for Health Initiative. Sudden Infant Death Syndrome is one area of focus. Grants will be spread among Fred Hutch, Seattle Children’s Research Institute, BRAC, Intelligent Retinal Imaging Systems, Novartis Foundation, and PATH. (Geekwire coverage).

Beyond the Good Ol’ Boy Network

MIT has a lot of entrepreneurial faculty. About 22 percent are women. But when you look at the 250 new companies that venture capital firms backed MIT faculty members to start, fewer than 10 percent were founded by women. If founding rates had matched women’s share of the faculty, we could have expected at least 40-50 more companies to have been founded by female MIT faculty over the past couple decades. But they weren’t. This is according to an analysis done by Sangeeta Bhatia, a biomedical research professor and entrepreneur. Seeing the results, five Boston-area VC firms, including Polaris Partners and F-Prime Capital, pledged “to do all in our power to ensure the boards of directors for companies where we hold positions of power are 25% female by the end of 2022.” Current representation is 14 percent. Better set a calendar reminder for January 2022 to see if this actually comes to fruition, or if these were just words on a page. (STAT coverage by Sharon Begley) and (Washington Post coverage by Carolyn Johnson).
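
As a rough back-of-the-envelope check of that gap, here is a minimal sketch in Python. It uses only the figures cited above; the 250-company base is the one denominator given, and Bhatia’s 40-50 estimate presumably rests on a fuller dataset spanning the full two decades.

```python
# Back-of-the-envelope check of the representation gap, using the numbers
# cited above. The 250-company base is an assumption taken from the text.
companies = 250
faculty_share_women = 0.22       # share of MIT faculty who are women
founder_share_women = 0.10       # upper bound: "fewer than 10 percent"

expected = companies * faculty_share_women        # ~55 companies
actual_ceiling = companies * founder_share_women  # fewer than 25 companies
gap = expected - actual_ceiling                   # 30+ "missing" companies

print(f"Expected: ~{expected:.0f}, actual: <{actual_ceiling:.0f}, gap: >{gap:.0f}")
```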

The Biotechnology Innovation Organization reported on a survey of nearly 100 member companies, conducted May-June 2019, which told us some things we already know. Women and people of color are nowhere near close to equity at the higher levels of power in this industry. The survey, for starters, should see a much higher response rate in an industry that has a lot more than 100 member companies. Still, if BIO persists with some kind of annual survey with consistent methodology, it’s at least a baby step toward measuring progress toward racial and gender balance over time. How long, exactly, do we want to allow this sort of inequity to persist? What are you, and your organization, doing to fix it on a day-in, day-out basis?

Full-Year Financials Are Rolling In

New York-based Pfizer said full-year revenues declined 4 percent, to $51.8 billion. That performance was dragged down by Consumer Health. Full-year profits still surged 46 percent to $16.3 billion. (Bloomberg coverage).

Indianapolis-based Eli Lilly said full-year revenues inched up 4 percent to $22.3 billion. Reported net income was $8.3 billion last year.

San Diego-based Illumina, the market leader in DNA sequencing, reported full-year revenue of $3.5 billion, a 6 percent increase over the prior year. Net income eclipsed $1 billion – a 21 percent boost from the prior year profit. Illumina forecasts even better performance in the year ahead, with revenue gains expected in the 9-11 percent range.

Cambridge, Mass.-based Biogen said revenues increased 7 percent last year to $14.4 billion. Full year profit jumped 33 percent to $5.9 billion. Investors didn’t care about the numbers in this annual report, but rather focused on the anticipation of approval for Biogen’s Alzheimer’s drug aducanumab. An increase of about $400 million to $600 million in sales, general and administrative expenses is baked into the company budget in anticipation of the commercial rollout of this drug, writes Umer Raffat of Evercore ISI. Buckle your seatbelts for controversy if that happens with this drug, which failed at first, and which Biogen is now seeking to resurrect with the FDA based on a new analysis of the clinical data.

Boston-based Vertex Pharmaceuticals saw revenues leap 36.6 percent to $4.16 billion in the past year. Net income for the year came in at $1.18 billion. The company, now marketing the triple-combo Trikafta for the vast majority of cystic fibrosis patients, expects to boost revenues to between $5.1 billion and $5.3 billion in 2020.

Boston-based Alexion Pharmaceuticals said revenues increased 21 percent last year to $4.99 billion. Profits surged to $2.4 billion, up from $77 million the prior year. Alexion’s 2020 financial forecast fell short of Wall Street expectations, “reflecting Ultomiris pricing headwinds,” wrote Michael Ulz of Robert W. Baird in a note to clients.

Regulatory Action

Merck won FDA clearance to market the antibiotic fidaxomicin (Dificid) for the treatment of C. diff diarrhea in children age six months and older.

Legal Corner

Practice Fusion, a San Francisco-based electronic health records vendor, agreed to pay $145 million to resolve ongoing criminal and civil investigations brought by the U.S. Department of Justice. The company admitted to receiving kickback payments from an opioid manufacturer, in exchange for making software modifications to increase prescribing of opioids. Wowza.

Charles Lieber, the chairman of Harvard University’s chemistry department, was arrested and charged with lying to federal investigators about his acceptance of money from the Chinese government. The feds were asking questions stemming from the ongoing investigation into Chinese government theft of US trade secrets. (See the Justice Department affidavit here). For more context on this sad story, see the WSJ.

Data That Mattered

Wilmington, Delaware-based Incyte said ruxolitinib topical cream passed a Phase III clinical trial in patients with atopic dermatitis. The drug is a JAK inhibitor reformulated for this skin condition. Data will be presented at a medical meeting.

Cambridge, Mass.-based Acceleron Pharma said that sotatercept, a drug candidate for pulmonary arterial hypertension, passed a Phase II study. The drug is a selective ligand trap for members of the TGF-beta superfamily, designed to rebalance BMPR-II signaling. Acceleron plans to release data at a meeting, but the company said the drug succeeded on the primary endpoint (pulmonary vascular resistance) and the key secondary endpoint, a 6-minute walk test.

Personnel File

Philadelphia-based Century Therapeutics named Joseph Jimenez, the former CEO of Novartis, to its board of directors. Greg Russotti, formerly vice president of cell therapy technical development at Celgene, was named chief technology officer. For more on Century Therapeutics and its induced pluripotent stem cell-based platform for cell therapy, listen to chief strategy officer Janelle Anderson on The Long Run podcast (Aug. 2019).

Eli Lilly hired Anat Hakim as general counsel. She comes from WellCare Health Plans.

Boston-based Decibel Therapeutics said Laurence Reid, formerly CEO of Warp Drive Bio, has been hired to replace Steve Holtzman as CEO. Holtzman, a veteran of Biogen, Infinity Pharmaceuticals, Millennium Pharmaceuticals and more over a long career, is retiring. Decibel also announced some reshuffling of its R&D priorities, to put emphasis on regenerative medicines for the inner ear.

Synlogic added Michael Burgess to its board. He’s the president of R&D at Turnstone Biologics.

San Francisco-based Syapse cut 10 percent of its workforce after Roche ended a precision oncology collaboration. (STAT coverage).

Foster City, Calif.-based SutroVax named Jim Wassil as chief operating officer.

Deals

San Diego-based Conatus Pharmaceuticals said it’s folding itself into Histogen, in a deal that values Conatus at $35 million. The new Histogen will focus on regenerative medicines for dermatology and orthopedic indications.

Worth a Read

26 Jan 2020

Challenging Core Assumptions, Tech Backlash Paves The Way for More Thoughtful HealthTech

David Shaywitz

Digital transformation (as I recently discussed), and the implementation of emerging technologies more generally, is routinely pitched by enthusiasts like Tom Siebel as both urgent and inevitable, something organizations need to embrace or risk irrelevance, if not extinction. 

Yet the “embrace or die” assertion is under increasing, and healthy, scrutiny, as the “techlash” (technology backlash) gains steam. 

“Surveillance Capitalism”: Tech As Force For Harm

Voices of concern have started to coalesce under the banner of what Harvard Business School professor emerita Shoshana Zuboff has termed “surveillance capitalism.” She synthesized and amplified this growing concern in her 700+ page 2019 book The Age of Surveillance Capitalism. For a shorter summary, I recommend reading this recent New York Times essay by Zuboff, and listening to this especially informative interview with her conducted by distinguished technology journalist Kara Swisher (of Recode and the Times).   

The core of Zuboff’s critique can be found in the story of Google itself, a company that (as described in the Recode podcast) initially came to prominence by building a phenomenally effective search engine that users appreciated. But the company struggled to make money in the early days, and “very swanky venture capitalists were threatening to withdraw support,” according to Zuboff. In an existential panic, Google apparently realized that it was sitting on a huge amount of interesting data, far more than was needed to improve the search algorithm. 

At its inception, reports Zuboff, Google had rejected online advertising as a “disfiguring force both in general on the internet and specifically for their search engine.” 

But spurred by the threat of extinction, Zuboff explains, Google declared a “State of Exception,” akin to a state of emergency, that “suspended principles” and permitted the company to contemplate previously shunned approaches. They recognized they had accumulated “collateral behavioral data that was left over from people’s searching and browsing behavior,” data that had been set aside, and considered waste. But upon further review, says Zuboff, Google engineers realized there was great predictive power in the combination of this data exhaust plus computation: the ability to predict a piece of future behavior — in this case, where someone is likely to click — and sell this information to advertisers. 

The result, according to Zuboff, was a radical transformation of online advertising, turning it into a market “trading in behavioral futures,” while claiming “private human experience” in the process.  “We thought that we search Google,” writes Zuboff, “but now we understand that Google searches us.”

As this model caught on, Zuboff explains, tech companies accrued exceptional influence, due to “extreme asymmetries of knowledge and power.” Over time, these companies began to “seize control of information and learning itself.”

These technology companies, asserts Zuboff, “rely on psychic numbing and messages of inevitability to conjure the helplessness, resignation, and confusion that paralyze their prey.” She argues “the most treacherous hallucination of them all” is “the belief that privacy is private.” It’s not, she argues, because “the effectiveness of … private or public surveillance and control systems depends upon the pieces of ourselves that we give up – or that are secretly stolen from us.”

Notably, Swisher strongly shares these privacy concerns, even writing a year-end commentary in the Times last December entitled “Be Paranoid About Privacy,” urging us to “take back our privacy from tech companies – even if that means sacrificing convenience.” She writes, “We trade the lucrative digital essence of ourselves for much less in the form of free maps or nifty games or compelling communications apps.” Adds Swisher, “It’s up to us to protect ourselves.”  

(In contrast to some health tech execs I know, Swisher views Europe’s General Data Protection Regulation [GDPR] and California’s recently-enacted Consumer Privacy Act as positive developments.)

Both Siebel and Zuboff seem to agree on the power of the emerging technology. They vehemently disagree about whether it’s a force for good or ill. 

The Pinker Perspective: Cautious Optimism

But another perspective is that both Siebel and Zuboff overstate at least the near-term power and utility of technology by accepting as a given that the impetus to collect every possible piece of data about every possible thing will soon result in remarkably precise predictions.

This is what Siebel promises, and Zuboff fears.

In contrast, I found myself agreeing with the more grounded viewpoint Harvard psychologist Steven Pinker offered in a 2019 discussion with Sapiens author Yuval Noah Harari (who was making the case that surveillance capitalism poses a profound threat).

In recent years, Pinker has attracted controversy by arguing (in his 2018 book Enlightenment Now, and elsewhere) that despite endless lamentations and prophecies of doom, life is actually getting better, and is on a trajectory to improve still more. 

Besides Pinker, this encouraging perspective has been recently discussed by a number of authors including Hans Rosling (Factfulness), Andrew McAfee (The Second Machine Age, More From Less – my Wall Street Journal review here), and John Tierney and Roy Baumeister (The Power of Bad – my Wall Street Journal review here).

Pinker says he’s not losing sleep about emerging technologies, in large part because he suspects the rate and extent of technological progress has been significantly overstated. Consider human genetic engineering, he says, where frightening concerns had been raised about engineering people with a gene that made them smarter or better athletes. That turned out to be a wild oversimplification, he argues – many genes impact most traits, and since genes tend to be involved in many functions, there’s a good chance any intervention would do at least as much harm as good. The limitations of genetic data are also something Denny Ausiello and I anticipated in this 2000 New York Times “Week in Review” commentary, and something Andreessen-Horowitz partner Jorge Conde thoughtfully reflects on in this recent a16z podcast.

Returning to AI, Pinker notes that “predicting human behavior based on algorithms” is “not a new idea,” nor one likely to immediately destroy the planet. “I suspect,” Pinker says, “we’ll have more time than we think simply because even if the human brain is a physical system, which I believe it is, it’s extraordinarily complex, and we’re nowhere close to being able to micromanage it even with artificial intelligence algorithms. The AI algorithms are very good at playing video games and captioning pictures, but they are often quite stupid when it comes to low probability combinations of events that they haven’t been trained on… even the simple problems turn out to be harder than we think.”

He adds, “When it comes to hacking human behavior – it’s all the more complex. Not because there’s anything mystical or magic about the human brain – it’s an organ – but an organ that’s subject to fantastic non-linearities and chaos and unpredictability, and the algorithm that will control our behavior isn’t going to be arriving any time soon.”

In a 2018 op-ed, Pinker notes the “vast incremental progress the world has enjoyed in longevity, health, wealth, and education,” and adds that technology “is not the reason that our species must some day face the Grim Reaper. Indeed, technology is our best hope for cheating death, at least for a while.”

He describes threats such as “the possibility that we will be annihilated by artificial intelligence” as “the 21st century version of the Y2K bug,” which was associated with apocalyptic prophesies, yet ultimately had negligible impact.

In a particularly interesting exchange between Harari and Pinker, Harari expressed concern that the surveillance state was turning our lives into a continuous, extremely stressful job interview, suggesting we’re heading to the point where everything we do every moment of our lives could be surveilled, recorded, and analyzed in a way that could impact future employment.

Pinker, in response, noted that “One of the most robust findings in psychology is that actuarial decision making – statistical decision making — is more reliable than human intuition, clinical decision making.  We’ve known this for 70 years but we typically don’t do what would be more rational.” In this example, it would be rational to scrap job interviews, and use statistically-informed predictors instead.  Even though we know job interviews are subject to bias and error, Pinker points out, we still use them, and don’t “hand it over to algorithms.” 

Of course, many technophiles – and technophobes — would say this is exactly what’s already occurring.

The Taleb Quadrant

There’s actually a fourth quadrant to consider – which I think of as represented by Nassim Taleb, who is critical (as he articulates with particular clarity in Antifragile) of what he sees as our worship of new technology, not because he fears it’s about to immediately lead to the end of life as we know it, but rather because he thinks our increased interconnectivity places us at greater risk of a catastrophic failure – i.e., makes us far more fragile. He trusts approaches that have stood the test of time, “things that have been around, things that have survived,” and worries about our “neomania – the love of the modern for its own sake.”

Implications for Health Tech

While perhaps inconvenient for some health tech entrepreneurs in the short term, the increasingly robust discussion about the impact of technology represents a positive development for the field.

Why positive? Because it creates the intellectual space needed to challenge tech assertions and assumptions, while demanding rigorous proofs of value. 

I incline towards Pinker’s perspective. Technology, in my view, offers us real hope in our efforts to maintain health and forestall and combat illness. Figuring out how to derive meaningful benefit from the technology will not be nearly as easy or as rapid as consultants promise. As we work through these challenges, we need to be thoughtful and deliberate, and consider the right kind of guardrails we want to put in place as we bring ever-more powerful technologies to bear in our healthcare system. The hurdles we must clear – technological, social and political in nature – as we create systems that can meaningfully intervene and improve upon what we have in healthcare are enormous. We would be foolish to underestimate the work ahead – and even more foolish not to embrace the challenges and get going.

23 Jan 2020

Incrementalism is the new Disruption, Trust is the New Black, and Positive Change (for now) at FDA: Takeaways from the 2020 Precision Medicine World Conference

David Shaywitz

I had the privilege of serving as emcee for the “Data Science and AI” track on the first day of this week’s Precision Medicine World Conference (PMWC) in Santa Clara, CA, as well as chairing a panel discussion on data mining and visualization. 

I came away with a sense of optimism, and of the needs that remain, organized around several key themes.

In Praise Of Incrementalism

On a day focused on technology, and featuring a number of startups, you might have expected to hear a lot about “disruption” and “disruptive innovation” – but I didn’t. Instead, the watchword of the moment seems to be “incrementalism” – not in the dispirited sense of having minimal aspirations, but rather in the grounded (versus grandiose) sense of seeking to motivate buy-in from existing healthcare stakeholders by demonstrating a discrete and useful (if not super-sexy) benefit.

Kaisa Helminen, the CEO of digital pathology company Aiforia Technologies (which I’ve written about here), emphasized the importance of first taking small steps, before attempting to make larger strides.  She amplified this point in a follow-up email:

“Labs should start with incremental steps in utilizing AI in digital pathology, e.g. starting with quality control (QC), workflow optimization or with a few applications that are painful for pathologists to count (e.g. counting mitosis) to get them used to the tech and to facilitate adoption.”

Similarly, Vineeta Agarwala, an impressive physician-scientist who recently joined Andreessen-Horowitz from GV, and who was previously a product manager at Flatiron, emphatically and repeatedly stressed the importance of incrementalism, even in the context of AI. For example, she noted that at Flatiron, which focused on deriving clinical trial-like data from EHR data (see here), a key use of AI at this tech-driven company was…to determine which patient charts to spend time manually extracting the data from! It seems unsexy, but apparently it delivered immediate benefits in operational efficiency.

Grounded Health Tech Investors

A pleasant surprise at this conference was the number of VCs represented who both seemed interested in the nexus of tech and health and appeared to be approaching it in a grounded fashion, led by investors who have relevant domain experience. Greg Yap from Menlo Ventures, and Vijay Pande and Agarwala from Andreessen-Horowitz, particularly stood out. 

Pande emphasized there’s “nothing magical about AI,” and acknowledged that developing new drugs is not a fast process, as even compounds designed with the help of AI require, in his words, “the usual stuff” such as a battery of preclinical assays and extensive clinical trials.

Similarly, Agarwala described AI as simply “technologies to better learn from data,” and emphasized that “progress is going to be incremental.” Yap was perhaps even more cautious about AI, worried that we seem to be “at the peak of the AI hype cycle.”

Many (but not all) of the VC firms gravitating towards the “AI and data science” opportunity in healthcare and biopharma seem to be tech firms (Menlo Ventures, Andreessen-Horowitz, and DCVC stand out) that have added domain expertise on the healthcare side, rather than healthcare VCs that have added domain expertise on the tech side; one conspicuous exception, perhaps, is Jim Tananbaum’s Foresite Capital, a firm with deep healthcare roots that’s deliberately pursuing a technology dimension.

The Calcified Hairball Problem

The most dispiriting panel of the day, by far, was a discussion of interoperability led by Stan Huff of Intermountain, and featuring Michael Waters of the FDA and James Tcheng of Duke, describing (among other challenges) the excruciating, ongoing effort required by the FDA’s SHIELD initiative to create a unifying schema for the representation of laboratory data.

Hurdles seemed to be everywhere, and the realized rewards appeared uncertain at best.  The problem seemed to me to reflect the “calcified hairball system of care” to which VC Esther Dyson has famously referred. Listening to the panel describe the extensive painful effort involved in even the most basic efforts to extract meaningful information reinforced the sense that the existing system may be a virtually intractable mess; engaging with it seemed likely to result in a huge suck of time and money, with brutal political fights at every turn, and perhaps with little ultimately to show for the effort – the little juice you extract may prove not to be worth the squeeze.

Who could blame investors like Pande, then, for emphasizing the value of startups that think from the outset about how to collect data that (in contrast) works well with AI, and that is designed from the ground up with that application in mind? This seems to be the approach that prominent drug discovery startups like insitro (Andreessen-Horowitz-backed) and Recursion are taking, for example.

While this doesn’t solve the problem of what to do about all the legacy data stuck in existing systems – which Tom Siebel, recall, describes as a (the?) competitive advantage of incumbent companies in an increasingly digital world — it feels like a contemporary example of what happened to factories after the arrival of electricity, as I described in this column last year. While most factories rapidly converted to electricity, established industries (due to sunk costs) were reluctant to extensively rework or reimagine their factories – they kept the design the same, and just substituted electricity for steam power. The real beneficiaries were the emerging new industries, which had both the need and the opportunity to design workflows from the ground up, unencumbered by existing approaches. This led to the design of the modern factory.

Similar new opportunities – where entrepreneurs can freshly leverage the power of new technology while minimizing dependency on the limitations of legacy technology – seem to represent the kind of investments that VCs like Pande are seeking out today.

Transparency and Trust

A thoughtful conversation between Atul Butte, a physician-scientist who oversees health data science for the entire University of California (UC) system (you can hear his Tech Tonics episode here), and Cora Han, UC Health’s newly-minted Chief Health Data Officer, explored why interactions between health systems and tech companies are now appearing so regularly in the news (see this WSJ, this WSJ, this WSJ, this FT, this JAMA commentary, and this JAMA commentary).

Health systems contracting with technology companies is hardly new or unusual, Butte noted, wryly adding that it seems like only when specific names are attached to the two (such as “Ascension and Google”) that this common type of relationship is suddenly portrayed as “sinister.” Han suggested that factors contributing to the apparently escalating concern include (a) the potential for staggering scale, and (b) the theoretical intersection of medical and consumer data, which “seems scary.” She emphasized the foundational importance of “trusting the entities with whom you interact.”

This connects with a related discussion of the role of transparency in increasing trust, a point several speakers emphasized. For example, Butte noted that if a company in stealth mode (meaning no information about it is publicly available) comes to him and asks to explore access to UC information, Butte tells them not to bother; if the company doesn’t even have a website and other basic information easily accessible, he’s not going to refer them to anyone in his organization.

Interestingly, two speakers on my panel – Helminen and Martin Stumpe (now SVP for data science at Tempus, and previously the founder and head of the Cancer Pathology initiative at Google) – emphasized the role data visualization can play in fostering trust in technologies, especially AI, that can often seem inscrutable.

At the same time, as Butte astutely suggested, there may be a bit of a double standard here in demanding this of technology since “physicians are also black box,” and can arrive at decisions of dubious quality via an uncertain and impenetrable process, as Atul Gawande and others have eloquently documented.

Regulation and Outlook

Michael Pellini, a VC at Section 32 (and former CEO of Foundation Medicine), expressed a strong sense of optimism regarding the near-term outlook for both technology itself and the approach to it he’s seen from regulators (more on this below). From a reimbursement perspective, he anticipated the outlook for therapeutics is likely going to get much worse (presumably a comment on the rising concerns around drug pricing), while diagnostics – where entrepreneurs have struggled for reimbursement for a long time, as Pellini presumably knows all too well — may see marked improvement in their future (presumably a comment on their increased ability to guide patients towards demonstrably better outcomes).

Similarly, life science VC (arguably the dean of life science VCs) Brook Byers effusively praised the commitment of the FDA to seek out improved technologies, citing two “heroes” – FDA Deputy Commissioner Amy Abernethy (see here, listen here for her Tech Tonics interview, and here on The Long Run) and FDA ophthalmology expert Malvina Eydelman.

His biggest worry, he said (a concern I share), is the sort of sentiment voiced in a recent NYT masthead editorial, urging the FDA to “Slow down on drug and device approvals.” The Times argued:

“The F.D.A. has made several compromises in recent years — such as accepting ‘real world’ or ‘surrogate’ evidence in lieu of traditional clinical trial data — that have enabled increasingly dubious medical products to seep into the marketplace. [New FDA Commissioner] Dr. Hahn ought to take a fresh look at some of these shifting standards and commit to abandoning the ones that don’t work. That will almost certainly mean that the approval process slows down — and that’s O.K.”

To be sure, regulators have an intrinsically difficult task – if they’re too strict, promising drugs take longer to reach patients (if the medicines reach patients, or are even developed, at all); if regulators are too permissive, then patients can be exposed to harmful products before the danger is recognized. However, as appealing as it may be to lean on the adage “first do no harm” as a justification for moving slowly – a perversion of the precautionary principle that critics such as the NYT are wont to invoke – it’s critical to recognize the extensive harm that inaction can cause as well, as I’ve written here and elsewhere. Regulators need to balance the totality of risk (including the harms of staunching innovation) and benefit; it’s an inherently hard job given the inevitable uncertainty, and requires nuance and customization — “precision regulation,” I’ve called it.

What should be avoided, as Tierney and Baumeister argue in The Power of Bad (my WSJ review here), is encouraging regulators to stomp on the brakes reflexively, driven by an outsized fear of risk, as if informed by the credo, “never do anything for the first time.”

Ultimately, what matters most (as I’ve argued) is real-world performance; a randomized clinical trial, where feasible and ethical, is the ideal approach to demonstrate the potential benefit of an intervention. But the most important parameter is what happens to actual patients taking medicines after approval. Much of the anxiety experienced by regulators reflects the challenges of gathering such data – once a medicine is released into the wild (even provisionally), it can be difficult to figure out if it is working out as anticipated.

Here is an opportunity. Improved ability to comprehensively gather and continuously evaluate such data as part of routine care would not only improve patient care, but could also make regulatory approvals less fraught. Clearly, we are a long way from this, yet it’s where we ought to be headed, and the direction, I’m increasingly convinced, healthcare is (slowly) starting to go.

23 Jan 2020

Much Ado About Coronavirus and 23andMe Hits the Wall

Luke Timmerman, founder & editor, Timmerman Report

This was a quiet week in biopharma. Review it in your weekly Frontpoints.

This Week in Alarmism

Every year, we hear about an infectious disease scare that generates massive coverage, way out of proportion with the actual threat posed. Remember Mad Cow disease, SARS, bird flu, Ebola, etc., etc.? I’ve been in newsrooms providing saturation coverage of such things, and seen befuddled looks from colleagues when I’ve spoken up in meetings to say it’s mostly BS alarmism we should ignore.

This week, we had a coronavirus scare out of China. First, we were scared by the possibility it could be spread via human-to-human contact. Then we heard of an infected individual arriving in the United States (not far from me, in Everett, Wash.). We’ve heard that a single carrier infected 14 healthcare workers, making this pneumonia-like bug rather contagious. As of Thursday, a reported 25 people have died from the infection, making this non-trivial. But then the World Health Organization decided not to declare it a global health emergency. Somewhere in the middle there, we saw some diligent researchers get to work on potentially quick vaccination strategies that make logical sense (mRNA from Moderna), and others that are almost surely just shameless headline-chasers with vaporware, seeking a quick hit for their stock price.

The thing to remember here is that what’s scariest about any new infectious disease threat is the fear of the unknown. We are learning about this one at a rapid clip. I’ll stay calm for now, and let the facts come in before reaching any sweeping conclusion.

Financings

  • Cambridge, Mass.-based Blueprint Medicines, the developer of drugs for rare cancers, raised $325 million in a stock offering.
  • Vancouver, BC-based Zymeworks, an antibody engineering company, raised $279 million in a stock offering.
  • Seattle-based Adaptive Biotechnologies, the immune profiling company, raised $212 million in a stock offering.
  • Philadelphia-based Adaptimmune, the cancer immunotherapy company, raised $84 million in a stock offering.
  • San Diego-based Cidara Therapeutics, the anti-infectives company, raised $30 million.
  • London-based Autolus, the T-cell therapy company, raised $80 million.
  • Chi-Med raised $110 million.
  • Immuneering raised $20 million.
  • Xenon Pharmaceuticals raised $60 million.

Annals of Manufacturing

Deerfield and The Discovery Labs outlined a rather large investment in a cell and gene therapy contract manufacturing facility in greater Philadelphia. The lack of capacity to meet anticipated future demand is the play here.

Personnel File

23andMe, citing slumping demand for personal genetic tests, laid off 100 workers (14 percent of its workforce).

Emeryville, Calif.-based Zymergen, a materials science company, named Jay Flatley and Sandra Peterson to its board. Flatley is the former CEO of Illumina, and Peterson is former group worldwide chair of Johnson & Johnson.

Krystal Biotech, a Pittsburgh-based gene therapy company, named Jennifer Chien as chief commercial officer. She was VP and head of genetic diseases at Sanofi Genzyme.

San Diego-based Kura Oncology said chief medical officer Antonio Gualberto is leaving the company, and being replaced by Bridget Martell on an acting basis.

Cambridge, Mass.-based Acceleron Pharma said John Quisel, its EVP and chief business officer, is leaving to become a startup CEO.

San Diego-based Turning Point Therapeutics named Garry Nicholson to its board. He’s a former president of Pfizer Oncology.

Worth a Read

  • Bangladesh’s Dynamic Duo Battle Global Health Inequity. Gates Notes. (Bill Gates)
  • Q&A With Gates Foundation CEO Sue Desmond-Hellmann. STAT staff.
  • The FDA Continues to Struggle with the Implications of Approving Sarepta’s Drugs. STAT. (Matthew Herper)
  • Epic’s CEO is Urging Hospitals to Oppose Rules that Would Make it Easier to Share Medical Data. CNBC. (Chrissy Farr)
  • Hospitals Give Tech Giants Access to Detailed Medical Records. WSJ. (Melanie Evans)
  • Cheap Drug May Alleviate Treatment-Resistance in Leukemia. Science Daily. (Karolinska Institutet)
  • The Revolution Comes to Davos. NYT. (Tim Wu)

Regulatory Action

Cambridge, Mass.-based Epizyme won the green light from FDA to start selling tazemetostat (Tazverik) for patients with epithelioid sarcoma. The drug is the first EZH2 inhibitor. The drug generated an Overall Response Rate of 15 percent in the 62 patients tested. Epizyme chose not to disclose the price in its approval press release.

Horizon Therapeutics won FDA clearance to sell teprotumumab-trbw (Tepezza) for thyroid eye disease. Approval came ahead of the agency’s Mar. 8 review deadline.

Medtronic won FDA approval for the world’s smallest pacemaker.

Tweetworthy

22 Jan 2020

False Heroes: Pharmacy Benefit Managers and the Patients They Prey On

Peter Kolchinsky

[Editor’s Note: this is an excerpt from “The Great American Drug Deal.” The book is now available on Amazon.]

It’s hard to know when actual prices for a particular drug really do go up, because there is so little transparency in pricing. A lot of the public discourse on pricing is based on “list prices,” which no one – neither patients nor payers – actually pays.

As is the case with cars and anything on Amazon, everything is always on some kind of sale or subject to discounts of one type or another.

In the world of pharmaceuticals, these discounts are called “rebates” and often take the form of payments from the drug company back to the insurer. The particulars of a rebate that a drug company offers to an insurer – its magnitude and how it varies according to market share – are kept confidential, essentially based on the age-old sales tactic of “Because you’re special, I’ll give you a special price, but don’t tell the other guy.”

Pharmacy Benefit Managers, or PBMs, are the companies that negotiate with drug companies on behalf of payers (and some PBMs are actually owned by insurance companies, so one can think of them as just agents of payers), and – importantly – retain a portion of the rebates that pass through them. In effect, PBMs profit from the very high list prices they purport to heroically negotiate down. A biopharmaceutical company offering a lower list price without a rebate would threaten the PBM business model, so PBMs discourage the tactic by not rewarding it. Instead they encourage drug companies to keep publicly known list prices high and give an ever bigger confidential rebate to the PBM, from which the PBMs siphon off their own rent before passing on the lower net price to the payer while boasting, “behold what I have negotiated for you!”

Let’s take a closer look at the numbers to see how all this works (or doesn’t).

In 2018, although list prices for branded drugs increased by 5.5 percent, net prices (what drug companies actually get after discounts and rebates) were essentially flat compared to the year before, having come in nominally 0.3 percent higher, though really lower when adjusted for inflation. So increased prices of some drugs were more than offset by the savings from other drugs going generic. Indeed, total spending (what the US is paying, in total for drugs) is increasing, by about 4.4 percent in 2018 from the prior year, but it’s because more patients are being treated. That should be good news. That’s what progress looks like!

Of course, none of that matters if you are a patient who can’t afford what your physician prescribes—and there are all too many people out there who can identify with this. A major part of the solution requires lowering or eliminating out-of-pocket costs, as discussed in Chapter 4, but it’s worth exploring just how much waste there is in the middle zone between drug companies and patients due to payers’ and PBMs’ tactics.

In 2018, US drug spending based on list prices was $479 billion, yet spending based on net prices was $344 billion, approximately 28 percent lower. That means that, even if we stuck to “cost sharing” but simply linked what patients pay to the net prices that PBMs negotiate instead of list prices, patient costs would be reduced by 28 percent, saving around $17 billion of the $61 billion in out-of-pocket costs Americans paid in 2018.* Insurance companies and Medicare count on that $17 billion extra from patients to pad their own budgets, allowing them to charge slightly lower premiums/taxes, a perverse kind of insurance policy since it means that the sick subsidize the healthy.
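
For readers who want the arithmetic spelled out, here is a minimal sketch using the 2018 figures cited above:

```python
# The arithmetic behind the paragraph above (2018, in billions of dollars).
list_spend = 479   # US drug spending at list prices
net_spend = 344    # spending at net prices, after rebates and discounts
oop_spend = 61     # out-of-pocket costs Americans paid, pegged to list prices

discount = 1 - net_spend / list_spend   # ~0.28, i.e., net runs ~28% below list
oop_savings = oop_spend * discount      # ~$17B if patient costs tracked net prices

print(f"Net prices run ~{discount:.0%} below list prices.")
print(f"Pegging out-of-pocket costs to net prices saves ~${oop_savings:.0f}B.")
```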

Realistically, being able to negotiate secret rebates is a useful tactic for playing drug companies off one another, as PBMs have done with Gilead, AbbVie and Merck to drive down the cost of hepatitis C cures in recent years. However, right now, some patients are increasingly bearing an unfair burden, and most Americans are being misled about the true costs of important medicines.

To understand why and how, let’s begin with a quick rebate primer.

Rebates and How they Impact Patients

Imagine if an agent offered to help you buy a car and promised that you would only need to pay her 20 percent of whatever she saved you. You buy a car that is listed at $40,000 by the dealership, but you only end up having to pay $30,000 after your agent negotiates on your behalf. Your agent has saved you $10,000 and retains $2,000 as her fee, so really the car cost you $32,000, and you saved $8,000. That’s still good.

Now, imagine that a car dealership decides to cut out the middleman and list those same cars at $30,000, the same amount the dealership would have received after giving discounts to agents. That would be cheaper for you than going through an agent, since you don’t have to pay the $2,000 fee. But the agent won’t direct buyers to that dealership, because its prices leave no room for the dealership to offer any discounts, which means the agent won’t earn her commission. If anything, agents will encourage dealerships to raise their list prices, either directly or tacitly. If the agent can pressure the dealership to raise the list price of that car to $50,000, the agent will be able to negotiate it down by 40 percent to $30,000, earn a $4,000 commission, and come out looking like a hero to the buyer, though the car would now functionally cost $34,000!
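
The incentive is easier to see when the example runs as code. Here is a minimal sketch using the chapter’s own numbers; the `deal` function is purely illustrative.

```python
# Sketch of the agent incentive described above, using the chapter's numbers.
def deal(list_price, negotiated_price, fee_rate=0.20):
    """The agent keeps fee_rate of the savings off the list price;
    return (agent's fee, what the buyer really pays)."""
    fee = fee_rate * (list_price - negotiated_price)
    return fee, negotiated_price + fee

# Same car, same $30,000 negotiated price -- only the list price changes.
for list_price in (40_000, 50_000):
    fee, true_cost = deal(list_price, 30_000)
    print(f"List ${list_price:,}: agent earns ${fee:,.0f}, "
          f"buyer functionally pays ${true_cost:,.0f}")

# List $40,000: agent earns $2,000, buyer functionally pays $32,000
# List $50,000: agent earns $4,000, buyer functionally pays $34,000
# Inflating the list price raises both the agent's cut and the buyer's true
# cost -- the same incentive the chapter attributes to PBMs and rebates.
```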

This is what’s going on in the drug industry, and it is a big reason why list prices are increasing. The question, of course, is why don’t biopharmaceutical companies bypass the PBMs and sell their products directly to insurance companies? Yes, any company that did so would be ostracized by the agent community, but why should that matter?

The unfortunate truth is that as PBMs have grown, they have amassed wide influence. They have entrenched themselves as middlemen with massive bargaining power, which stems from how concentrated the PBM market has become. The top three PBMs, Express Scripts, CVS/Caremark, and United’s OptumRx, represent 80 percent of the PBM market and serve insurance plans covering half of the US population.

So, what’s the big deal? PBMs keep a piece of the rebate, but at the end of the day, they are saving patients money, and that’s what matters… right? And that’s the problem: saving patients money matters, but this system doesn’t actually do that. Though rebates save money for society as a whole, rebates as currently structured actually increase the true share of costs patients shoulder.

 

*Consider that saving patients 28 percent by lowering drug prices by 28 percent would render the entire biopharmaceutical industry a non-profit and shutter innovation. So pegging patients’ out-of-pocket expenses to net prices instead of list prices is a much more surgical solution, which payers would compensate for with a tiny increase in premiums, less than 1 percent, though they could also absorb it by slashing their own bureaucracy.

16 Jan 2020

Understanding The Ideology Of Digital Transformation

David Shaywitz

The phrase resounding in corporations these days is “digital transformation.”

What does that really mean?

According to proponents, digital transformation reflects the assertion that in order to remain competitive in the modern era, organizations need to radically rethink their approach to how they collect, manage, and analyze information. 

Change is clearly afoot, but the ideology informing it hasn’t been entirely clear, beyond the vague sense that it seems to be driven by an energized alliance of technology and management consultants.

Recently, on the recommendation of a former colleague (DNAnexus CEO Dick Daly), I finally got my hands on what feels like the sourcebook for digital transformation, or at least a clear, contemporary expression of what digital transformation is and why consultants are pushing it.

The 2019 book – appropriately entitled Digital Transformation – is written by Tom Siebel. He’s a billionaire tech entrepreneur who has spent his career developing enterprise technology, and is currently the CEO of c3.ai, a firm that (besides sponsoring NPR) provides enterprise AI software. That puts him in position to both support and benefit from companies undergoing digital transformation.

So of course it’s easy to dismiss Siebel’s book for being exactly what it is – an elaborate white paper that seeks to create a burning platform, motivating executives to urgently adopt the sort of changes that would clearly benefit Siebel’s business. (Proceeds from the book itself apparently go to charity, according to the jacket cover.)

However, it would be a mistake to reflexively dismiss the book as a self-serving exercise. Much of Digital Transformation rings true, and resonates with so much I’ve seen and heard in multiple organizations. It feels like an extremely relevant and timely read, written by someone who understands both business and technology, and speaks to issues that every organization I know is trying to manage. 

Having said that, there’s very little in the book specifically about biopharma and healthcare, and much of what’s there seems unlikely to resonate with many domain experts. I suspect this disconnect reflects the lack of progress to date in these industries, combined with Siebel’s limited first-hand experience here.

The Burning Platform

First, the burning platform. According to Siebel, the intersection of four significant “technology vectors” – cloud computing, big data, artificial intelligence (AI), and the internet of things (IoT) – is driving such profound change in the environment in which organizations live that businesses face a “mass extinction event.” Companies are fading from relevance at unprecedented rates, CEO tenures are growing ever shorter, and private equity firms are piling up increasing amounts of dry powder, ready to pounce on corporations perceived as laggards. Companies, argues Siebel, “are facing a life-or-death situation.”

In case this is still too subtle, Siebel writes, in a chapter on AI in the defense industry:

“AI will fundamentally determine the fate of the planet. This is a category of technology unlike any that preceded it, uniquely able to harness vast amounts of data unfathomable to the human mind to drive precise, real-time decision-making for virtually any task.” He adds that as the US and China engage “in a war for AI leadership,” the “fate of the world hangs in the balance.”

Of course, motivating change requires not just a reason to change (unambiguously provided here), but also a direction forward – in this case drawing inspiration from a transformational event in the earth’s history:

“Recall how the Great Oxidation Event’s cyanobacteria and oxygen resulted in new processes of oxygenic respiration. Today, cloud computing, big data, IoT, and AI are coming together to form new processes, too.  Every mass extinction is a new beginning. Changing a core competency means removing and revolutionizing key corporate body parts. That’s what digital transformation demands.”

Siebel reviews a distinction drawn by organization theorist (and Crossing the Chasm author) Geoffrey Moore, between a company’s core – what creates differentiation, e.g. Tiger Woods’s golf skill – and its context – everything else, such as marketing. Thus, Woods may make a lot of money from marketing, but his core, his competitive advantage, is how he plays golf. At a level of simplification, says Siebel, core is often viewed as intellectual property, while context is often outsourced. Siebel argues that many companies have digitized their context competencies, but not their core – and that, he argues, is exactly what’s required.

Such change constitutes a difficult process that often requires a strenuous re-thinking of the underlying business, creating “something faster, stronger, and more efficient that can do the same job in a totally different way – or do entirely new things.”

The key opportunity, Siebel argues, is for companies to “use data to reinvent their business models.”  The change required is profound – and, argues Siebel, it must be driven by the CEO, rather than by the chief information officer or anyone else.

According to Siebel, “implementing a digital transformation agenda means your organization will build, deploy, and operate dozens, perhaps hundreds or even thousands, of AI and IoT applications across all aspects of your organization, from human resources and customer relationships to financial processes, product design, maintenance, and supply chain operations. No operation will be untouched.”

The Four Technology Vectors

The four technologies shaping our future, according to Siebel, are cloud computing, big data, AI, and IoT. In a nutshell:

  • Cloud computing provides convenient access for all businesses to essentially unlimited compute and storage, with major providers (Amazon Web Services [AWS], Microsoft’s Azure, Google Cloud) routinely providing robust security and continuously improving resources, characterized by the “rapid innovation of microservices” such as Google’s TensorFlow designed to “accelerate machine learning.” Adds Siebel, “not a week goes by without another announcement of yet another useful microservice” from a leading cloud vendor.
  • Big data refers not so much to the raw quantity of data collected, managed, and analyzed, but really to the mindset towards data – the idea of collecting everything, versus just a sample; in other words, “complete data.” As Siebel nicely puts it, the “significance of the big data phenomenon is less about the size of the data set we are addressing than the completeness of the data set and the absence of sampling error.” (Whether this is achievable, or impossibly hindered by either technical or social/political barriers, is a topic we’ll return to shortly.)
  • AI involves computers tackling problems that normally require human intelligence. Machine learning (ML) is a subset of AI that involves teaching computers to learn from experience, rather than pre-defined rules. ML might be used to train an algorithm to assess whether an image has a cat or not; this process tends to require a lot of “feature engineering,” where data scientists and domain experts determine what are the key parameters to feed into the algorithm to help it become more accurate (see the sketch after this list). Deep learning is a subset of ML where “the important features are not predefined by data scientists but instead learned by the algorithm.”
  • IoT is the idea of connecting “any device equipped with adequate processing and communication capability to the internet, so it can send and receive data” – essentially, the “convergence and control of physical infrastructure by computers.”
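
To make the feature-engineering distinction concrete, here is a minimal, purely illustrative sketch in Python (scikit-learn), with synthetic 8x8 “images” standing in for the cat photos; the hidden rule, the engineered features, and the models are all invented for this example.

```python
# Toy contrast: classic ML with hand-engineered features vs. a network
# that learns features from raw pixels. Entirely synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                       # 1,000 flattened 8x8 "images"
y = (X[:, :32].mean(axis=1) > 0.5).astype(int)   # hidden rule the models must find

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature engineering: a domain expert decides regional brightness matters.
def engineer(imgs):
    return np.c_[imgs[:, :32].mean(axis=1), imgs[:, 32:].mean(axis=1)]

clf = LogisticRegression().fit(engineer(X_tr), y_tr)
print("hand-engineered features:", clf.score(engineer(X_te), y_te))

# Deep learning: feed raw pixels and let the network learn its own features.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
print("learned features:        ", net.fit(X_tr, y_tr).score(X_te, y_te))
```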

These four technologies, Siebel observes, present “powerful new capabilities and possibilities. But they also create significant new challenges and complexities for organizations, particularly in pulling them together into a cohesive technology platform.” Not surprisingly, “many organizations struggle to develop and deploy AI and IoT applications at scale and consequently never progress beyond experiments and prototypes.”

Digital Transformation: Implications For Healthcare

Digital transformation, Siebel asserts, will “improve human life.” How? Through “very early disease detection and diagnosis, genome-specific preventive care, extremely precise surgeries performed with the help of robots, on-demand and digital health care, AI-assisted diagnoses, and dramatically reduced costs of care.”

Skeptical about whether healthcare – characterized famously by Esther Dyson as a “calcified hairball” system of care – can be disrupted? Siebel’s rejoinder (cited multiple times) is that in January 2018, when Amazon, Berkshire Hathaway and JP Morgan Chase announced their intention to enter the market, “$30B of market capitalization was erased from the 10 largest U.S. healthcare companies” in a single day of trading. While these stocks recovered almost immediately, the market reaction, according to Siebel, emphasizes the industry’s vulnerability.

Cloud

While Siebel doesn’t offer specific examples of healthcare and the cloud, he shares his view that executives who less than a decade ago proclaimed “our data will never reside in the public cloud” – something I personally heard from a number of healthcare leaders even five years ago – are now delivering a very different message that is “equally clear and exclamatory: ‘…we have a cloud-first strategy. All new applications are being deployed in the cloud.  Existing applications will be migrating to the cloud. But understand, we have a multi-cloud strategy [to avoid vendor lock-in].’”  While healthcare was among the last industries to embrace the cloud, it seems many health organizations have finally gotten the message.

Big Data

Siebel highlights the potential value to precision medicine of being able to access “the medical histories and genome sequences of the U.S. population.” His point, it seems, is that “big data” thinking enables us to contemplate using the data of every person, rather than generalizing from a sample.  Actually acquiring anything approaching such a complete data collection, of course, is a non-trivial real-world challenge, as most in biopharma and healthcare recognize — and often lament. In biopharma, technical (as well as financial) limitations may stymie efforts to collect and subsequently analyze all possible information in human beings and other complex biological systems.

AI

Siebel is clearly taken by the potential of AI in healthcare, while acknowledging “the health care industry is just starting to unlock value from AI. Significant opportunities exist for health care companies to use machine learning to improve patient outcomes, predict chronic diseases, prevent addiction to opioids and other drugs, and improve disease coding accuracy.”

He suggests machine learning algorithms can be used “to predict the likelihood someone will have a heart attack, based on medical records and other data inputs – age, gender, occupation, geography, diet, exercise, ethnicity, family history, health history, and so on – for hundreds of thousands of patients who have suffered heart attacks and millions who have not.” (This again assumes it’s possible to get one’s hands on enough of the relevant data to train the algorithm. That’s profoundly difficult in today’s environment, beset by the problems of data interoperability, patient data hoarding by hospitals, proprietary EHRs that can’t/won’t talk substantively to each other, and an ecosystem of stakeholders who aren’t inclined to share data.)
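
With that caveat front and center, here’s a hedged sketch of what such a risk model might look like in miniature – the data below are synthetic stand-ins, not real medical records, and the features and coefficients are entirely invented for illustration.

```python
# Illustrative only: a heart-attack risk model trained on synthetic patient data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Invented stand-ins for a few of the inputs Siebel lists.
age = rng.uniform(30, 85, n)
exercise_hrs = rng.exponential(2.0, n)  # weekly exercise hours
family_history = rng.integers(0, 2, n)  # 1 = positive family history

# Synthetic outcome: risk rises with age and family history, falls with exercise.
logit = 0.06 * (age - 55) + 0.9 * family_history - 0.3 * exercise_hrs - 1.5
had_heart_attack = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, exercise_hrs, family_history])
X_train, X_test, y_train, y_test = train_test_split(X, had_heart_attack, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# The output is a probability, not a verdict -- clinicians still interpret it.
risk = model.predict_proba([[70, 0.5, 1]])[0, 1]
print(f"predicted risk (70 y/o, sedentary, positive family history): {risk:.0%}")
print("held-out accuracy:", model.score(X_test, y_test))
```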

Applications for deep learning in healthcare, according to Siebel, include “medical image diagnostics, automated drug discovery, disease prediction, bone-specific medical protocols, preventive medicine,” though additional detail isn’t provided.

Perhaps especially relevant to medical practitioners, Siebel suggests that “the ability to apply AI to all the data in a dataset” means that “there is no longer the need for an expert hypothesis of an event’s cause.  Instead the AI algorithm is able to learn the behavior of complex systems directly from data generated by those systems….The implications are significant…An experienced physician [in Siebel’s future world, presumably] is no longer required to predict the onset of diabetes in a patient.”  Instead, this information can be gleaned “from data by the computer – more quickly and with much greater accuracy.” I am aware of glimmers of progress in this area, which has been discussed for over a decade.

IoT

Siebel suggests that connected devices “give doctors the opportunity to track patient health remotely in order to improve health outcomes and reduce costs.  By harnessing all these data, IoT supports doctors in predicting risk factors for their patients.”  He notes that pacemakers “can be read remotely and can issue alarms to doctors and patients, warning if a heartbeat is irregular.” He reports that the “wearable industry has given people the ability to easily track all sorts of health-related metrics.” Combining wearable information with clinical data, he observes, “can create a holistic view of the patient, allowing doctors to deliver better care.”

So far so good, right? But Siebel isn’t through. “Soon,” he contends, “humans will have tens or hundreds of ultra-low-power computer wearables and implants continuously monitoring and regulating blood chemistry, blood pressure, pulse, temperature, and other metabolic signals. These devices will be able to connect via the internet to cloud-based services – such as medical diagnostic services – but will also have sufficient local computing and AI capabilities to collect and analyze data and make real-time decisions.”
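
As a toy illustration of this edge-plus-cloud pattern – every name, threshold, and reading below is invented, and the cloud call is just a stub – a wearable’s decision loop might look something like this:

```python
# Illustrative only: local real-time decisions, with aggregates synced to the cloud.
import random
import statistics

def read_pulse_sensor():
    """Stand-in for a real sensor read."""
    return random.gauss(72, 8)

def sync_to_cloud(summary):
    """Stub for a batched upload to a (hypothetical) cloud diagnostic service."""
    print("uploading summary:", summary)

window = []
for _ in range(300):  # simulate 300 readings
    bpm = read_pulse_sensor()
    window.append(bpm)

    # Local, real-time decision: no cloud round-trip needed to raise an alarm.
    if bpm > 120 or bpm < 40:
        print(f"ALERT: abnormal pulse {bpm:.0f} bpm -- notify patient and doctor")

    # Periodically push an aggregate, not the raw stream, to conserve power.
    if len(window) == 60:
        sync_to_cloud({"mean_bpm": round(statistics.mean(window), 1),
                       "max_bpm": round(max(window), 1)})
        window = []
```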

I’m not sure even most quantified selfers would embrace such a future; if anything, this vision seems to evoke folk singer-songwriter Arlo Guthrie’s memorable description of his military physical examination during the Vietnam War era, where “they was inspecting, injecting every single part of me, and they was leaving no part untouched.”

Siebel points out that large sets of IoT-generated data can “uncover insights and make predictions,” such as using “AI predictive analytics to find potential barriers to medication treatment and identify potential contraindications. This gives doctors the tools to more effectively support patients, improve outcomes, reduce relapse, and enhance quality of life.”

He continues, “Imagine pill bottles that track adherence to prescribed medications, alerting doctors and users when patients fail or forget to take their medication. Also in development are smart pills that can transmit information on vital signs after being ingested.” (I’m sure Otsuka can envision this quite clearly….)

Finally, if you’re not creeped out yet by this degree of monitoring, Siebel, in pointing out that “data generated everywhere through an organization can have value,” reports that today, “Insurance companies…work with mining and hospitality companies to add sensors to their workforces in order to detect anomalous physical movements that could, in turn, help predict worker injuries and avoid claims.”

In this vision of digital transformation, the future of both work and health apparently involves, and certainly aspires to, ever-more detailed monitoring and assessment of every facet of existence. It’s a vision that sounds like total, continuous surveillance.

Not only is this approach exceedingly, absurdly, invasive, but it may not even deliver the cost-savings Siebel repeatedly promises, as my Tech Tonics co-host Lisa Suennen points out:

“Tech can only reduce healthcare costs when financial interests are aligned,” Suennen reminds us.  “Digital products for early diagnosis can just as easily lead to excessive testing and treatment when the impetus is to increase utilization (which increases cost).  It is true that technology such as AI and robotics have the potential to lead to cost-reductions in healthcare, but there is far more to it than technology alone.  As with all technology, it is a tool, not a solution.  When the solution one is solving for is to increase revenue, the tool can work just as well in the hands of someone who benefits from increased cost.”

In short, Siebel’s perspective on the ideal future state of healthcare feels both dissonant (I’m not sure most people want to be constantly monitored for failure, like IoT-enabled equipment under a technician’s continuous watch) and elusive (given the challenges of gathering even modest amounts of integrated health data in one place); moreover, as Suennen argues, it may not even deliver the beneficial economics Siebel anticipates.

Digital Transformation: Implications For Organizational Change

In contrast, Siebel’s observations on barriers for organizations contemplating digital transformations seem thoughtful and highly relevant, particularly regarding data, people, and prioritization.

Data

Siebel’s premise is that “successful digital transformation hinges critically on an organization’s ability to extract value from big data,” and a key initial challenge is how to organize all the data in the first place.  But the good news, argues Siebel, is that large established companies are starting on their journey with one key advantage: they’re already sitting on a lot of data (though unlocking value from these data might be another story).

Argues Siebel, “incumbent organizations have a major advantage over startups and new entrants from other sectors. Incumbents have already amassed a large amount of historical data, and their sizable customer bases and scales of operations are ongoing sources of new data.”

He acknowledges, “Of course, there remain the considerable challenges of accessing, unifying, and extracting value from all these data.  But incumbents begin with a significant head start.”

The challenge is what to do with all these legacy data.  The temptation is to put it all in one place, a so-called data lake or data swamp. Not smart, Siebel argues.

“Storing large amounts of disparate data by putting it all in one infrastructure location does not reduce data complexity any more than letting data sit in siloed enterprise systems. For AI applications to extract value from disparate data sets typically requires significant manipulation such as normalizing and deduplicating data,” Siebel observes, adding that the key big data challenge “is to represent all existing data as a unified, federated image.”
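
A miniature sketch of that unglamorous normalization and deduplication work – my own illustration, not an example from the book – might look like this: two record systems with slightly different conventions, reconciled into one view.

```python
# Illustrative only: unifying two toy record systems with mismatched conventions.
import pandas as pd

ehr = pd.DataFrame({
    "patient": ["Ada Lovelace", "GRACE HOPPER"],
    "dob": ["1815-12-10", "1906-12-09"],
    "dx": ["hypertension", "arrhythmia"],
})
claims = pd.DataFrame({
    "patient": ["ada lovelace", "Grace Hopper"],
    "dob": ["12/10/1815", "12/09/1906"],
    "paid": [120.0, 340.0],
})

def normalize(df):
    out = df.copy()
    out["patient"] = out["patient"].str.strip().str.title()  # unify name casing
    out["dob"] = pd.to_datetime(out["dob"])                  # unify date formats
    return out

# Join on the normalized keys, then deduplicate -- the unglamorous prerequisite
# to anything resembling a "unified, federated image" of the data.
unified = (normalize(ehr)
           .merge(normalize(claims), on=["patient", "dob"], how="outer")
           .drop_duplicates(subset=["patient", "dob"]))
print(unified)
```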

People

To operate in this brave new world requires comfort with both the data and the emerging ways of thinking about data. Writes Siebel: “Generating value requires individuals in the enterprise who are able to understand all these data, comprehend the IT infrastructure used to support these data, and then relate the data sets to business cases and value drivers. The resulting complexity is substantial.”

Interestingly, and (based on my experiences over the years) perceptively, Siebel calls out what he describes as a common mistake: overconfident CIOs who mistakenly (in his view) believe they can assemble the required data and analytics infrastructure on their own, DIY-style. Siebel says he has observed this sort of misplaced confidence since his time at Oracle, selling enterprise application software, when he realized the biggest barrier wasn’t competitors, but the CIO who wanted to solve the problem DIY – and who, according to Siebel, generally failed. (Again: take with a grain of salt, given Siebel’s obvious interest in selling enterprise software.)

Siebel notes that companies obviously require more than just data experts – they also need “translators” who “can bridge the divide between AI practitioners and the business.  [Such translators] understand enough about management to guide and harness AI talent effectively, and they understand enough about AI to ensure algorithms are properly integrated into business practices.”

But what companies seem to need most of all, according to Siebel, is a ton of consultants – or as he politely refers to them, partners: “In a digitally transforming world,” he says, “partners play a bigger role than in the past.” He explicitly writes that companies should engage management consultants for strategy, software partners for technology, professional services firms to build apps, and change management partners to get people to adopt the new tech.  Suddenly, you can begin to understand why “digital transformation” is so broadly embraced: it’s like an Oprah giveaway, but for consultants (YOU get more consulting work, and YOU get more consulting work, and YOU get more consulting work…).

Priorities

While Siebel’s advice regarding consultants feels a bit self-serving, his advice about prioritization seems spot-on, and certainly aligns with what I’ve been suggesting, as well as with the advice that experts I admire, like Jim Manzi, seem to be offering.

Above all, says Siebel, focus on business needs, not abstract, highfalutin aims. “Work incrementally to get wins and capture business value,” he emphasizes. Much as Vizzini, in The Princess Bride, famously advises “never get involved in a land war in Asia,” Siebel counsels (perhaps for similar reasons): “Do not get enmeshed in endless and complicated approaches to unify data. Build use cases that generate measurable economic benefit first and solve the IT challenges later.” He also suggests adopting a “phased approach to projects,” seeking opportunities to “deliver demonstrable ROI one step at a time, in less than a year.”

He notes that “Many organizations get hopelessly mired in complex ‘data lake’ projects that drag on for years at great expense and yield little or no value.” There are many examples, he says, of companies wasting big money this way – including several that spent years working with “outside consultants to build a unified data model, only to see no results at all.”

While the use-case-first approach “may sound like heresy to a CIO,” Siebel says that it “allows for focus on the value drivers.”  The emphasis of a digital transformation strategy, he argues, should be “creating and capturing economic value.”  Fulfilling this value mandate requires a thoughtful roadmap and prioritization, “identifying and prioritizing functions or units that can benefit most from transformation.”

Finally, counsels Siebel, use common sense. “If a project does not seem to make sense, it’s because it doesn’t make sense. If it appears incomprehensible, it is likely impossible. If you do not personally understand it, don’t do it.”

Figuring out how to apply this admonition to use common sense to areas like healthcare and biopharma – where the benefits touted by technologists often don’t seem sensible (as both Derek Lowe and I observed this week), but in some cases, could be truly transformative — represents both the challenge and the opportunity of our moment.

15
Jan
2020

Atomwise and EQRx: Two Contrasting Strategies for the R&D Inefficiency Problem

David Shaywitz

Pharma innovation expert Bernard Munos captures the inherent inefficiency of drug development with two fascinating statistics he recently shared with me. 

First, for large pharmas, the average cost of developing a new drug (simply based on the total R&D costs divided by the total number of new drugs approved for sale) works out to about $5B per drug. It’s an astronomical number, and one that keeps growing to a worrisome degree. The Munos analysis encompasses both the cost of failures and what he calls the cost of scale. In contrast, the actual cost to get a single drug approved for smaller companies – an analysis that omits the cost of failure because it doesn’t look at the many small companies that tried to advance drugs and failed – works out to a bit over half a billion dollars, or about 10-fold less.

One implication of these data is that in large pharmas, drug discovery seems terribly inefficient, with huge amounts of money going into products that never become approved drugs. Another implication, says Munos, is that large pharmas are, theoretically, quite vulnerable to disruption, since they “need every day of their patent life to recover that cost and fund an ever-growing R&D budget that keeps producing the same output.” That’s another way of saying their existing operating model requires extracting all available revenue from existing approved products.

It hasn’t escaped anyone’s notice that it would behoove pharma to make R&D more efficient, as even small increases in the rate of success at any stage would be expected to translate into improved overall R&D efficiency. However, achieving such efficiency gains has remained remarkably elusive, despite the hundreds of millions of dollars that have been spent on management consultants, and despite the execution of continuously refreshed restructuring initiatives generally driven by said consultants.

Two very different companies making news at JPM20 say they have an approach that could make a dent in the R&D statistics: Atomwise, the AI-for-drug-discovery company led by Abraham Heifets, and EQRx, former VC Alexis Borisy’s ultra-buzzy, on-Zeitgeist fast-follower newco. The two are focused on dramatically different aspects of drug development, yet they share a commonality in approach that’s worth a closer look.

Atomwise

San Francisco-based Atomwise, founded in 2012, seeks to use AI to accelerate the identification of promising molecular compounds, with a particular emphasis on drugging the undruggable.   In the last week, they’ve announced a new partnership with the accelerator BioMotiv, and the extension of a 2017 collaboration with Bayer.

Atomwise’s thesis is that while the overall probability of success (POS) for any early stage compound is quite low, the actual POS is naturally much higher if you remove a key aspect of the risk; one way of accomplishing this is by targeting something you are certain will have an impact on disease, if only you could access it.  The thinking is that often, new disease targets represent, at best, hopeful, educated guesses, but still involve a huge amount of biological risk – as well as the many other risks (such as safety, tolerability, clinical efficacy) associated with getting a new chemical entity all the way to the point of FDA approval. 
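
A back-of-envelope way to see the logic – with entirely illustrative stage probabilities of my own choosing, not Atomwise’s numbers – is to treat the overall probability of success (POS) as a product of independent stage risks, and watch what happens when the biology term drops out:

```python
# Illustrative only: how removing target-biology risk lifts overall POS.
p_biology  = 0.35  # probability the target actually matters in disease (invented)
p_safety   = 0.60  # acceptable safety/tolerability (invented)
p_efficacy = 0.55  # clinical efficacy demonstrated (invented)
p_other    = 0.85  # CMC, regulatory, and everything else (invented)

pos_typical  = p_biology * p_safety * p_efficacy * p_other
pos_derisked = 1.0 * p_safety * p_efficacy * p_other  # biology "known" to matter

print(f"typical new-target POS:  {pos_typical:.0%}")   # ~10%, roughly industry lore
print(f"biology-derisked POS:    {pos_derisked:.0%}")  # ~28% under these assumptions
```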

Heifets argues that his platform, like CRISPR, is valuable precisely because it enables drug developers to physiologically manipulate established targets in a way that was previously unachievable.  As he writes, “the excitement around CRISPR, protein degradation, and RNA-targeting techniques is justified because these techniques offer us the chance to drug fundamentally new targets that were not otherwise attainable by other methods,” adding “The future of drug discovery is in using new technologies to drug the undruggable.”

Munos, for his part, worries that the targets Atomwise is attacking are not as de-risked as the company may assume. “There is no such thing as a validated, undruggable target,” he notes, explaining “the only validation that can be trusted is that which comes with a drug approval. Before that, targets may be interesting or promising, but they are not validated.”  He adds, “Most of the clinical trials that fail aim at targets that are thought to be validated.  Yet toxicity and insufficient efficacy are the most common causes of trial failure.” Munos’s comments echo the old pharma saw that the definition of a validated target is one where there’s already a drug with $1 billion in sales.

EQRx

Cambridge, Mass.-based EQRx, announced this week, represents a response to the problem of costly drugs. Borisy, a former partner at Third Rock Ventures, says he sees a market opportunity in pursuing established targets and essentially undercutting pricey first-to-market products. His thought is that by focusing on established mechanisms, you can make new drugs for much less money, because you anticipate a far lower failure rate (you know the target is both relevant and targetable) than the typical innovator company faces. This first requires making a new chemical entity that eludes the innovator’s original patents. Then, presumably, EQRx can also design more efficient clinical studies by leaning on established precedents. 

You can think of Borisy’s approach as “pre-generics,” perhaps (with apologies to the pre-cogs of Minority Report), although he aspires to make drugs that are somewhat better than first-in-class products. The economic argument is that his reduced costs and development time will enable him to get new molecules onto the market before the first-in-class product goes generic, and to sell each fast-follower at an aggressively low price that still allows for significant gross profit margins. Borisy expects to be able to do this for multiple products. As Luke described it earlier this week, “the idea at EQRx is to use the bursting knowledge of biological targets and new treatment modalities to make fast-follower patented drugs that are sold at radically cheaper prices – maybe 50, 60, 70 percent cheaper than others in a given class.”

While noting the profound transformative potential EQRx would have if successful – cutting deeply into pharma’s anticipated revenue over the patent life of an approved drug – Munos nevertheless remains skeptical:

“Given the long lead time of drug R&D, in order to reach the market before the pioneer drug becomes generic, the ‘fast-follower’ must get going long before the drug it follows gets approved. And if the lead drug stumbles, so does the fast-follower. EQRx apparently thinks it can tweak the fast-follower model by waiting until a drug has been approved — thus validating its mechanism — before it gets going and still reach the market long enough before the lead drug loses its patent. This would require an improvement in the speed of drug R&D that has never been seen before despite pharma’s decades of relentless efforts at process improvement (e.g., six sigma). It would be a monumental achievement.”

A Shared Focus on De-risking

While Atomwise and EQRx are focused on very different problems, both are leveraging a similar strategy: improve the overall probability of success by attacking something that’s already (somewhat) de-risked.  For Atomwise, this means creating a new compound for an established target that no one’s been able to drug, and drugging it for the first time; for EQRx, it means creating a new molecule for an important target that’s already been drugged, and doing it faster/better/cheaper. 

Each is betting that while the overall economics around new drug development are dispiriting, the value proposition for a candidate drug that’s derisked can be far more promising.  Both companies, as Munos points out, face real challenges as they strive to deliver at the scale necessary to make the still-difficult math work. 

In some ways, Atomwise may have the easier lift.  Even if only a few compounds are ultimately successful, the individual drugs could support the growth of the company (assuming the company retains adequate economics in the products, which will apparently be developed by partners – this is a critical consideration). Atomwise could succeed even if the platform doesn’t meaningfully alter the grim R&D statistics for the industry as a whole. 

EQRx has not gone into significant technical detail about how, exactly, it will go about achieving its needed gains in speed and cost. But whatever technologies it brings to bear will have to be remarkable to achieve its founding promise. EQRx has to deliver multiple fast-followers through all phases of compound development and clinical testing, with enough speed, enough economy, and a high enough success rate. That’s a very high hurdle, though also a worthy ambition.

10
Jan
2020

Entering JPM20 With a Grounded, Yet Hopeful, View of Health Tech

David Shaywitz

Health tech seems balanced precariously between excessive optimism and excessive skepticism, between the promise that emerging technology is poised to disrupt health like it has so many other areas, and the painful recognition that many idealistic technologists misunderstood both the scientific and human dimensions of the inordinately complex problems to be solved in both health care services and the development of novel therapeutics.  

It seems like a healthy, motivating tension, provided we can muster both the mental clarity to resist the hype and the intestinal fortitude to outlast the despair. 

Technology takes a long time to work through, and to figure out how to implement effectively.  You can see this in biotech, as I wrote after JPM2018, and discussed recently: today, most leading pharmaceutical companies are aggressively investing in gene therapy and cell therapy, approaches that seemed like fantastical (astounding?) science fiction for years, before a tractable path forward seemed to crystallize before us in just the last decade. Even today, in these areas, we’re still pretty early in the implementation phase; these approaches have demonstrated potential, but generally remain remarkably difficult to execute at scale and successfully commercialize.

What remains clear are the same imperatives that have motivated healthcare innovators for years: the urgent need for profound improvement in the way we practice medicine and discover and develop novel therapeutics.

Clinical Medicine: Crying Out For Improvement

Clinical medicine, as a leading oncologist recently explained to me, remains as much of an art as a science. The aspiration of a learning healthcare system, a perennial talking point, remains an elusive goal; even today, with all our data-gathering and analytic capabilities, so much relevant information is never adequately captured, studied, and fed forward to help the next patient. We need to do a much better job of leveraging the volume of clinical experience to accelerate learning and identify improved approaches to care. Such improvements – perhaps incremental individually, but transformative in the aggregate – could elevate care that is still largely driven by eminence, intuition, and a litany of cognitive biases. 

At the same time, there is also an essential role in medicine for experience and intuition: medicine is the defining example of “fractionated expertise.” For those unfamiliar with the jargon, this is where professionals exhibit demonstrable expertise in some of their activities but not others; I’ve written about this here and here.

The elusive challenge in medicine is figuring out how to leverage data without (further) degrading what I continue to believe many patients, especially those with serious and/or chronic conditions, still want (and certainly deserve) from their doctors: an authentic, human relationship, highly attuned to individual emotional subtlety.  Such physicians partner with patients in a way that’s responsive to the complexity of their needs – rather than just acting on what a coarse algorithm might spit out based on population-level data.  The goal is developing the data and the doctors together, so that we continue to have empathic, inquisitive clinicians with the scientific sophistication to understand the patient’s unique illness – clinicians driven to go to the next level, accessing the sort of ready information that can help them best tailor treatments to their patients.

Drug Discovery & Development: Crying Out For Improvement

Meanwhile, drug discovery and development seems to be as difficult, and capricious, as ever.  Despite the many highly touted advances in biological technology, including the ability to engineer therapeutics with greater intentionality (see here), the failure rate remains staggering. No technology has come along to dramatically improve upon the painful reality that only about one out of 10 drug candidates entering clinical trials (already a steep hurdle) is able to successfully run the gauntlet, and emerge as an approved, commercial product that can be prescribed for patients. Leaders of biomedical R&D teams, appropriately, still regard it as a miracle when a novel drug actually makes it all the way to regulatory approval. Failures at all stages of development continue to abound, challenges I’ve discussed in this space (here). 

Every aspect of this process cries out for improvement: from figuring out how to precisely target different conditions at a molecular level, to developing suitable candidate molecules and intelligent combinations, to precisely matching these candidate therapeutics with the patients most likely to benefit, to identifying these patients and efficiently conducting clinical studies, to – perhaps most importantly, as I discussed in a 2019 Clinical Pharmacology and Therapeutics commentary – understanding how approved drugs are actually functioning in the real world, and learning how to improve and optimize effectiveness. 

Given all the concerns about drug prices, there is an urgent need to figure out how to do R&D far more efficiently, or confront the possibility of a devastating slowdown in biomedical innovation if investors decide the rewards from the occasional, rare success no longer justify the high-risk, long-term investment required, and take their dollars to dog food, ad-tech, scooters, or some other less consequential domain.

A definitional question facing pharma companies as they contemplate digital and data science is whether or not to embrace “digital exceptionalism.” This view posits that digital and data approaches are sufficiently distinct that they require a separate locus of expertise. For example, consultancies sending biopharma companies off on a “digital transformation journey” often position the appointment of a chief digital/data R&D officer as an important milestone.  Not everyone thinks it should be.  As one tech expert with extensive pharma experience recently explained to me, “the world, the science and the market are evolving.  If the core technologies are truly quant/technical, the head quant should be the CSO.  If not, manage it traditionally via biostats, biomedical informatics and so on,” adding “Digital tools are just tools.”  In other words, just as you wouldn’t have a “Chief PCR Officer,” does it make sense to have a chief digital officer, and to consider digital/data as a separate and distinct organizational capability? 

On the other hand, you could argue, quite reasonably, that while ultimately digital and data capabilities will be seamlessly integrated, right now, these approaches tend not to be either familiar or intuitive; thus having a core group comfortable with these approaches represents an important and useful temporizing measure.

From “Data Science” to “Science”

Ultimately, as I recently discussed, digital and data science will have the greatest impact when these methods permeate the way biomedical science is done.  The good news here is that data science is capturing the interest of undergraduates, and beyond.  My middle school daughter – in a California public school – recently devised a small data science-type study for a class science project, receiving support and encouragement from her well-informed teacher.  I imagine that as data science becomes inextricably part of more and more scientific domains, learning about it will become as routine as learning about the Krebs cycle and molecular biology, and as familiar to tomorrow’s high school students as squinting at plankton under a microscope or dissecting an unfortunate frog was to students in my generation.

Increasingly, we are likely to think about computation in health in the routine way we think about it in other domains.  A recent, characteristically fascinating Ben Thompson column discussed how to think about a technology becoming pervasive.  He noted that at the turn of the 20th century, there was an explosion of new American car companies: 233 were founded between 1900 and 1909, and an additional 168 in the decade after that.  However, the number of new entrants then crashed precipitously, and the big three American automakers of the 1920s – GM, Ford, Chrysler – retained their position of dominance for over 50 years. 

Critically, as he points out, “Just because the proliferation of new car companies ground to a halt, though, does not mean that the impact of the car slowed in the slightest: indeed, it was primarily the second half of the century where the true impact of the automobile was felt in everything from the development of suburbs to big box retailers and everything in-between. Cars were the foundation of society’s transformation, but not necessarily car companies.” (emphasis added)

The interesting part is that Thompson (perhaps controversially) argues that “today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started.”

He adds that consumer startups take the presence of Microsoft, Apple, Google, and Amazon (MAGA?) as “an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.”

Perhaps it’s not too much of a stretch to suggest we’re at a similar place in healthcare, where key aspects of the computational infrastructure can now be thought of as a given (even though improvements will of course continue). Rather than wait for some future magic tech to descend from the sky (or Silicon Valley) deus-ex-machina style and solve all our healthcare challenges, we need to embrace the imperfect but exceptionally powerful technologies of today, and really focus on applying them creatively and pragmatically both to care delivery and to pharmaceutical research and development. 

Hopefully, this year’s JPM health tech discussions will focus less on audacious future promises about how technology is poised to disrupt/eat/transform healthcare, and more on concrete examples of how emerging technologies are meaningfully engaging with care providers and drug developers to deliver tangible benefits to real-world users. 

Dazzling only the technologists who are developing the technology, while VC backers proclaim its historical inevitability, feels so last decade — and perhaps just a tad onanistic.

9
Jan
2020

Sanofi’s New CEO Captures Pharma’s Grounded View of Health Tech

David Shaywitz

Since taking over as Sanofi’s CEO in September, Paul Hudson has been blunt in his assessment of health technology decisions, and indecisions, made by previous management.

Early in his tenure, Hudson took square aim at his company’s once-heralded $500 million collaboration with Verily on Onduo. The partnership, started in 2016, was intended to help diabetics better manage their condition; it has now been restructured, with Sanofi’s day-to-day operational involvement significantly pared back.

“It was a determined effort to get into the ecommerce component around diabetes and to try and build on the customer relationship with Verily,” Hudson said at a presentation in December coinciding with his first 100 days as CEO.

Paul Hudson, CEO, Sanofi

He went on to explain the reasoning for the decision:

“It’s a much harder nut to crack. It’s a much longer process. And whilst we’re excited about the work being done at Onduo, I think we were over-invested. So we’ve stepped back. We’re still an investor … but we won’t put any more operational expense in above where we are because we have other things to do with the investment.”

In case there was any confusion whether this repositioning simply reflected Sanofi’s retreat from diabetes, Hudson published a commentary in Fortune this week that clearly describes his stance on health tech more generally. It’s an unvarnished, pragmatic vision that will not surprise regular readers of this column, but is nonetheless a welcome public perspective from an industry leader, acknowledging as it does the current state of affairs at the intersection of tech and drug discovery.

While highlighting the great potential of “the transformative power of digital technology,” and acknowledging that pharma “lags behind other highly regulated industries,” Hudson then offers an unusually grounded perspective.

For starters, he invokes a version of the advice med students will remember from House of God (“At a cardiac arrest, the first procedure is to take your own pulse”), advising pharma companies to “pause and develop a strategic vision for adopting new tech.” 

His distinctly unsexy recommendations include:

  • the need to “prioritize data management if we want to get the most out of our AI investments”
  • “organizing interoperable data pools from which we can pull out patterns and trends”
  • the use of cloud-based data systems to “streamline regulatory submissions by using a common data storage platform.”

After offering somewhat vague and familiar recommendations about culture (learning from failure, figuring out how to engage more effectively with automation technology like aliquoting robots), Hudson returns in full voice to his anti-hype message.

 “Companies are too often rushing to appear to be ahead of the curve, pursuing bold partnerships and investing in ‘trending’ technologies that are undeniably impressive but aren’t necessarily addressing critical medical needs,” he wrote. Hudson holds his fire on Google, but explicitly calls out an Apple Watch study that lacked a control arm, citing a critique by Larry Husten in STAT.

In case anyone missed his point, Hudson observes, “It’s easy to succumb to the temptation to partner with the company that will build us the splashy tool, rather than work with the company whose outcomes align with our own objectives and whose capabilities fill in our gaps. But some businesses are making the right long-term choice.” 

He applauds “using analytics and A.I. to match patients to clinical trials, potentially reducing the time to find patients from many months to days or even minutes,” noting such efforts “may not seem like a breakthrough innovation, but it is a critical contribution to accelerating the process of getting medicines to patients.”

Not only did Hudson’s message resonate with me, it’s consistent with what pharma R&D executives have been saying behind the scenes for the last several years. A few brave tech executives, like Jim Manzi, have been saying this more recently (listen here) – but Hudson’s willingness to offer such a grounded perspective in some ways gives permission for others in the industry to engage tech more realistically and usefully, without the risk of appearing to be a Luddite or curmudgeon.

When a leader of a company like Sanofi stops spouting platitudes in public about digital transformation, it puts the brakes on an unproductive cycle of interactions driven by hype. No longer should we expect untethered promises buttressed by dubious partnerships with marquee tech brands, with minimal internal buy-in from the researchers in the trenches actually tasked with discovering and developing impactful new medicines.

What’s really remarkable is just how far Sanofi, under Hudson, has stepped back from leaning on tech as either a proxy or a vehicle for innovation. In a December press release pegged to the 100 Days announcements, entitled “Sanofi CEO unveils new strategy to drive innovation and growth,” there are exactly zero mentions of either “digital” or “technology.” The only health tech mention I could find in the entire release was a collaboration with Aetion around real-world data (“an enterprise-wide collaboration that will integrate Sanofi’s real-world data platform, DARWIN, with the Aetion Evidence Platform® to advance more efficient use of real-world evidence,” the release said.)

(For more on Aetion, see this piece from 2018 featuring co-founder Sebastian Schneeweiss, and our recent Tech Tonics interview with CEO Carolyn Magill; for more on real-world evidence, see this 2018 overview and this 2019 commentary).

Perhaps Sanofi’s apparent pull-back from tech is an overreaction, but I tend to see it as a useful and much-needed recalibration, emphasizing the prioritization of palpable impact versus championing tech for tech’s sake. 

This is a perspective that startups aspiring to sell into the pharma ecosystem will do well to understand. The gist is pretty simple. If you want to succeed with health tech for pharma, buzzwords aren’t going to cut it – tangible impact is required.