Techlash Offers Health And Tech Opportunity To Reset Relationship, Rediscover Mutual Respect

David Shaywitz

Technology companies are experiencing a staggering reversal of reputational (though not financial) fortune; their stature seems reduced with each successive news cycle.  Gone is the halo many tech companies once enjoyed.

The implicit (and often explicit) assumption that tech innovation inevitably makes the world a better place has been replaced by real concerns that the picture may be far more mixed, a skeptical reaction that’s been called the “techlash.” 

Profound privacy worries abound — not surprising, given that the underlying business model of several leading companies is hoovering up all your personal information and using these data to precisely target ads. While behavior modification has always been the goal of advertising, we are only just beginning to see what the world looks like when precision-targeted digital advertising (of products, political campaigns, etc.) becomes strikingly effective at influencing individual and group behavior.  (If only precision medicine were nearly as effective!)

The idea that we should aspire to measure everything everywhere about everyone all the time, as some digital transformers cheerfully advocate, seems an increasingly questionable goal. The surveillance aspects, we now understand, are overtly invasive, while the benefits to the surveilled seem uncertain at best. 

The Wall Street Journal recently profiled a lobbyist for Oracle with a knack for providing real-time demonstrations of just how much data Oracle rivals like Google routinely collect.  During one meeting with Australian regulators, according to the Journal, an official noted that the Google operating system on a phone “was running for 13 minutes while we had been meeting, and had talked to Google at least 418 times….Second by second sensor readings, going straight to Google.”

One can’t help but wonder: What are Google and other data-hungry companies doing with our information – and who else may be accessing it? 

The incredibly detailed information gathered by large tech companies enables them to sell precisely targeted ads. The ability they have to measure engagement so effectively makes it easier to iteratively optimize such messaging, an approach that has now become an important component of political campaigns.  Deepa Seetharaman, writing in the Journal, recently described how the Trump campaign learned how to leverage social media remarkably effectively in 2016 (this episode of The Journal podcast especially recommended), and a captivating Atlantic piece “The Billion Dollar Disinformation Campaign to Re-elect the President” by McKay Coppins suggests these approaches will be an even larger factor in the 2020 campaign (as Coppins discusses in this episode of The Bulwark podcast).

While platforms selling ads clearly benefit by capturing rich data from apps and devices, the upside for users is debatable – even when users are the ones capturing the information for themselves. 

Consider the experience of many quantified selfers, such as former Wired editor Chris Anderson.  “After many years of self-tracking everything (activity, work, sleep),” he tweeted in 2016, “I’ve decided it’s ~pointless. No non-obvious lessons or incentives :(”.

We have also seen how technology, even where successful along some dimensions, can generate significant unanticipated consequences, including exacerbating the very problems it promised to solve.  One conspicuous example: ride-sharing apps like Uber and Lyft promised to reduce traffic burden, yet data suggest these apps have actually made traffic worse, according to a fascinating recent article by Eliot Brown in the Journal. Brown also cites Facebook and Juul as other “solutions” that seem to have created serious new problems.

Hype, Disappointment, Resolution: All Part Of Tech Adoption Cycle

For those seeking to bring technology to health, the concerns surfaced by the techlash are not surprising, considering the historical arc of technology adoption, and need not stanch progress, if we respond thoughtfully rather than reflexively to these developments.

As Carlota Perez has described (see here), the path of novel technologies from initial emergence to widespread adoption seems to follow a predictable contour, which includes considerable hype and energy towards the start, as it becomes clear that there’s likely “something” to the technology but it’s not yet clear what this “something” is or might be. 

The British economist Christopher Freeman, in his introduction to Perez’s 2002 book, wrote, “the uncertainty which inevitably accompanies such revolutionary developments, means that many of the early expectations will be disappointed, leading to the collapse of bubbles created by financial speculation as well as technological euphoria or ‘irrational exuberance.’” 

In the Perez scheme, such disappointment regularly precedes the point at which the technology starts to become stably integrated into institutions and gains widespread acceptance. It’s likely that we’ll see the same phenomenon with the application to healthcare of the four technology vectors entrepreneur Tom Siebel associates with digital transformation: cloud, big data, AI, and the internet of things (IoT), as I recently discussed.

Arguably the best news about the increasingly pervasive concerns about technology is the grounded and often difficult dialogue they force about the full range of consequences. In today’s more skeptical climate, it’s acceptable to view new technologies through a lens of healthy skepticism without being tarred as a mere “Luddite,” a non-“it-getter,” or an exemplar of the sort of stodgy old-school thinking that’s about to be disrupted and displaced.

Needed: Authentic Mutual Respect

What we desperately need is authentic engagement between health and tech, from a position of mutual respect.  This theme has emerged from a number of recent conversations I’ve had, including just this past Friday, at a fireside chat I conducted in Philadelphia at the 2020 Wharton Health Care Conference with FDA Deputy Commissioner Amy Abernethy, who has considerable experience bringing these worlds together – in academia, industry, and now government.

David Shaywitz in conversation with FDA deputy commissioner Amy Abernethy at the Wharton Healthcare Conference, Feb. 14, 2020.

My key takeaway from Abernethy – and others, including Recursion CEO Chris Gibson and Brandon Allgood, CTO of Numerate (now Integral Health) (the three of us discussed effective health/tech collaboration in January in San Francisco at a pre-JPM panel I moderated at the East/West CEO Conference), and J&J’s Sean Khozin (listen to our recent Tech Tonics podcast) – is the urgent need to get the working relationship between health talent and tech talent right, ideally by avoiding a “vendor mentality,” and ensuring the key stakeholders are on equal footing.

Getting health and tech experts collaborating effectively remains a critical stuck point for successful integration of health and technology – and arguably may represent an area where startups have a significant advantage.  Generally speaking, in tech companies, engineers are dominant, and many non-engineers, including physicians, tend to feel like second-class citizens; I hear this all the time from physicians and life scientists connected with large tech companies, as well as tech-driven health startups.

In many hospitals, the situation tends to be reversed, with physicians dominant in the hierarchy, and tech experts relegated to the role of service providers.  In pharma R&D, drug hunters and developers tend to rule the roost; the key data experts, statisticians, while acknowledged to be essential, are too often treated as service functions, called in to crunch numbers for sample size calculations, say, then sent back to their cubicles. (Of course there are exceptions, but this is the recurrent power dynamic I’ve seen throughout the industry.)

When one tribe is clearly dominant, it creates conditions that make it difficult to have thoughtful, dynamic back-and-forth between professionals of complementary skills. Without true mutual respect for what people bring to the table, it’s hard to imagine tackling effectively the highly complex problems at the nexus of health and tech. 

Solving this problem isn’t going to be easy, given the reluctance of those with power to relinquish it, but mutual authentic recognition of the value of both perspectives will go a long way. 

In our conversation at Wharton, Abernethy described an approach from Flatiron that she thought was effective: hackathons (internal software development competitions) where each team had to include not just programmers but also domain experts, whose knowledge was critical for success. 

The idea, more broadly, is that as people work together towards shared goals, their distinct skills become far more appreciated, and collaboration far more valued.  Right now, this is most apparent in startups (where the success of each person vitally depends upon everyone pulling in the same direction), while large organizations tend to be characterized by lip service to collaboration (especially when it’s a C-suite directive – see here, here), but intrinsic resistance to change, especially change that seems threatening or pointless.  From the perspective of many entrenched stakeholders in pharma and in healthcare systems, the change that “digital transformation” champions seems to offer a bit of both.

On the other hand, the folks I know in pharma R&D are there because they want to create impactful medicines to help patients; the people I know in care systems truly aspire to help patients. But all these experienced stakeholders – understandably – are very skeptical about grandiose promises; health and disease are inordinately complicated, and durably improving either drug discovery or care delivery is far more challenging than those outside these systems tend to recognize or acknowledge.

Progress at the intersection of health and technology is achievable, but will also require established stakeholders and technology developers to collaborate effectively and respectfully, and evolve practical, tangible solutions.  As sociologist Robert Nisbet presciently wrote over a half century ago, “human beings do not come together to be together: they come to do something together.”  In this spirit, successful partnership between health and tech experts seems most likely to occur not just by co-locating such collaborators in a building or on a team, but rather, when the concerted engagement of both is required to accomplish something both meaningful and difficult.

I’m confident we will begin to see real success stories here – there are simply too many real opportunities for just this sort of collaboration.  The interesting question is whether the biggest strides will come from legacy organizations that have managed to “digitally transform,” despite the huge political hurdles, or from startups that have managed to achieve traction despite the lack of legacy relationships and established capabilities. I would love to see both – and at this point, would be happy to see either.


As Scientists, It’s Our Duty to Speak Up as the 2020 Election Nears

Jessica Sagers, head of engagement, RA Capital

As a cellular biologist deciding between staying in academia and taking a job in biotech, I was conditioned to ignore anything that sounded like “a problem for businesspeople.” I grew the cells; I didn’t price the drugs. But as I transitioned from bench research to a career in biotech investing, I realized that I’d been wearing blinders.

Our experiments are important, but the world we create for ourselves out of petri dishes and manuscripts can serve to distract us from the reality that we are accountable for the impact of our work, at least if we want any of it to matter. So it’s our job to care what the public thinks about science and technology.

The public foots the bill for much of the academic research that fuels basic discoveries. Maintaining government funding for biomedical innovation through organizations like the National Institutes of Health is, luckily, one area where both political parties usually agree. But misconceptions about what the NIH does—and doesn’t do—are shocking.

I’ve met with Congresspeople who genuinely believe that NIH researchers spend their time inventing drugs, which companies then somehow purchase and profit from, essentially selling taxpayer-funded goods back to consumers at exorbitant costs. Of course that sounds unfair! And anyone the NIH has ever funded knows it isn’t true. During my time as an NIH investigator, my group was excited to investigate a previously unknown role for a signaling pathway involved in tumor growth. The fact that the most effective dose of a molecule we tested to target this pathway was nearly 20X the maximum concentration achievable in human serum was barely a footnote in our published paper.

I now work at a biotech investment fund, helping to start companies that will bring therapies to patients. I know from personal experience that the questions I answered as a basic scientist are different than the ones biotech companies work to solve—and can explain why that difference is important.

Academic investigators are responsible for taking the first steps toward detangling the complex processes that make up our bodies and our world. And then it’s up to the private sector to figure out how to turn those concepts into products that will actually reach patients and improve human health—by inventing a whole new drug, reformulating a toxic molecule that emerged from an academic lab into one that can be taken safely, dreaming up a release mechanism that restricts a compound to only where it’s needed in the body, ensuring a medicine is safe for children to consume, and by physically creating, testing, packaging, shipping, and manufacturing the medicines we take today.

It’s easy to forget that the jobs that await scientists in industry are entirely dependent on the willingness of our society to support that industry. If society, in its justified outrage over patients not being able to afford medicines, decides that it isn’t worth the cost to develop a gene therapy for a rare disease, then we as scientists are denied the opportunity to design the experiments, run the clinical trials, or ultimately cure those patients. And if we find ourselves protesting in the street over drug prices, so upset about the flaws in our current system that we embrace policies that usher in the downfall of our own industry, we will be putting ourselves out of jobs and patients out of cures.

That doesn’t mean that we aren’t allowed to get angry about injustice in the system—far from it. But many real injustices—and viable solutions—are not being discussed. Here are five points of advocacy that could change the conversation:

  • Eliminating or capping out-of-pocket drug costs for patients.

When your roommate visits the pharmacy and comes back $200 lighter, she may curse the drug company she thinks is charging too much, but she’s not actually complaining about the price of her drug—she’s complaining about how much she was asked to pay at the counter. That’s not a pricing problem, that’s an insurance problem.

Picture a new, tricky-to-manufacture drug that costs $200,000, prescribed to a patient who is required by her insurance to pay 20% in coinsurance after hitting her deductible. To this patient, that drug effectively costs $40,000. Now let’s say we lower all drug prices by 50% across the board—a massive win for drug price control supporters! The drug now costs $100,000…and your roommate’s still on the hook for $20K.

Lowering drug prices will not make new, lifesaving drugs affordable for patients because, to the vast majority of folks, a $40,000 bill is as unaffordable as a $20,000 bill. The solution is not upfront price controls, which would broadly discourage investment in new drugs, but eliminating or capping out-of-pocket drug costs for patients.

What we need is insurance reform.

Drug development will not stop being expensive and complicated, but America’s spending on branded drugs makes up just 1.3% of GDP. To protect patients, let’s outlaw out-of-pocket costs the same way we outlawed discrimination based on preexisting conditions. How to pay for that? If we can find just $61B in our budget, we can wipe out the burden of all out-of-pocket costs for drugs in one fell swoop. Given that our healthcare system spends $800B/year on administrative costs, including chasing after patients to pay surprise bills, this is more than feasible.
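The coinsurance arithmetic above can be sketched as a toy calculation (the function name and its simplifications are mine; real plans layer deductibles, tiers, and out-of-pocket maximums on top of a flat coinsurance rate):

```python
# Toy illustration: under percentage-based coinsurance, cutting the list
# price does not make the patient's bill affordable -- it just scales it.
# Deductibles and out-of-pocket caps are deliberately ignored here.

def out_of_pocket(list_price: float, coinsurance_rate: float) -> float:
    """Patient's share of a drug's list price under simple coinsurance."""
    return list_price * coinsurance_rate

price = 200_000          # hypothetical list price of a hard-to-make drug
rate = 0.20              # 20% coinsurance after the deductible

print(out_of_pocket(price, rate))        # 40000.0 -- the original bill
print(out_of_pocket(price * 0.5, rate))  # 20000.0 -- still unaffordable
                                         # after a 50% price cut
```

The point of the sketch: as long as the patient's cost is a percentage of the price, any across-the-board price cut leaves the bill proportionally large, which is why the essay argues for capping the out-of-pocket share rather than the price.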

  • Advocating for universal insurance coverage.

Americans functionally believe that healthcare is a human right, though not everyone is willing to admit it. This sentiment is at the root of why we feel so outraged when a patient is blocked from receiving the medicines they need. Advocating for universal insurance coverage does not have to mean that we embrace a single-payer system, but we need to recognize that our healthcare system is not designed to serve people who do not have insurance. The first step on the road to a more equitable reality is ensuring fair access for all.

  • Working to combat fanatical narratives that threaten public health, such as those endorsed by anti-vaxxers and pseudo-wellness theorists.

Part of the reason extremist groups emerge is because informed voices don’t speak as loudly, clearly, or convincingly as conspiracy theorists or mommy bloggers. Scientists who self-righteously slam others for engaging with the public on social media miss the point—doing this work is part of our job. What makes a blogger feel relatable is the fact that she posts jokes and no-makeup selfies…right next to captions about how “toxins” are causing her “leaky gut” and “adrenal fatigue.” But scientists don’t enjoy that same degree of relatability, and that makes our advice seem suspect. In our digital world, social media is the primary way ideas spread. Diverse, sincere engagement matters, and adding our voices to the conversation could make the biggest difference. Come prepared with facts – we are scientists, after all – but don’t be afraid to let your personality shine through.

  • Correctly recognizing the NIH as a partner in innovation, not a replacement for industry.

This point is especially important for NIH-supported scientists to make, as I have done and continue to do. It would do America no favors, and preserve the development of no drugs, if Congress slashed $100B from drug spending and transferred $50B to NIH. No doubt more research would be done, but the NIH spends only a tiny fraction of its budget on clinical testing of drugs and completely lacks the infrastructure to take that function over from industry. And we’ve all seen the bureaucracy that academics must navigate to advance big projects. Individual and small-team research is well suited for academia. But what comes next requires the discipline, structures, metrics, best practices, scale, and accountability of industry. Anyone who has ever seen their academic science translated into a clinical-stage drug candidate can attest to that—and should do so publicly.

  • Educating ourselves about current health policy proposals and lending our voices to the public debate.

As scientists, we are skilled analytical thinkers. So let’s not put our noise-canceling headphones on and duck into the tissue culture hood when debates about our industry hit the front page. Download the text of bills like HR.3 and draw your own conclusions about what its imposition would mean for early-stage innovation. Read smartly written books on drug policy. Take free courses on the business of biotechnology. And then call your senator or write an opinion editorial.

A lot would need to happen before any policy proposal compromises the integrity of America’s scientific enterprise. But it’s not inconceivable. I’d say it’s a good reason for scientists to engage seriously in talk of a renewed Biotech Social Contract—to talk about these ideas with their neighbors, use social media in productive ways, and write thoughtful, informed op-eds to make our voices heard.

In this election year, I’m ready to get out and fight for science, patients, and the industry I love. Are you with me?


The Cell Therapy Puzzle: Jane Grogan on The Long Run

Today’s guest on The Long Run is Jane Grogan.

Jane is the chief scientific officer of South San Francisco-based Arsenal Bio.

Jane Grogan, chief scientific officer, Arsenal Bio

As the name suggests, Arsenal is pulling together a stockpile of potent tools of modern biology.

As the company describes itself:

Arsenal will integrate technologies such as CRISPR-based genome engineering, scaled and high throughput target identification, synthetic biology, and machine learning to advance a new paradigm to discover and develop immune cell therapies, initially for cancer.

There are more than a couple of powerful technologies packed into that tight little description. How will they be integrated together in a clever way, to deliver the ultimate product – “programmable” cell therapies that are safer, more effective, and even cheaper and more widely available? That’s a very tall order. It’s a vision that will take many years to realize, if ever.

Jane is a great person to talk with about this moment in science and technology, when entrepreneurs are able to dream big along these lines. She’s an immunologist by training. She came to Arsenal last year after a long and successful career in research at Genentech.

You may also recognize her voice. While at Genentech, Jane founded and hosted the Two Scientists Walk Into a Bar podcast. As you’ll hear in this episode, she’s been practicing her science communication skills for a long time. It shows.

Now, please join me and Jane Grogan on The Long Run.



Scientists at the Movies: Relay Therapeutics CEO Sanjiv Patel on The Long Run

Today’s guest on The Long Run is Sanjiv Patel.

Sanjiv is the CEO of Cambridge, Mass.-based Relay Therapeutics.

Relay is among a new crop of drug discovery companies driven by advances in computational chemistry.

Sanjiv Patel, CEO, Relay Therapeutics

What does that mean?

As I wrote in Timmerman Report a little over a year ago:

“The basic concept is all about starting with high-quality crystallography images, and using them to create “movies” of a protein target, instead of just a snapshot. With a more fluid, dynamic and biologically realistic starting point for drug discovery, computer-aided simulations take on a whole different meaning. Relay’s team looks at how those dynamic proteins behave when binding with different shapes and sizes of small-molecule chemical compounds.”

Scientists at the movies. Grab your popcorn!

Seriously, this is a vision that techno-optimists have touted for decades. It hasn’t materialized. As Sanjiv told me a year ago, there was a ‘false dawn.’ That’s another phrase for ‘premature hype.’

But over the past few years, the picture has brightened. Relay raised $400 million in a Series C deal in December 2018. Sanjiv, a former Allergan executive who could have stayed in a high-powered Big Pharma job, came to this startup opportunity instead. Another computational drug discovery company, Schrodinger, has also had success in raising private capital and in creating promising drug candidates with partners. It’s now teed up to go public this year.

For people – or companies – who don’t yet subscribe to Timmerman Report, I’m lifting the paywall on my December 2018 story on Relay Therapeutics, and on an in-depth January 2019 interview with Schrodinger CEO Ramy Farid. These articles are examples of what TR subscribers get – in-depth coverage of scientific trends that puts you ahead of the curve. They will help support your understanding of what I discuss in today’s show with Sanjiv. After reading, I hope you’ll consider purchasing a subscription to get more of this kind of exclusive, in-depth biotech coverage throughout the year.

Now, please join me and Sanjiv Patel on The Long Run.


Challenging Core Assumptions, Tech Backlash Paves The Way for More Thoughtful HealthTech

David Shaywitz

Digital transformation (as I recently discussed), and the implementation of emerging technologies more generally, is routinely pitched by enthusiasts like Tom Siebel as both urgent and inevitable, something organizations need to embrace or risk irrelevance, if not extinction. 

Yet the “embrace or die” assertion is under increasing, and healthy, scrutiny, as the “techlash” (technology backlash) gains steam. 

“Surveillance Capitalism”: Tech As Force For Harm

Voices of concern have started to coalesce under the banner of what Harvard Business School professor emerita Shoshana Zuboff has termed “surveillance capitalism.” She synthesized and amplified this growing concern in her 700+ page 2019 book The Age of Surveillance Capitalism. For a shorter summary, I recommend reading this recent New York Times essay by Zuboff, and listening to this especially informative interview with her conducted by distinguished technology journalist Kara Swisher (of Recode and the Times).   

The core of Zuboff’s critique can be found in the story of Google itself, a company that (as described in the Recode podcast) initially came to prominence by building a phenomenally effective search engine that users appreciated. But the company struggled to make money in the early days, and “very swanky venture capitalists were threatening to withdraw support,” according to Zuboff. In an existential panic, Google apparently realized that it was sitting on a huge amount of interesting data, far more than was needed to improve the search algorithm. 

At its inception, reports Zuboff, Google had rejected online advertising as a “disfiguring force both in general on the internet and specifically for their search engine.” 

But spurred by the threat of extinction, Zuboff explains, Google declared a “State of Exception,” akin to a state of emergency, that “suspended principles” and permitted the company to contemplate previously shunned approaches. They recognized they had accumulated “collateral behavioral data that was left over from people’s searching and browsing behavior,” data that had been set aside, and considered waste. But upon further review, says Zuboff, Google engineers realized there was great predictive power in the combination of this data exhaust plus computation: the ability to predict a piece of future behavior — in this case, where someone is likely to click — and sell this information to advertisers. 

The result, according to Zuboff, was a radical transformation of online advertising, turning it into a market “trading in behavioral futures,” while claiming “private human experience” in the process.  “We thought that we search Google,” writes Zuboff, “but now we understand that Google searches us.”

As this model caught on, Zuboff explains, tech companies accrued exceptional influence, due to “extreme asymmetries of knowledge and power.” Over time, these companies began to “seize control of information and learning itself.”

These technology companies, asserts Zuboff, “rely on psychic numbing and messages of inevitability to conjure the helplessness, resignation, and confusion that paralyze their prey.” She argues “the most treacherous hallucination of them all” is “the belief that privacy is private.” It’s not, she argues, because “the effectiveness of … private or public surveillance and control systems depends upon the pieces of ourselves that we give up – or that are secretly stolen from us.”

Notably, Swisher strongly shares these privacy concerns, even writing a year-end commentary in the Times last December entitled “Be Paranoid About Privacy,” urging us to “take back our privacy from tech companies – even if that means sacrificing convenience.” She writes, “We trade the lucrative digital essence of ourselves for much less in the form of free maps or nifty games or compelling communications apps.” Adds Swisher, “It’s up to us to protect ourselves.”  

(In contrast to some health tech execs I know, Swisher views Europe’s General Data Protection Regulation [GDPR] and California’s recently-enacted Consumer Privacy Act as positive developments.)

Both Siebel and Zuboff seem to agree on the power of the emerging technology. They vehemently disagree about whether it’s a force for good or ill. 

The Pinker Perspective: Cautious Optimism

But another perspective is that both Siebel and Zuboff overstate at least the near-term power and utility of technology by accepting as a given that the impetus to collect every possible piece of data about every possible thing will soon result in remarkably precise predictions.

This is what Siebel promises, and Zuboff fears.

In contrast, I found myself agreeing with the more grounded viewpoint Harvard psychologist Steven Pinker offered in a 2019 discussion with Sapiens author Yuval Noah Harari (who was pressing the case that surveillance capitalism poses a profound threat).

In recent years, Pinker has attracted controversy by arguing (in his 2018 book Enlightenment Now, and elsewhere) that despite endless lamentations and prophecies of doom, life is actually getting better, and is on a trajectory to improve still more. 

Besides Pinker, this encouraging perspective has been recently discussed by a number of authors including Hans Rosling (Factfulness), Andrew McAfee (The Second Machine Age, More From Less – my Wall Street Journal review here), and John Tierney and Roy Baumeister (The Power of Bad – my Wall Street Journal review here).

Pinker says he’s not losing sleep about emerging technologies, in large part because he suspects the rate and extent of technological progress has been significantly overstated. Consider human genetic engineering, he says, where frightening concerns had been raised about engineering people with a gene that made them smarter or better athletes. That turned out to be a wild oversimplification, he argues – many genes impact most traits, and since genes tend to be involved in many functions, there’s a good chance any intervention would do at least as much harm as good. The limitations of genetic data are also something Denny Ausiello and I anticipated in this 2000 New York Times “Week in Review” commentary, and something Andreessen-Horowitz partner Jorge Conde thoughtfully reflects on in this recent a16z podcast.

Returning to AI, Pinker notes that “predicting human behavior based on algorithms” is “not a new idea,” nor one likely to immediately destroy the planet.  “I suspect,” Pinker says, “we’ll have more time than we think simply because even if the human brain is a physical system, which I believe it is, it’s extraordinarily complex, and we’re nowhere close to being able to micromanage it even with artificial intelligence algorithms. The AI algorithms are very good at playing video games and captioning pictures, but they are often quite stupid when it comes to low probability combinations of events that they haven’t been trained on… even the simple problems turn out to be harder than we think.”

He adds, “When it comes to hacking human behavior – it’s all the more complex. Not because there’s anything mystical or magic about the human brain – it’s an organ – but an organ that’s subject to fantastic non-linearities and chaos and unpredictability, and the algorithm that will control our behavior isn’t going to be arriving any time soon.”

In a 2018 op-ed, Pinker notes the “vast incremental progress the world has enjoyed in longevity, health, wealth, and education,” and adds that technology “is not the reason that our species must some day face the Grim Reaper. Indeed, technology is our best hope for cheating death, at least for a while.”

He describes threats such as “the possibility that we will be annihilated by artificial intelligence” as “the 21st century version of the Y2K bug,” which was associated with apocalyptic prophecies, yet ultimately had negligible impact.

In a particularly interesting exchange between Harari and Pinker, Harari expressed concern that the surveillance state was turning our lives into a continuous, extremely stressful job interview, suggesting we’re heading to the point where everything we do every moment of our lives could be surveilled, recorded, and analyzed in a way that could impact future employment.

Pinker, in response, noted that “One of the most robust findings in psychology is that actuarial decision making – statistical decision making — is more reliable than human intuition, clinical decision making.  We’ve known this for 70 years but we typically don’t do what would be more rational.” In this example, it would be rational to scrap job interviews, and use statistically-informed predictors instead.  Even though we know job interviews are subject to bias and error, Pinker points out, we still use them, and don’t “hand it over to algorithms.” 

Of course, many technophiles – and technophobes — would say this is exactly what’s already occurring.

The Taleb Quadrant

There’s actually a fourth quadrant to consider – which I think of as represented by Nassim Taleb, who is critical (as he articulates with particular clarity in Antifragile) of what he sees as our worship of new technology – not because he fears it’s about to immediately lead to the end of life as we know it, but because he thinks our increased interconnectivity places us at greater risk of a catastrophic failure – i.e., makes us far more fragile. He trusts approaches that have stood the test of time – “things that have been around, things that have survived” – and worries about our “neomania – the love of the modern for its own sake.”

Implications for Health Tech

While perhaps inconvenient for some health tech entrepreneurs in the short term, the increasingly robust discussion about the impact of technology represents a positive development for the field.

Why positive? Because it creates the intellectual space needed to challenge tech assertions and assumptions, while demanding rigorous proofs of value. 

I incline towards Pinker’s perspective. Technology, in my view, offers us real hope in our efforts to maintain health and forestall and combat illness. Figuring out how to derive meaningful benefit from the technology will be neither as easy nor as rapid as consultants promise. As we work through these challenges, we need to be thoughtful and deliberate, and consider the right kind of guardrails we want to put in place as we bring ever-more powerful technologies to bear in our healthcare system. The hurdles we must clear – technological, social and political in nature – as we create systems that can meaningfully intervene and improve upon what we have in healthcare are enormous. We would be foolish to underestimate the work ahead – and even more foolish not to embrace the challenges and get going.


Incrementalism is the new Disruption, Trust is the New Black, and Positive Change (for now) at FDA: Takeaways from the 2020 Precision Medicine World Conference

David Shaywitz

I had the privilege of serving as emcee for the “Data Science and AI” track on the first day of this week’s Precision Medicine World Conference (PMWC) in Santa Clara, CA, as well as chairing a panel discussion on data mining and visualization. 

I came away with a sense of optimism and need, organized around several key themes.

In Praise Of Incrementalism

In a day focused on technology, and featuring a number of startups, you might have expected to hear a lot about “disruption” and “disruptive innovation” – but I didn’t.  Instead, the watchword of the moment seems to be “incrementalism” – not in the dispirited sense of having minimal aspirations, but rather in the grounded (versus grandiose) sense of seeking to motivate buy-in from existing healthcare stakeholders by demonstrating a discrete and useful (if not super-sexy) benefit. 

Kaisa Helminen, the CEO of digital pathology company Aiforia Technologies (which I’ve written about here), emphasized the importance of first taking small steps, before attempting to make larger strides.  She amplified this point in a follow-up email:

“Labs should start with incremental steps in utilizing AI in digital pathology, e.g. starting with quality control (QC), workflow optimization or with a few applications that are painful for pathologists to count (e.g. counting mitosis) to get them used to the tech and to facilitate adoption.”

Similarly, Vineeta Agarwala, an impressive physician-scientist who recently joined Andreessen-Horowitz from GV, and who was previously a product manager at Flatiron, emphatically and repeatedly stressed the importance of incrementalism, even in the context of AI.  For example, she noted that at Flatiron, which focused on deriving clinical trial-like data from EHR data (see here), a key use of AI at this tech-driven company was…to determine which patient charts to spend time manually extracting the data from!  It seems unsexy, but apparently it delivered immediate benefits in operational efficiency.

Vineeta Agarwala

Grounded Health Tech Investors

A pleasant surprise at this conference was the number of VCs represented who both seemed interested in the nexus of tech and health and appeared to be approaching it in a grounded fashion, led by investors who have relevant domain experience. Greg Yap from Menlo Ventures, and Vijay Pande and Agarwala from Andreessen-Horowitz, particularly stood out. 

Pande emphasized there’s “nothing magical about AI,” and acknowledged that developing new drugs is not a fast process, as even compounds designed with the help of AI require, in his words, “the usual stuff” such as a battery of preclinical assays and extensive clinical trials.

Similarly, Agarwala described AI as simply “technologies to better learn from data,” and emphasized that “progress is going to be incremental.” Yap was perhaps even more cautious about AI, worried that we seem to be “at the peak of the AI hype cycle.”

Many (but not all) of the VC firms gravitating towards the “AI and data science” opportunity in healthcare and biopharma seem to be tech firms (Menlo Ventures, Andreessen-Horowitz, and DCVC stand out) that have added domain expertise on the healthcare side, rather than healthcare VCs that have added domain expertise on the tech side; one conspicuous exception, perhaps, is Jim Tananbaum’s Foresite Capital, a firm with deep healthcare roots that’s deliberately pursuing a technology dimension.

The Calcified Hairball Problem

The most dispiriting panel of the day, by far, was a discussion of interoperability led by Stan Huff of Intermountain, and featuring Michael Waters of the FDA and James Tcheng of Duke, describing (among other challenges) the excruciating ongoing effort required by the FDA SHIELD initiative to create a unifying schema for the representation of laboratory data. 

Hurdles seemed to be everywhere, and the realized rewards appeared uncertain at best.  The problem seemed to me to reflect the “calcified hairball system of care” to which VC Esther Dyson has famously referred. Listening to the panel describe the extensive painful effort involved in even the most basic efforts to extract meaningful information reinforced the sense that the existing system may be a virtually intractable mess; engaging with it seemed likely to result in a huge suck of time and money, with brutal political fights at every turn, and perhaps with little ultimately to show for the effort – the little juice you extract may prove not to be worth the squeeze.

Who could blame investors like Pande, then, who emphasized the value he sees in startups that think from the outset about how to collect data that (in contrast) works well with AI, and is designed from the ground up with that application in mind?  This seems to be the approach that prominent drug discovery startups like insitro (Andreessen-Horowitz-backed) and Recursion are taking, for example. 

While this doesn’t solve the problem of what to do about all the legacy data stuck in existing systems – which Tom Siebel, recall, describes as a (the?) competitive advantage of incumbent companies in an increasingly digital world — it feels like a contemporary example of what happened to factories after the arrival of electricity, as I described in this column last year. While most factories rapidly converted to electricity, established industries (due to sunk costs) were reluctant to extensively rework or reimagine their factories – they kept the design the same, and just substituted electricity for steam power. The real beneficiaries were the emerging new industries, which had both the need and the opportunity to design workflows from the ground up, unencumbered by existing approaches. This led to the design of the modern factory. 

Similar new opportunities – where entrepreneurs can freshly leverage the power of new technology while minimizing dependency on the limitations of legacy technology – seem to represent the kind of investments that VCs like Pande are seeking out today.

Transparency and Trust

A thoughtful conversation between Atul Butte, a physician-scientist who oversees health data science for the entire University of California (UC) system (you can hear his Tech Tonics episode here) and Cora Han, UC Health’s newly-minted Chief Health Data Officer – explored why interactions with health systems and tech companies are now appearing so regularly in the news (see this WSJ, this WSJ, this WSJ, this FT, this JAMA commentary, and this JAMA commentary).   

Health systems contracting with technology companies is hardly new or unusual, Butte noted, wryly adding that it seems like only when specific names are attached to the two (such as “Ascension and Google”) that this common type of relationship is suddenly portrayed as “sinister.” Han suggested that factors contributing to the apparently escalating concern include (a) the potential for staggering scale, and (b) the theoretical intersection of medical and consumer data, which “seems scary.” She emphasized the foundational importance of “trusting the entities with whom you interact.”

Atul Butte

This connects with a related discussion of the role of transparency in increasing trust, a point several speakers emphasized. For example, Butte noted that if a company in stealth mode (meaning no information about it is publicly available) comes to him and asks to explore access to UC information, Butte tells them not to bother; if the company doesn’t even have a website and other basic information easily accessible, he’s not going to refer them to anyone in his organization.

Interestingly, two speakers on my panel – Helminen and Martin Stumpe (now SVP for data science at Tempus, and previously the founder and head of the Cancer Pathology initiative at Google) – both emphasized the role data visualization can play in fostering trust in technologies, especially AI, that can often seem inscrutable. 

At the same time, as Butte astutely suggested, there may be a bit of a double standard here in demanding this of technology since “physicians are also black box,” and can arrive at decisions of dubious quality via an uncertain and impenetrable process, as Atul Gawande and others have eloquently documented.

Regulation and Outlook

Michael Pellini, a VC at Section 32 (and former CEO of Foundation Medicine), expressed a strong sense of optimism regarding the near-term outlook both for the technology itself and for the approach to it he’s seen from regulators (more on this below). From a reimbursement perspective, he anticipated that the outlook for therapeutics is likely to get much worse (presumably a comment on the rising concerns around drug pricing), while diagnostics – where entrepreneurs have long struggled for reimbursement, as Pellini presumably knows all too well — may see marked improvement in their future (presumably a comment on their increased ability to guide patients towards demonstrably better outcomes).

Michael Pellini

Similarly, life science VC (arguably the dean of life science VCs) Brook Byers effusively praised the commitment of the FDA to seek out improved technologies, citing two “heroes” – FDA Deputy Commissioner Amy Abernethy (see here, listen here for her Tech Tonics interview, and here on The Long Run) and FDA ophthalmology expert Malvina Eydelman.

His biggest worry, he said (a concern I share) is the sort of sentiment voiced in a recent NYT masthead editorial, urging the FDA to “Slow down on drug and device approvals.”  The Times argued,

“The F.D.A. has made several compromises in recent years — such as accepting ‘real world’ or ‘surrogate’ evidence in lieu of traditional clinical trial data — that have enabled increasingly dubious medical products to seep into the marketplace. [New FDA Commissioner] Dr. Hahn ought to take a fresh look at some of these shifting standards and commit to abandoning the ones that don’t work. That will almost certainly mean that the approval process slows down — and that’s O.K.”

To be sure, regulators have an intrinsically difficult task – if they’re too strict, promising drugs take longer to reach patients (if the medicines reach patients, or are even developed, at all); if regulators are too permissive, then patients can be exposed to harmful products before the danger is recognized.  However, as appealing as it may be to lean into the adage “first do no harm,” as critics such as the NYT are wont to do, invoking this perversion of the precautionary principle as a justification for moving slowly, it’s critical to realize the extensive harm that inaction can cause as well – as I’ve written here and elsewhere.  Regulators need to balance the totality of risk (including the harms of staunching innovation) and benefit; it’s an intrinsically difficult job given the inevitable uncertainty, and requires nuance and customization — “precision regulation” I’ve called it.

What should be avoided, as Tierney and Baumeister argue in The Power of Bad (my WSJ review here), is encouraging regulators to stomp on the brakes reflexively, driven by an outsized fear of risk, as if informed by the credo, “never do anything for the first time.”

Ultimately, what matters most (as I’ve argued) is real-world performance; a randomized clinical trial, where feasible and ethical, is the ideal approach to demonstrate the potential benefit of an intervention. But the most important parameter is what happens to actual patients taking medicines after approval.  Much of the anxiety experienced by regulators reflects the challenges of gathering such data – once a medicine is released into the wild (even provisionally), it can be difficult to figure out whether it is working out as anticipated. 

Here is an opportunity. An improved ability to comprehensively gather and continuously evaluate such data as part of routine care would not only improve patient care, but could also make regulatory approvals less fraught. Clearly, we are a long way from this, yet it’s where we ought to be headed – and the direction, I’m increasingly convinced, healthcare is (slowly) starting to go.


False Heroes: Pharmacy Benefit Managers and the Patients They Prey On

Peter Kolchinsky

[Editor’s Note: this is an excerpt from “The Great American Drug Deal.” The book is now available on Amazon.]


It’s hard to know when actual prices for a particular drug really do go up, because there is so little transparency in pricing. A lot of the public discourse on pricing is based on “list prices,” which no one – neither patients nor payers – actually pays.

As is the case with cars and anything on Amazon, everything is always on some kind of sale or subject to discounts of one type or another.

In the world of pharmaceuticals, these discounts are called “rebates” and often take the form of payments from the drug company back to the insurer. The particulars of a rebate that a drug company offers to an insurer – its magnitude and how it varies according to market share – are kept confidential, essentially based on the age-old sales tactic of “Because you’re special, I’ll give you a special price, but don’t tell the other guy.”

Pharmacy Benefit Managers, or PBMs, are the companies who negotiate with drug companies on behalf of payers (and some PBMs are actually owned by insurance companies, so one can think of them as just agents of payers), and – importantly – retain a portion of the rebates that pass through them. In effect, PBMs profit from the very high list prices they purport to heroically negotiate down. A biopharmaceutical company offering a lower list price without a rebate would threaten the PBM business model, so PBMs discourage the tactic by not rewarding it. Instead they encourage drug companies to keep publicly known list prices high and give an ever-bigger confidential rebate to the PBM, from which the PBMs siphon off their own rent before passing on the lower net price to the payer while boasting, “behold what I have negotiated for you!”

Let’s take a closer look at the numbers to see how all this works (or doesn’t).

In 2018, although list prices for branded drugs increased by 5.5 percent, net prices (what drug companies actually get after discounts and rebates) were essentially flat compared to the year before, coming in nominally 0.3 percent higher, though really lower when adjusted for inflation. Increased prices for some drugs were more than offset by the savings from other drugs going generic. Indeed, total spending (what the US pays, in total, for drugs) increased by about 4.4 percent in 2018 over the prior year, but that’s because more patients are being treated. That should be good news. That’s what progress looks like!

Of course, none of that matters if you are a patient who can’t afford what your physician prescribes—and there are all too many people out there who can identify with this. A major part of the solution requires lowering or eliminating out-of-pocket costs, as discussed in Chapter 4, but it’s worth exploring just how much waste there is in the middle zone between drug companies and patients due to payers’ and PBMs’ tactics.

In 2018, US drug spending based on list prices was $479 billion, yet net drug spending was $344 billion, approximately 28 percent lower. That means that, even if we stuck to “cost sharing” but simply linked what patients pay to the net prices that PBMs negotiate instead of list prices, patient costs would be reduced by 28 percent, saving around $17 billion of the $61 billion in out-of-pocket costs Americans paid in 2018.* Insurance companies and Medicare count on that $17 billion extra from patients to pad their own budgets, allowing them to charge slightly lower premiums/taxes – a perverse kind of insurance policy, since it means that the sick subsidize the healthy.
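These figures follow from a simple calculation. A back-of-the-envelope sketch in Python (the dollar amounts are the 2018 totals cited above; the variable names and rounding are mine):

```python
# 2018 US figures cited above
list_spend = 479e9    # drug spending at list prices
net_spend = 344e9     # drug spending at net (post-rebate) prices
out_of_pocket = 61e9  # total patient out-of-pocket costs

# Implied list-to-net discount: roughly 28 percent
discount = 1 - net_spend / list_spend

# Patient savings if cost sharing tracked net prices instead of list prices
patient_savings = out_of_pocket * discount

print(f"discount: {discount:.1%}")                       # roughly 28.2%
print(f"savings: ${patient_savings / 1e9:.1f} billion")  # roughly $17.2 billion
```

The point of the arithmetic: the ~$17 billion gap exists only because patients’ cost sharing is pegged to list prices rather than the net prices payers actually pay.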

Realistically, being able to negotiate secret rebates is a useful tactic for playing drug companies off one another, as PBMs have done with Gilead, AbbVie and Merck to drive down the cost of hepatitis C cures in recent years. However, right now, some patients are increasingly bearing an unfair burden, and most Americans are being misled about the true costs of important medicines.

To understand why and how, let’s begin with a quick rebate primer.

Rebates and How they Impact Patients

Imagine if an agent offered to help you buy a car and promised that you would only need to pay her 20 percent of whatever she saved you. You buy a car that is listed at $40,000 by the dealership, but you only end up having to pay $30,000 after your agent negotiates on your behalf. Your agent has saved you $10,000 and retains $2,000 as her fee, so really the car cost you $32,000, and you saved $8,000. That’s still good.

Now, imagine that a car dealership decides to cut out the middleman and list those same cars at $30,000 – the same amount the dealership would have received after giving discounts to agents. That would be cheaper than going through an agent, since you don’t have to pay the $2,000 fee. But an agent won’t direct buyers to that dealership, because its prices leave no room for the dealership to offer any discounts, which means the agent won’t earn her commission. If anything, agents will encourage dealerships to raise their list prices, either directly or tacitly. If the agent can pressure the dealership to raise the list price of that car to $50,000, the agent will be able to negotiate it down by 40 percent to $30,000, earn a $4,000 commission, and come out looking like a hero to the buyer, though the car would now functionally cost $34,000!
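The arithmetic of the analogy is worth making explicit. A minimal sketch (the `buyer_cost` helper and its 20 percent fee rate are my own illustration of the scenario above, not from the book):

```python
def buyer_cost(list_price, net_price, fee_rate=0.20):
    """Total cost to the buyer when an agent keeps fee_rate of the negotiated savings."""
    savings = list_price - net_price
    return net_price + fee_rate * savings

# Honest list price: $40,000 negotiated down to $30,000
print(buyer_cost(40_000, 30_000))  # buyer pays $32,000

# Inflated list price: $50,000 negotiated down to the same $30,000
print(buyer_cost(50_000, 30_000))  # buyer pays $34,000
```

Same net price to the dealership in both cases, but the buyer pays $2,000 more once the list price is inflated – the agent’s incentive in miniature, and the PBM’s.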

This is what’s going on in the drug industry, and it is a big reason why list prices are increasing. The question, of course, is why don’t biopharmaceutical companies bypass the PBMs and sell their products directly to insurance companies? Yes, any company that did so would be ostracized by the agent community, but why should that matter?

The unfortunate truth is that as PBMs have grown, they have amassed wide influence. They have entrenched themselves as middlemen with massive bargaining power, which stems from how concentrated the PBM market has become. The top three PBMs, Express Scripts, CVS/Caremark, and United’s OptumRx, represent 80 percent of the PBM market and serve insurance plans covering half of the US population.

So, what’s the big deal? PBMs keep a piece of the rebate, but at the end of the day, they are saving patients money, and that’s what matters… right? And that’s the problem: saving patients money matters, but this system doesn’t actually do that. Though rebates save money for society as a whole, they currently increase the true share of costs that patients shoulder.


*Consider that saving patients 28 percent by lowering drug prices by 28 percent would render the entire biopharmaceutical industry a non-profit and shutter innovation. So pegging patients’ out-of-pocket expenses to net prices instead of list prices is a much more surgical solution, which payers could compensate for with a tiny increase in premiums – less than 1 percent – or absorb by slashing their own bureaucracy.


Seeking to Understand Dr. King’s Vision of Unity, at Our Divided Moment

Rob Perez

[Editor’s Note: a version of this essay was published on Martin Luther King Jr. Day on LinkedIn, and has been edited and republished with permission of the author.]

On this day where we celebrate Dr. Martin Luther King, I thought I’d share a few thoughts about race.

I’m fascinated with how race/ethnicity impacts how we interact with each other. Always have been, from my upbringing in Los Angeles, to my career as an executive and investor in biopharmaceuticals, to my role as founder and chairman of Life Science Cares, an organization that tries to make the world a better place for people of all races who are impacted by poverty. Regardless of our life’s journey, we all have our racial biases, so please indulge me this chance to share some of mine.

My perspective is an unusual one. I’m kind of a racial undercover agent. I’m a person who identifies as African American, mainly because my mixed-race parents originate from the South (New Orleans), which was segregated at the time. In those days, if you had even the slightest bit of “negro” blood, you were classified as “colored”. That meant segregated schools, bathrooms, movie theaters, institutional racism and economic disadvantages, racist police…the whole nine. The genetic facts say that I am actually ethnically mixed, more of a gumbo (the dietary staple of Louisiana Creoles) of genetic roots, with ancestors from Africa, Western Europe, and a little of just about everything else (including Ashkenazi Jew…L’chaim!)

While my parents experienced overt racism during their formative years, my experience was different. More subtle, but still significant in shaping my point of view on my place in society, and how I relate to others who are different. 

Although my family has always proudly identified as black, I don’t look it. I’m light skinned, with hazel eyes, married to an extraordinary woman who is a child of first generation Italian/German immigrants, and I have a Spanish surname. People usually mistake me for Cuban, Puerto Rican, Caucasian. Rarely (if ever) do people see me as an African American.  

So to say I have an unusual perspective into how race is lived in America is an understatement. I’ve often said that my life is like the old Saturday Night Live skit, where Eddie Murphy goes undercover as a white person, to get a behind the scenes look at what real life is like in white America. There’s the moment when he’s bewildered that the mortgage broker he’s meeting insists on giving him money to buy a house. No need to check credit history for a white guy! Just take the money! 

It’s funny, but more like tragicomedy. Like some of the best comedy, it tells us something true about our world that is otherwise hard for us to see.

Like the Eddie Murphy character undercover, I’ve seen Americans as they really are…with their guard down. Suffice it to say that, on occasion, it can be really ugly. Not all the time, or even most of the time, but enough to give me a sense that what many Americans (even educated Americans) of all ethnicities want us to believe about their views towards people of different races is not always what it seems.

That’s why on this day of seemingly everyone posting their love and admiration for Dr. King, I admittedly pause and wonder how much of his philosophy they really appreciate, accept, or even take the time to understand.

For example, in his famous Letter From a Birmingham Jail, Dr. King writes about his disappointment in the “white moderate, who is more devoted to order than to justice.” And that “shallow understanding from people of good will is more frustrating than absolute misunderstanding from people of ill will.”

I wonder what Dr. King would think of the present day “moderates” of all colors, who disapprove but ultimately accept the racist, xenophobic and divisive principles espoused by some in power, in order to benefit from policies that serve their own economic, religious or social interests.  

America, since its founding, has been at best a contradiction, at worst outright hypocrisy, when it comes to race. It is well documented that when Thomas Jefferson wrote the historic words of the Declaration of Independence, “…We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness,” he not only owned slaves, but is believed by some to have been served by one of his slaves at the very time he was committing these words to paper! 

That’s like finding out the author of the definitive text on being a vegan was holding a Big Mac as he wrote the book. 

Our willingness to excuse, overlook or trade-off the oppression of others solely because of the color of their skin, especially if it benefits us economically or socially, is as American as baseball and apple pie.

Many people remain comfortable with this cafeteria approach to racial oppression. As Jefferson said, “Justice is in one scale, self-preservation in the other”, or in today’s words, “I don’t like some of this leader’s words and actions, but I vote for him because I think the country is better off on the whole.”

For many people of color, there is no trade-off/choice. The country isn’t better off when one of the consequences of that choice calls for the inequality and dehumanization of an entire race. Those in power can claim it’s not personal, it’s just politics. The person who swings the baseball bat may not think it’s personal, but to the person who takes the Louisville Slugger to the head, it’s hard to see it any other way. 

We have many issues of great import in our country which people can debate, analyze and compromise. Fiscal policy, health care, even gun control, are all issues that allow for nuance and tolerance of different views. To me, bigotry, on the other hand, is more of a litmus test issue. If even a small part of your political agenda calls for treating me and others as less of a human because of the color of my skin, my country of origin, whether I have a penis, or the gender of the person I love, there is no way I can look past that point of view to find common ground with you on other issues. 

In this time of extraordinary division in our country, and with an election season looming, I fear that identity politics and subtle bigotry will continue to be used to garner support. Fear of the inevitable restructuring of our society as a more diverse, brown, ethnic US is, IMHO, inherently threatening to many who have enjoyed the historical (“natural”) order of things, even if they did not participate actively in the ugliness that made it that way. It is my great hope that we will take more time to seek to understand each other on issues of race, and appreciate the deeply personal nature of its impact on all of us. 

Since it’s the order of the day, I’ll leave you with one of my own favorite quotes from Dr. King.

“We must learn to live together as brothers or perish together as fools.”  Rev. Dr. Martin Luther King Jr., March 22, 1964, St. Louis


Understanding The Ideology Of Digital Transformation

David Shaywitz

The phrase resounding in corporations these days is “digital transformation.”

What does that really mean?

According to proponents, digital transformation reflects the assertion that in order to remain competitive in the modern era, organizations need to radically rethink their approach to how they collect, manage, and analyze information. 

Change is clearly afoot, but the ideology informing this hasn’t been entirely clear, beyond the vague sense that it seems to be driven by an energized alliance of technology and management consultants.

Recently, on the recommendation of a former colleague (DNAnexus CEO Dick Daly), I finally got my hands on what feels like the sourcebook for digital transformation, or at least a clear, contemporary expression of what digital transformation is and why consultants are pushing it.

The 2019 book – appropriately entitled Digital Transformation – is written by Tom Siebel. He’s a billionaire tech entrepreneur who has spent his career developing enterprise technology, and is currently the CEO of c3.ai, a firm that (besides sponsoring NPR) provides enterprise AI. That puts him in position to both support and benefit from companies undergoing digital transformation. 

So of course it’s easy to dismiss Siebel’s book for being exactly what it is – an elaborate white paper that seeks to create a burning platform, motivating executives to urgently adopt the sort of changes that would clearly benefit Siebel’s business. (Proceeds from the book itself apparently go to charity, according to the jacket cover.)

However, it would be a mistake to reflexively dismiss the book as a self-serving exercise. Much of Digital Transformation rings true, and resonates with so much I’ve seen and heard in multiple organizations. It feels like an extremely relevant and timely read, written by someone who understands both business and technology, and speaks to issues that every organization I know is trying to manage. 

Having said that, there’s very little in the book specifically about biopharma and healthcare, and much of what’s there seems unlikely to resonate with many domain experts. I suspect this disconnect reflects the lack of progress to date in these industries, combined with Siebel’s limited first-hand experience here.

The Burning Platform

First, the burning platform. According to Siebel, the intersection of four significant “technology vectors” – cloud computing, big data, artificial intelligence (AI), and the internet of things (IoT) – is driving such profound change in the environment in which organizations live that businesses face a “mass extinction event.” Companies are fading from relevance at unprecedented rates, CEO tenures are growing ever shorter, and private equity firms are piling up increasing amounts of dry powder, ready to pounce on corporations perceived as laggards.  Companies, argues Siebel, “are facing a life-or-death situation.”

In case this is still too subtle, Siebel writes, in a chapter on AI in the defense industry:

“AI will fundamentally determine the fate of the planet. This is a category of technology unlike any that preceded it, uniquely able to harness vast amounts of data unfathomable to the human mind to drive precise, real-time decision-making for virtually any task.” He adds that as the US and China engage “in a war for AI leadership,” the “fate of the world hangs in the balance.”

Of course, motivating change requires not just a reason to change (unambiguously provided here), but also a direction forward – in this case drawing inspiration from a transformational event in the earth’s history:

“Recall how the Great Oxidation Event’s cyanobacteria and oxygen resulted in new processes of oxygenic respiration. Today, cloud computing, big data, IoT, and AI are coming together to form new processes, too.  Every mass extinction is a new beginning. Changing a core competency means removing and revolutionizing key corporate body parts. That’s what digital transformation demands.”

Siebel reviews a distinction drawn by organization theorist (and Crossing the Chasm author) Geoffrey Moore between a company’s core – what creates differentiation, e.g. Tiger Woods’s golf skill – and its context – everything else, such as marketing. Woods may make a lot of money from marketing, but his core, his competitive advantage, is how he plays golf. At a certain level of simplification, says Siebel, core is often viewed as intellectual property, while context is often outsourced. Siebel argues that many companies have digitized their context competencies, but not their core – yet that, he contends, is exactly what’s required. 

Such change constitutes a difficult process that often requires a strenuous re-thinking of the underlying business, creating “something faster, stronger, and more efficient that can do the same job in a totally different way – or do entirely new things.” 

The key opportunity, Siebel argues, is for companies to “use data to reinvent their business models.”  The change required is profound – and, argues Siebel, it must be driven by the CEO, rather than by the chief information officer or anyone else.

According to Siebel, “implementing a digital transformation agenda means your organization will build, deploy, and operate dozens, perhaps hundreds or even thousands, of AI and IoT applications across all aspects of your organization, from human resources and customer relationships to financial processes, product design, maintenance, and supply chain operations. No operation will be untouched.”

The Four Technology Vectors

The four technologies shaping our future, according to Siebel, are cloud computing, big data, AI, and IoT. In a nutshell:

  • Cloud computing provides convenient access for all businesses to essentially unlimited compute and storage, with major providers (Amazon Web Services [AWS], Microsoft’s Azure, Google Cloud) routinely providing robust security and continuously improving resources, characterized by the “rapid innovation of microservices” such as Google’s TensorFlow designed to “accelerate machine learning.” Adds Siebel, “not a week goes by without another announcement of yet another useful microservice” from a leading cloud vendor.
  • Big data refers not so much to the raw quantity of data collected, managed, and analyzed, but really to the mindset towards data – the idea of collecting everything, versus just a sample; in other words, “complete data.” As Siebel nicely puts it, the “significance of the big data phenomenon is less about the size of the data set we are addressing than the completeness of the data set and the absence of sampling error.” (Whether this is achievable, or impossibly hindered by either technical or social/political barriers, is a topic we’ll return to shortly.)
  • AI involves computers tackling problems that normally require human intelligence. Machine learning (ML) is a subset of AI that involves teaching computers to learn from experience, rather than pre-defined rules. ML might be used to train an algorithm to assess whether an image has a cat or not; this process tends to require a lot of “feature engineering,” where data scientists and domain experts determine which key parameters to feed into the algorithm to help it become more accurate. Deep learning is a subset of ML where “the important features are not predefined by data scientists but instead learned by the algorithm.”
  • IoT is the idea of connecting “any device equipped with adequate processing and communication capability to the internet, so it can send and receive data” – essentially, the “convergence and control of physical infrastructure by computers.”
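The feature-engineering distinction in the list above can be made concrete with a toy sketch. Everything here is invented for illustration – the “ear-shape” and “whisker” features are hypothetical stand-ins for what domain experts might hand-pick for a cat detector, and the data is synthetic:

```python
# Minimal sketch of classic ML with hand-engineered features, using
# scikit-learn. In deep learning, by contrast, the model would learn
# features directly from raw pixels rather than from these inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical engineered features: e.g., an "ear-shape score" and a
# "whisker-density score" that experts decided matter for cat detection.
X = rng.normal(size=(200, 2))            # 200 images, 2 engineered features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "cat / not cat" labels

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

The classifier only ever sees the two numbers the experts chose; its accuracy is capped by how informative those engineered features are, which is exactly the bottleneck deep learning tries to remove.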

These four technologies, Siebel observes, present “powerful new capabilities and possibilities. But they also create significant new challenges and complexities for organizations, particularly in pulling them together into a cohesive technology platform.” Not surprisingly, “many organizations struggle to develop and deploy AI and IoT applications at scale and consequently never progress beyond experiments and prototypes.”

Digital Transformation: Implications For Healthcare

Digital transformation, Siebel asserts, will “improve human life.” How? Through “very early disease detection and diagnosis, genome-specific preventive care, extremely precise surgeries performed with the help of robots, on-demand and digital health care, AI-assisted diagnoses, and dramatically reduced costs of care.”

Skeptical about whether healthcare – characterized famously by Esther Dyson as a “calcified hairball” system of care – can be disrupted? Siebel’s rejoinder (cited multiple times) is that in January 2018, when Amazon, Berkshire Hathaway and JP Morgan Chase announced their intention to enter the market, “$30B of market capitalization was erased from the 10 largest U.S. healthcare companies” in a single day of trading. While these stocks recovered almost immediately, the market reaction, according to Siebel, emphasizes the industry’s vulnerability.


While Siebel doesn’t offer specific examples of healthcare and the cloud, he shares his view that executives who less than a decade ago proclaimed “our data will never reside in the public cloud” – something I personally heard from a number of healthcare leaders even five years ago – are now delivering a very different message that is “equally clear and exclamatory: ‘…we have a cloud-first strategy. All new applications are being deployed in the cloud. Existing applications will be migrating to the cloud. But understand, we have a multi-cloud strategy [to avoid vendor lock-in].’” While healthcare was among the last to the cloud, it seems many health organizations have finally gotten the message.

Big Data

Siebel highlights the potential value to precision medicine of being able to access “the medical histories and genome sequences of the U.S. population.” His point, it seems, is that “big data” thinking enables us to contemplate using the data of each person, rather than generalizing from a sample of people. Actually acquiring anything approaching such a complete data collection, of course, is a non-trivial real-world challenge, as most in biopharma and healthcare recognize — and often lament. In biopharma, technical (as well as financial) limitations may stymie efforts to collect and subsequently analyze all possible information in human beings and other complex biological systems.


Siebel is clearly taken by the potential of AI in healthcare, while acknowledging “the health care industry is just starting to unlock value from AI. Significant opportunities exist for health care companies to use machine learning to improve patient outcomes, predict chronic diseases, prevent addiction to opioids and other drugs, and improve disease coding accuracy.”

He suggests machine learning algorithms can be used “to predict the likelihood someone will have a heart attack, based on medical records and other data inputs – age, gender, occupation, geography, diet, exercise, ethnicity, family history, health history, and so on – for hundreds of thousands of patients who have suffered heart attacks and millions who have not.” (This again assumes it’s possible to get one’s hands on enough of the relevant data to train the algorithm. That’s profoundly difficult in today’s environment, beset by the problems of data interoperability, patient data hoarding by hospitals, proprietary EHRs that can’t/won’t talk substantively to each other, and an ecosystem of stakeholders who aren’t inclined to share data.)
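Mechanically, the kind of risk prediction Siebel describes is a standard supervised-learning exercise. A minimal sketch, with entirely synthetic data and hypothetical feature names (real patient records and the interoperability problems above are exactly what this toy version sidesteps):

```python
# Hypothetical sketch: train a classifier on (synthetic) patient records
# to estimate heart-attack risk, in the spirit of Siebel's example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000

# Synthetic stand-ins for a few of the inputs Siebel lists.
age = rng.integers(30, 80, size=n)
exercise_hours = rng.uniform(0, 10, size=n)     # weekly exercise
family_history = rng.integers(0, 2, size=n)     # 0/1 flag

# Synthetic ground truth: risk rises with age and family history,
# falls with exercise, plus noise.
risk = 0.05 * age - 0.3 * exercise_hours + 1.5 * family_history
had_heart_attack = (risk + rng.normal(0, 1, size=n) > 3).astype(int)

X = np.column_stack([age, exercise_hours, family_history])
model = RandomForestClassifier(random_state=0).fit(X, had_heart_attack)

# Estimated heart-attack probability for a new (hypothetical) patient:
# 62 years old, 1 hour of exercise per week, positive family history.
new_patient = [[62, 1.0, 1]]
print(model.predict_proba(new_patient)[0][1])
```

The sketch trains in seconds on fabricated data; the hard part in practice, as the parenthetical above notes, is assembling a real training set at all.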

Applications for deep learning in healthcare, according to Siebel, include “medical image diagnostics, automated drug discovery, disease prediction, bone-specific medical protocols, preventive medicine,” though additional detail isn’t provided.

Perhaps especially relevant to medical practitioners, Siebel suggests that “the ability to apply AI to all the data in a dataset” means that “there is no longer the need for an expert hypothesis of an event’s cause.  Instead the AI algorithm is able to learn the behavior of complex systems directly from data generated by those systems….The implications are significant…An experienced physician [in Siebel’s future world, presumably] is no longer required to predict the onset of diabetes in a patient.”  Instead, this information can be gleaned “from data by the computer – more quickly and with much greater accuracy.” I am aware of glimmers of progress in this area, which has been discussed for over a decade.


Siebel suggests that connected devices “give doctors the opportunity to track patient health remotely in order to improve health outcomes and reduce costs.  By harnessing all these data, IoT supports doctors in predicting risk factors for their patients.”  He notes that pacemakers “can be read remotely and can issue alarms to doctors and patients, warning if a heartbeat is irregular.” He reports that the “wearable industry has given people the ability to easily track all sorts of health-related metrics.” Combining wearable information with clinical data, he observes, “can create a holistic view of the patient, allowing doctors to deliver better care.”

So far so good, right? But Siebel isn’t through. “Soon,” he contends, “humans will have tens or hundreds of ultra-low-power computer wearables and implants continuously monitoring and regulating blood chemistry, blood pressure, pulse, temperature, and other metabolic signals. These devices will be able to connect via the internet to cloud-based services – such as medical diagnostic services – but will also have sufficient local computing and AI capabilities to collect and analyze data and make real-time decisions.”

I’m not sure even most quantified selfers would embrace such a future; if anything, this vision seems to evoke folk singer-songwriter Arlo Guthrie’s memorable description of his military physical examination during the Vietnam War era, where “they was inspecting, injecting every single part of me, and they was leaving no part untouched.”

Siebel points out that large sets of IoT-generated data can “uncover insights and make predictions,” such as using “AI predictive analytics to find potential barriers to medication treatment and identify potential contraindications. This gives doctors the tools to more effectively support patients, improve outcomes, reduce relapse, and enhance quality of life.”

He continues, “Imagine pill bottles that track adherence to prescribed medications, alerting doctors and users when patients fail or forget to take their medication. Also in development are smart pills that can transmit information on vital signs after being ingested.” (I’m sure Otsuka can envision this quite clearly….)

Finally, if you’re not creeped out yet by this degree of monitoring, Siebel, in pointing out that “data generated everywhere through an organization can have value,” reports that today, “Insurance companies…work with mining and hospitality companies to add sensors to their workforces in order to detect anomalous physical movements that could, in turn, help predict worker injuries and avoid claims.”

In this vision of digital transformation the future of both work and health apparently involves, and certainly aspires to, ever-more detailed monitoring and assessment of every facet of existence. It’s a vision that sounds like total, continuous surveillance.

Not only is this approach exceedingly, absurdly, invasive, but it may not even deliver the cost-savings Siebel repeatedly promises, as my Tech Tonics co-host Lisa Suennen points out:

“Tech can only reduce healthcare costs when financial interests are aligned,” Suennen reminds us.  “Digital products for early diagnosis can just as easily lead to excessive testing and treatment when the impetus is to increase utilization (which increases cost).  It is true that technology such as AI and robotics have the potential to lead to cost-reductions in healthcare, but there is far more to it than technology alone.  As with all technology, it is a tool, not a solution.  When the solution one is solving for is to increase revenue, the tool can work just as well in the hands of someone who benefits from increased cost.”

In short, Siebel’s perspective on the ideal future state of healthcare feels both dissonant (I’m not sure most people want to be constantly monitored for failure, like IoT-enabled equipment under a technician’s constant watch) and elusive (given the challenge of gathering even modest amounts of integrated health data in one place); moreover, as Suennen argues, it may not even deliver the beneficial economics Siebel anticipates.

Digital Transformation: Implications For Organizational Change

In contrast, Siebel’s observations on barriers for organizations contemplating digital transformations seem thoughtful and highly relevant, particularly regarding data, people, and prioritization.


Siebel’s premise is that “successful digital transformation hinges critically on an organization’s ability to extract value from big data,” and a key initial challenge is how to organize all the data in the first place. But the good news, argues Siebel, is that large established companies are starting on their journey with one key advantage: they’re already sitting on a lot of data (though unlocking value from these data might be another story).

Argues Siebel, “incumbent organizations have a major advantage over startups and new entrants from other sectors. Incumbents have already amassed a large amount of historical data, and their sizable customer bases and scales of operations are ongoing sources of new data.”

He acknowledges, “Of course, there remain the considerable challenges of accessing, unifying, and extracting value from all these data.  But incumbents begin with a significant head start.”

The challenge is what to do with all these legacy data.  The temptation is to put it all in one place, a so-called data lake or data swamp. Not smart, Siebel argues.

“Storing large amounts of disparate data by putting it all in one infrastructure location does not reduce data complexity any more than letting data sit in siloed enterprise systems. For AI applications to extract value from disparate data sets typically requires significant manipulation such as normalizing and deduplicating data,” Siebel observes, adding that the key big data challenge “is to represent all existing data as a unified, federated image.”


To operate in this brave new world requires comfort with both the data and the emerging ways of thinking about data. Writes Siebel: “Generating value requires individuals in the enterprise who are able to understand all these data, comprehend the IT infrastructure used to support these data, and then relate the data sets to business cases and value drivers. The resulting complexity is substantial.”

Interestingly, and (based on my experiences over the years) perceptively, Siebel calls out what he describes as a common mistake: overconfident CIOs who mistakenly (in his view) believe they can assemble the required data and analytics structures on their own, DIY-style. Siebel says he’s observed this sort of misplaced confidence since his time at Oracle, selling enterprise application software, when he realized the biggest barrier wasn’t competitors but the CIO who wanted to solve the problem DIY – and who, according to Siebel, generally failed. (Again: take with a grain of salt, given Siebel’s obvious interest in selling enterprise software.)

Siebel notes that companies obviously require more than just data experts – they also need “translators” who “can bridge the divide between AI practitioners and the business.  [Such translators] understand enough about management to guide and harness AI talent effectively, and they understand enough about AI to ensure algorithms are properly integrated into business practices.”

But what companies seem to need most of all, according to Siebel, is a ton of consultants – or as he politely refers to them, partners: “In a digitally transforming world,” he says, “partners play a bigger role than in the past.” He explicitly writes that companies should involve management consultants for strategy, software partners for technology, professional services firms to build apps, and change management partners to get people to use the new tech. Suddenly, you can begin to understand why “digital transformation” is so broadly embraced: it’s like an Oprah giveaway, but for consultants (YOU get more consulting work, and YOU get more consulting work, and YOU get more consulting work…).


While Siebel’s advice regarding consultants feels a bit self-serving, his advice about prioritization seems spot-on, and certainly aligns with what I’ve been suggesting, as well as with the advice that experts I admire, like Jim Manzi, seem to be offering.

Above all, says Siebel, focus on business needs, not abstract, highfalutin aims. “Work incrementally to get wins and capture business value,” he emphasizes. Much as Vizzini, in The Princess Bride, famously advises “never get involved in a land war in Asia,” Siebel counsels (perhaps for similar reasons): “Do not get enmeshed in endless and complicated approaches to unify data. Build use cases that generate measurable economic benefit first and solve the IT challenges later.” He also suggests adopting a “phased approach to projects,” seeking opportunities to “deliver demonstrable ROI one step at a time, in less than a year.”

He notes that “Many organizations get hopelessly mired in complex ‘data lake’ projects that drag on for years at great expense and yield little or no value,” and cites multiple examples of companies wasting years and large sums with “outside consultants to build a unified data model, only to see no results at all.”

While the use-case-first approach “may sound like heresy to a CIO,” Siebel says that it “allows for focus on the value drivers.” The emphasis of a digital transformation strategy, he argues, should be “creating and capturing economic value.” Fulfilling this value mandate requires a thoughtful roadmap and prioritization, “identifying and prioritizing functions or units that can benefit most from transformation.”

Finally, counsels Siebel, use common sense. “If a project does not seem to make sense, it’s because it doesn’t make sense. If it appears incomprehensible, it is likely impossible. If you do not personally understand it, don’t do it.”

Figuring out how to apply this admonition to use common sense to areas like healthcare and biopharma – where the benefits touted by technologists often don’t seem sensible (as both Derek Lowe and I observed this week), but in some cases, could be truly transformative — represents both the challenge and the opportunity of our moment.


Atomwise and EQRx: Two Contrasting Strategies for the R&D Inefficiency Problem

David Shaywitz

Pharma innovation expert Bernard Munos captures the inherent inefficiency of drug development with two fascinating statistics he recently shared with me. 

First, for large pharmas, the average cost of developing a new drug (simply based on the total R&D costs divided by the total number of new drugs approved for sale) works out to about $5B per drug. It’s an astronomical number, and one that keeps growing to a worrisome degree. The Munos analysis encompasses both the cost of failures and what he calls the cost of scale. In contrast, the actual cost to get a single drug approved for smaller companies – an analysis that omits the cost of failure because it doesn’t look at the many small companies that tried to advance drugs and failed – works out to a bit over half a billion dollars, or about 10-fold less.
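The first Munos statistic is simple division, but it’s worth seeing how quickly the numbers compound. A toy version of the calculation, with illustrative round numbers (not Munos’s actual inputs):

```python
# Back-of-envelope version of the cost-per-approved-drug calculation
# described above. The figures below are illustrative placeholders.
total_rd_spend = 50e9   # hypothetical: combined large-pharma R&D over a period
drugs_approved = 10     # hypothetical: new approvals over the same period

# Dividing total spend by approvals folds the cost of every failure
# into the price tag of each success.
cost_per_drug = total_rd_spend / drugs_approved
print(cost_per_drug)  # 5e9, i.e., roughly the $5B figure Munos cites
```

The same arithmetic applied only to companies that succeeded (ignoring the many that failed outright) yields the much lower half-billion-dollar figure for smaller firms.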

One implication of these data is that in large pharmas, drug discovery seems terribly inefficient, with huge amounts of money going into products that never become approved drugs. Another implication, says Munos, is that large pharmas are, theoretically, quite vulnerable to disruption, since they “need every day of their patent life to recover that cost and fund an ever-growing R&D budget that keeps producing the same output.” That’s another way of saying their existing operating model requires extracting all available revenue from existing approved products.

It hasn’t escaped anyone’s notice that it would behoove pharma to make R&D more efficient, as even small increases in the rate of success at any stage would be expected to translate into improved overall R&D efficiency. However, achieving such efficiency gains has remained remarkably elusive, despite the hundreds of millions of dollars that have been spent on management consultants, and despite the execution of continuously refreshed restructuring initiatives generally driven by said consultants.

Two very different companies making news at JPM20 say they have an approach that could make a dent in the R&D statistics: Atomwise, the AI-for-drug-discovery company led by Abraham Heifets, and EQRx, former VC Alexis Borisy’s ultra-buzzy, on-Zeitgeist fast-follower newco. The two focus on dramatically different aspects of drug development, yet they share an approach that’s worth a closer look.


San Francisco-based Atomwise, founded in 2012, seeks to use AI to accelerate the identification of promising molecular compounds, with a particular emphasis on drugging the undruggable.   In the last week, they’ve announced a new partnership with the accelerator BioMotiv, and the extension of a 2017 collaboration with Bayer.

Atomwise’s thesis is that while the overall probability of success (POS) for any early stage compound is quite low, the actual POS is naturally much higher if you remove a key aspect of the risk; one way of accomplishing this is by targeting something you are certain will have an impact on disease, if only you could access it.  The thinking is that often, new disease targets represent, at best, hopeful, educated guesses, but still involve a huge amount of biological risk – as well as the many other risks (such as safety, tolerability, clinical efficacy) associated with getting a new chemical entity all the way to the point of FDA approval. 

Heifets argues that his platform, like CRISPR, is valuable precisely because it enables drug developers to physiologically manipulate established targets in a way that was previously unachievable.  As he writes, “the excitement around CRISPR, protein degradation, and RNA-targeting techniques is justified because these techniques offer us the chance to drug fundamentally new targets that were not otherwise attainable by other methods,” adding “The future of drug discovery is in using new technologies to drug the undruggable.”

Munos, for his part, worries that the targets Atomwise is attacking are not as de-risked as the company may assume. “There is no such thing as a validated, undruggable target,” he notes, explaining “the only validation that can be trusted is that which comes with a drug approval. Before that, targets may be interesting or promising, but they are not validated.”  He adds, “Most of the clinical trials that fail aim at targets that are thought to be validated.  Yet toxicity and insufficient efficacy are the most common causes of trial failure.” Munos’s comments echo the old pharma saw that the definition of a validated target is one where there’s already a drug with $1 billion in sales.


Cambridge, Mass.-based EQRx, announced this week, represents a response to the problem of costly drugs. Borisy, a former partner at Third Rock Ventures, says he sees a market opportunity in pursuing established targets, and essentially undercutting pricey first-to-market products. His thought is that by focusing on established mechanisms, you can make new drugs for much less money, because you anticipate a far lower failure rate (you know the target is both relevant and targetable) than the typical innovator company. This first requires making a new chemical entity that eludes the innovator’s original patents. Then, presumably, EQRx can design more efficient clinical studies by leaning on established examples. 

You can think of Borisy’s approach as “pre-generics,” perhaps (with apologies to the pre-cogs of Minority Report), although he aspires to make drugs that are somewhat better than first-in-class products. The economic argument is that reduced costs and development time will enable him to get new molecules onto the market before the first-in-class product goes generic, and to sell the fast-follower at an aggressively low price that still allows for significant gross profit margins. Borisy expects to be able to do this for multiple products. As Luke described it earlier this week, “the idea at EQRx is to use the bursting knowledge of biological targets and new treatment modalities to make fast-follower patented drugs that are sold at radically cheaper prices – maybe 50, 60, 70 percent cheaper than others in a given class.”

While noting the profound transformative potential EQRx would have if successful, by cutting deeply into pharma’s anticipated revenue over the patent life of an approved drug, Munos nevertheless remains skeptical:

“Given the long lead time of drug R&D, in order to reach the market before the pioneer drug becomes generic, the ‘fast-follower’ must get going long before the drug it follows gets approved. And if the lead drug stumbles, so does the fast-follower. EQRx apparently thinks it can tweak the fast-follower model by waiting until a drug has been approved — thus validating its mechanism — before it gets going and still reach the market long enough before the lead drug loses its patent. This would require an improvement in the speed of drug R&D that has never been seen before despite pharma’s decades of relentless efforts at process improvement (e.g., six sigma). It would be a monumental achievement.”

A Shared Focus on De-risking

While Atomwise and EQRx are focused on very different problems, both are leveraging a similar strategy: improve the overall probability of success by attacking something that’s already (somewhat) de-risked. For Atomwise, this means creating a new compound for an established target that no one’s been able to drug, and drugging it for the first time; for EQRx, this means creating a new molecule for an important target that’s already been drugged, and doing it faster/better/cheaper. 

Each is betting that while the overall economics around new drug development are dispiriting, the value proposition for a candidate drug that’s derisked can be far more promising.  Both companies, as Munos points out, face real challenges as they strive to deliver at the scale necessary to make the still-difficult math work. 

In some ways, Atomwise may have the easier lift.  Even if only a few compounds are ultimately successful, the individual drugs could support the growth of the company (assuming the company retains adequate economics in the products, which will apparently be developed by partners – this is a critical consideration). Atomwise could succeed even if the platform doesn’t meaningfully alter the grim R&D statistics for the industry as a whole. 

EQRx has not gone into significant technical detail about how, exactly, it will go about achieving its needed gains in speed and cost. But whatever technologies it brings to bear will have to be remarkable to achieve its founding promise. EQRx has to deliver multiple fast-followers through all phases of compound development and clinical testing, with enough speed, enough economy, and a high enough success rate. That’s a very high hurdle, though also a worthy ambition.


A New Cholesterol-Lowering Drug at a Low Price: Tim Mayleben on The Long Run

Today’s guest on The Long Run is Tim Mayleben.

Tim is the CEO of Ann Arbor, Michigan-based Esperion Therapeutics.

Tim Mayleben, president and CEO, Esperion Therapeutics

Esperion is bucking a few of the trends you’ve seen in biotech the past decade. It has developed a cholesterol-lowering drug, bempedoic acid. The drug is currently under review by regulators in the US and Europe. It is expected to be cleared for sale in 2020, likely on its own, and in a combo form with generic ezetimibe (once known under Merck’s brand name Zetia).

Instead of aiming the new drug at a targeted niche of patients with a rare disease, or certain genetic characteristics – the popular thing over the past decade — this is a drug being aimed at the masses. We have a lot of people in the US with high LDL cholesterol who are at high risk of heart attack, stroke, and death from cardiovascular disease.

Esperion is entering a crowded marketplace. On one end are the cheap, convenient, generic, orally available statins. These drugs were once Big Pharma’s bread and butter. On the other end, with a greater ability to bring down LDL cholesterol – but also with higher, brand-name price tags – are the PCSK9-directed antibody drugs. The overpricing of the PCSK9 class was a disaster (which I anticipated in a column back in 2012).

Esperion has studied that tale, and has sought to learn from it.

Heading into the 2020s, how does Esperion seek to carve out a niche for itself and compete? It does have a different scientific mechanism than others in the class of cholesterol-lowering drugs, but that’s not the main feature here.

The big idea – wait for it – is offering a potent, brand-name cholesterol-lowering drug at a low price. At least by today’s standards. It’s best to listen to Tim explain his thinking on price, which he does toward the end of the show. But without giving too much away, he believes it’s the right thing to do for patients, and for society. It’s also going to allow Esperion to make plenty of money and reward its investors – all of these goals can be achieved simultaneously. Maybe, just maybe, this is a drug that could still compete in a new world governed by something like Medicare-for-All.

Before we go into all of that, you’ll hear about Tim’s story. He’s not a scientist. He encountered some real challenges to get where he is today. Clearly, some of the values he picked up early in life have an influence on the decisions he and Esperion are making today.

Now, please join me and Tim Mayleben on The Long Run.


EQRx Taps Zeitgeist, Raises $200m For Innovative Drugs at Aggressive Low Prices

Alexis Borisy is smart. As in high IQ.

But there’s more to biopharma than that.

“As my grandmother used to say, ‘You can be smart, smart, smart…but dumb,’” Borisy said.

Alexis Borisy, chairman and CEO, EQRx

Some old wisdom is part of what’s driving Cambridge, Mass.-based EQRx. The company, started by the former Third Rock Ventures partner and backed by $200 million of “smart money” in a Series A venture capital round, has plenty of IQ. The industry has no shortage of that.

What’s different is that EQRx sees – and is seeking to attack — the weak underbelly of biopharma industry EQ, as in emotional intelligence quotient.

Not only has the industry committed many egregious pricing offenses over the past couple of decades, but its standard operating practices (opacity on prices, de facto permanent patenting strategies, and shady stalling of generics and biosimilars, to name a few) scream of arrogance and amorality.

Tone-deaf for too long, the industry needs to do better by patients. Pitchforks are out, rightly.

The founders and investors in this startup are well aware that drug discovery is entering a golden age. But price gouging has damaged the industry’s standing so much that it just might kill the golden goose.

That would be dumb.

The idea at EQRx is to use the bursting knowledge of biological targets and new treatment modalities to make fast-follower patented drugs that are sold at radically cheaper prices – maybe 50, 60, 70 percent cheaper than others in a given class.

Borisy, 47, believes the latest science and technology tools can be connected to a new biopharma business model that society can live with. He has the credibility to take on this task. (See July 2018 analysis of the TRV public portfolio). He was the founding CEO of Foundation Medicine and Blueprint Medicines, and has his fingerprints on several more portfolio successes.

There’s a window of opportunity to undercut first-movers on price to that extent, and still make “a ton of money” for investors, Borisy said. There’s reason to believe a pharma company making what he calls “equivalars” – new chemical entities that are as good as, or slightly better than, first-movers in a category – could still fetch 80 percent gross margins.

During an interview last week, Borisy was enthusiastic – more passionate than I’ve ever heard him in the dozen or so years I’ve known him – about creating new drugs with fast and lean development plans. About forming productive working relationships with payers and providers and cost-effectiveness research outfits like ICER. About recruiting smart drug hunters, computational people, and business thinkers willing to reinvent musty old models. About raising tons of capital beyond the original $200 million to accomplish an audacious objective of rolling out a first drug in five years, then 10 drugs in a decade, and dozens and dozens more within 15 years.

He kept making analogies to JetBlue and Amazon. These are companies in quite different industries. Both found flab in their industries, ruthlessly cut into it, passed on savings to consumers, lived on narrower margins, and still built thriving businesses.

For a guy who loves science, and who has spent so long thinking about traditional company-building, low-cost drugs may sound like heresy. If EQRx is even faintly successful in the next five years, it will invite the full competitive wrath of entrenched companies with billions of dollars on their balance sheets. There are easier things to do in life.

But while taking some time off at his Cape Cod house to chill out after his run at Third Rock, Borisy said this was an idea that he couldn’t shake.

Melanie Nallicheri, president and COO, EQRx

“I tried to put it down, but couldn’t,” he said. He invited friends over to talk, to flesh out the idea, think about who would need to be involved, and what it would take. Melanie Nallicheri, formerly the chief business officer at Foundation Medicine, was one of those people. She’s now the president and chief operating officer.

Why, I asked Borisy, would he go all-in on this?

“It’s the right thing to do,” he said.

A few of the key decision-makers involved are:

  • Alexis Borisy, chairman and CEO; former Third Rock Ventures partner
  • Melanie Nallicheri, president and COO; former Foundation Medicine chief business officer
  • Robert Forrester, CXO; former CEO of Verastem Oncology
  • Sue Hager, SVP of corporate affairs and citizenship; former chief communications officer, Foundation Medicine
  • Peter Bach, co-founder and advisor
  • Sandra Horning, co-founder and advisor
  • (Borisy hinted that more people in the payer and cost-effectiveness community will be getting involved, but aren’t yet ready to be announced).

Investors include:

  • GV
  • Arch Venture Partners
  • Section32
  • Casdin Capital
  • A16Z
  • Nextech
  • Arboretum Ventures

When I asked Borisy if there was hesitancy among investors about putting money into a company that’s explicitly about showing self-restraint and charging less than it could for new medicines, the answer was No. Some investors who wanted in couldn’t get in. At least for the A round.

Essentially, VCs are watching the political tea leaves carefully, and see candidates having a lot of success railing against drugmakers as part of the rationale for a Medicare-for-All program. The kind of price controls that would be necessary for such a program are anathema to many biopharma investors.

But a market-based company that undercuts complacent incumbents who are overcharging?

It’s unorthodox, but conceivable.

“We are fitting the zeitgeist of the moment,” Borisy said.

Borisy wasn’t saying exactly which sub-category has jumped to the top of the development list, but oncology is one category ripe for price competition. Rare diseases is another possibility. Small molecules and certain kinds of biologics can be made quickly at minuscule cost, making these the modalities of choice for disruption.

To get further perspective, I corresponded with GV partner Krishna Yeshwant and Peter Bach, the director of the Center for Health Policy and Outcomes at Memorial Sloan Kettering Cancer Center in New York, and a prominent critic of pricing abuses.

To give TR readers a little extra context, I’ll run these exchanges in mini Q&A form below.

Timmerman Report: Was this an obvious investment for you, or did you have to get over some hurdles before writing the check? If so, what persuaded you to go in?

Krishna Yeshwant, partner, GV

Krishna Yeshwant: I think when many people think of GV investing in healthcare they think of Health IT and Digital Health working in the payor/provider world (which we are active in of course), but as you know we have additionally been very active in biotech.  I’ve long found that the people in each of these groups don’t know each other. The people working in biotech venture and entrepreneurship don’t go to the same conferences as the payor/provider oriented investors and entrepreneurs and, with a few exceptions, generally wouldn’t be able to identify one another if they were in the same room.

I think that fracture is core to one of the large issues in the healthcare industry. Namely that the therapeutics and the payor/provider worlds are polarized. Therapeutics companies often think payors are being unfair by not reimbursing their products, while payors are often frustrated that therapeutics companies don’t clearly define the value of the drugs they are trying to bring to market.

I was excited by EQRx because I loved the idea of connecting Alexis to the payors and providers who we’ve worked closely with for years via the payor/provider side of our investment activity. I wanted to be part of brokering the conversation between these two huge parts of the healthcare industry. My hope is that through this company we can move what has been a zero sum negotiation towards a more productive partnership.

Basically, it was obvious because we wanted to work with Alexis, and wanted to work to realign these parts of the industry. But [there are] many controversial points as well – as is the case for all of our most exciting companies.

Timmerman Report: Why did you decide to get involved?

Peter Bach: A lot of factors, but I think the twin theses behind EQRx are promising. New drugs in proven classes can plausibly be developed pretty efficiently nowadays, and although the current drug distribution and payment system is mostly upside down (higher prices can lead to greater market share), I think we are at a potentially transformational point, because the people actually paying the bills (taxpayers, patients, employers) are singling out specialty drugs as a pain point. So if we can get a compelling economic opportunity in front of them, then maybe this creates the demand the rest of the system will require to change into one focused on delivering cost savings when they are available, and rewarding lower-cost entrants with larger market share.

LT: I might be wrong, but I suspect you don’t have many industry involvements (given your vocal criticism of high drug prices). Were you skeptical at first when approached by EQRx?

PB: I don’t have a lot of industry involvements, but I very much think my professional life is intertwined with the industry. I know that my views on how we should prioritize or regulate prices, and on the extent to which access, rather than FDA approval, should be the metric of success, might diverge from many in the industry (and I am not shy about highlighting these differences), but I don’t intrinsically doubt the motives of participants in it. So I think I approach every conversation with an openness to further understanding, and in this case the proposition really aligned well with my priors about what opportunities had arisen that could actually play some small role in transforming the market – at least toward one where lower drug prices enable the garnering of market share, rather than impeding it, as a lot of evidence suggests is the case today.

LT: How did the principals persuade you to get involved?

PB: I have known Alexis and also Krishna for a long time. This wasn’t about persuasion; it was, at least from my end, a convergence around a common set of perceptions and motivations, and complementary skills and experience.

LT: What do you hope to accomplish by working with this company?

PB: I don’t approach things like that. I have always pursued opportunities because I think they are interesting and challenging, will give me a chance both to learn new things and to employ the knowledge and skills I already have toward important objectives that will have positive spillovers for others, and frankly because I get to be around people I like. Going into government met these standards, as did becoming a doctor, and I gauged that EQRx did as well.

LT: Do you think one company can provide enough market force to bring real downward pressure on prices, or is it going to take multiple companies following this sort of model to actually bend the cost curve down?

PB: I honestly don’t know, but because it is you, I will fall back on “the longest journey begins with a single step.”
