A frequent – and frequently correct – critique of entrepreneurs bearing technology is “your solution is not my problem.”
Healthcare – among many other domains, perhaps all domains – has been beset by “solutionism,” the idea that my clever technology will solve your hideously complex problem.
But perhaps it makes no more sense to instinctively reject this mindset than it does to reflexively embrace it. After all, technological advances have enabled profound scientific progress, as well as contributed to a range of benefits and comforts we take for granted.
But in each instance, someone had to figure out how to apply an emerging new technology to a relevant task.
A prismatic example may be found in the invention of the Post-It note. It was developed in the 1970s by a 3M research chemist named Spencer Silver and a 3M chemical engineer, Art Fry. Silver passed away this week; the fascinating story behind the Post-It was eloquently captured in his New York Times obituary by Richard Sandomir.
In the early 1970s, Silver had been trying to make a super strong adhesive that could be used in aircraft construction. Instead, he developed something that was comparatively weak, but which stuck to surfaces, peeled easily, and was reusable. The glue was patented in 1972, but despite Silver’s efforts to highlight its potential within 3M, it didn’t get much… traction. But he reached many colleagues, including Fry, who was trying to develop new products.
One day, at choir practice, Fry found himself wishing he had a way to mark the songs in his hymnal – the slips of paper he had been using kept falling out. But if only there was some way to make the slips stick to the pages…
The Post-It note was born.
The Post-It Note (original name: Press ‘n Peel) was introduced in test markets in 1977, and nationally in 1980. According to the Times, “There are currently more than 3,000 Post-it Brand products globally.”
When you think about how this invention happened, it’s clear that the technology – the novel adhesive – was developed first, and its properties and capabilities studied and understood. Then, the originator of the technology – Silver – went looking for something useful to do with it. It was unarguably a solution in search of a problem. And Fry eventually provided the problem.
Silver turns out to be in good company. Many accomplished entrepreneurs begin with solutions, and cast about for the right problem.
As serial entrepreneur Max Levchin, co-founder of the billion-dollar startups PayPal and Affirm, explains in Ali Tamaseb’s Super Founders (discussed here),
“Most companies that I’ve started have been these really half-baked ideas that were initially about technology. I don’t often look at the world as ‘There’s a big problem; what can I bring to bear to solve it?’ Instead, I sort of say, ‘I can do this cool thing. That’s a nice hammer. What’s a nail?’ Sometimes you have hammers looking for nails and there’s no value to be built. But a lot of times, you can actually look at something new, and say, ‘Oh, cool. Artificial intelligence – it will help us to X. Or virtual reality – this will be useful for Y.’”
In some sense, you can also view the “pivots,” so central to startup evolution, as an expression of technology’s search for a problem.
“That’s a nice hammer. What’s a nail?” – Max Levchin, co-founder, PayPal and Affirm
Tamaseb cited a number of familiar examples of technology applications that took a while to find their groove.
From the perspective of healthcare, the point is that figuring out how to use raw new technology in a notoriously complex domain like healthcare is inordinately difficult, but also vitally important.
Healthcare incumbents may have been right to reject many of the arrogant and ignorant technologists who showed up 10 years ago, certain their app would “solve” healthcare. Incumbents may also be right to look critically at the digital transformation imperatives consultancies are urging organizations to adopt post-haste.
But there needs to be a thoughtful middle ground between the false comforts of tech worship, on the one hand, and tech cynicism, on the other.
Like Art Fry, we need to thoughtfully engage with technology developers like Spencer Silver, ideally in an environment that cultivates such engagement and exploration, like 3M in the 1970s.
As Safi Bahcall writes in Loonshots (my Wall Street Journal review here), when 3M brought in a new CEO whose single-minded focus on efficiency squeezed out such chance encounters, innovation plummeted. It didn’t recover until another CEO restored the old system.
This lesson is especially relevant to healthcare.
To discover new medicines, it’s critical to provide “the intellectual space for tinkering and capitalizing on the chance observations and unexpected directions so important in medical research,” Nassim Taleb and I wrote in 2008.
The point: innovation blossoms in an environment and culture that affords adequate oxygen — time, space, and (critically, I’d argue) receptivity to novelty — for innovators to match promising new, and perhaps still somewhat raw technology with pressing, worthy, and suitable problems.
Have you ever met someone with Alzheimer’s disease?
Odds are you probably have. Odds are that question calls to mind the face of a beloved grandparent, neighbor, or family friend whom you’ve stopped by to visit, nervously clenching a bouquet of flowers. You greet them hopefully and wonder if they’ll remember you this time.
America’s population is aging. About 6.2 million Americans are living with Alzheimer’s disease. Over the next 30 years, that number is expected to more than double.
If you took a moment to speak with your neighbor’s caregiver on your way out the door, you also know that the devastating cost of this disease transcends the individual.
Caregivers — mostly untrained, mostly women — reduce their own social and economic activity to make time to tend to the ill, reporting up to 60 hours per week engaged in direct patient care.
Clinicians believe that “the main goal of treatment for AD is not necessarily to extend life but to improve function and maintain independence.”
So let’s think about it this way — if you were diagnosed with Alzheimer’s disease, what would it be worth to keep your mind?
What would it be worth to our society to keep 12 million more Americans functioning (and their caregivers free and productive) over the next three decades?
Turns out that last week, ICER decided on an upper bound.
“Using a similar modeling approach as our approach to modeling aducanumab, a treatment assumed to have no known harms that could maintain all patients in MCI [mild cognitive impairment] for the rest of their lives would result in threshold pricing of up to $50,000-$70,000 per year.”
Aducanumab, Biogen’s controversial monoclonal antibody that successfully clears beta-amyloid plaques from the brain, may or may not provide a clinically meaningful benefit to a subset of patients. That’s for the FDA to decide. Approvable or not, it’s certainly not the miracle drug ICER’s describing.
But the more interesting claim in this report is the organization’s insistence that $70,000/year represents the maximum value for a therapy that could successfully halt Alzheimer’s disease-related cognitive decline with zero side effects.
Don’t get us wrong — $70,000 per year would indeed yield a mind-meltingly high return (assuming successful treatment of 6 million Alzheimer’s patients, that’s $420 billion a year). That’s not only unfathomable but unrealistic — that’s almost twice what our country spends on all branded drugs in a year.
However, the idea that such a wonder drug is on the immediate horizon is as unrealistic as anticipating a $420 billion/year return on any single drug.
The development process in Alzheimer’s disease is more likely to be incremental — a 5% gain in function here stacked onto a 10% improvement there. And for drugs that get us to that goal incrementally — a few percent at a time — anchoring to the wrong number could result in the first few drugs being undervalued, reducing interest in continuing the effort to address this formidable disease.
Let’s suspend reality for a moment and imagine that the FDA approved a drug deemed clinically meaningful in a subset of patients but that still represented just one step towards the greater goal of halting all Alzheimer’s disease-related cognitive decline.
Imagine that ICER recommended a price similar to that proposed for aducanumab ($2,560/year). Now consider whether America’s insurance system would allow all eligible patients to use that drug.
Would it deter many by imposing high out-of-pocket costs or making patients buy it out of their deductibles? Would payers try to block companies from helping with patient assistance programs, imposing copay accumulators to ensure that patients felt the cost even after requiring prior authorization to ensure that the drug was appropriately prescribed?
The end result might be that such a drug is used by maybe half of all eligible patients, so in practice, the reward for its invention may end up totaling less than half of what ICER judged appropriate. And if, recognizing those barriers, a company tried to charge more upfront, ICER would be the first to protest.
Because ICER’s math doesn’t account for genericization, they’re technically arguing that it’s worth spending $420 billion a year on this wonder drug forever. Over a century, that’s $42 trillion (ignoring inflation for simplicity). Indeed, that’s in the ballpark of, but actually below, the Alzheimer’s Association’s current estimate that by 2050, the US will be spending $1 trillion per year on Alzheimer’s care.
Luckily for all of us, even a $70,000 a year wonder drug would eventually go generic (or biosimilar).
So if it’s worth spending $42 trillion on this disease over a century per ICER, then it would be a great value for us to spend only $7 trillion.
To put it in context, $7T is so much money that we could reward each of 30 drugs for 15 years with as high a reward as adalimumab (Humira), the biggest drug in history (sales of $15 billion a year), after which each of those drugs would go generic.
Break it down a little further and $15 billion/year, if spread across all 6 million Alzheimer’s patients, comes out to $2,500/patient. It’s a price that might pass muster with ICER. But if it is used by 200,000 patients while branded (and then more when it’s generic, which often happens), then $15 billion a year suggests a price of $75,000 per patient.
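For readers who want to check the arithmetic, the back-of-envelope figures above can be reproduced in a few lines of Python. All inputs come from the text; this is an illustration of the math, not a cost-effectiveness model:

```python
# Back-of-envelope arithmetic from the discussion above.
patients = 6_000_000           # Americans living with Alzheimer's today
icer_ceiling = 70_000          # ICER's upper-bound price, $/patient/year

annual_spend = patients * icer_ceiling
assert annual_spend == 420_000_000_000            # $420B per year

century_spend = annual_spend * 100                # no genericization, no inflation
assert century_spend == 42_000_000_000_000        # $42T over a century

# Alternative framing: 30 Humira-sized drugs ($15B/year each), each
# rewarded for 15 branded years before going generic.
humira_scenario = 30 * 15 * 15_000_000_000
print(f"${humira_scenario / 1e12:.2f}T")          # roughly the $7T figure

# $15B/year spread across all patients vs. 200,000 branded users.
print(15_000_000_000 / patients)                  # $2,500 per patient
print(15_000_000_000 / 200_000)                   # $75,000 per patient
```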
So it would have been more relevant and worthwhile for ICER to have answered the question: What should be the upper limit on a genericizable drug that solves Alzheimer’s dementia? Given ICER’s math, the answer would be something closer to $400,000/year ($42 trillion over 15 years, divided by 6 million patients). And that’s before factoring in caregiver spillover, as Milliman recently did and economists at the National Bureau of Economic Research did, which would push that value even higher.
There are other variables ICER’s math ignores. Consider the value of the peace of mind that all of us would feel if we knew that we’d be spared some degree of dementia should we face an Alzheimer’s diagnosis. That’s arguably worth most of all — we won’t speculate on a value, but there are economists who do (they aren’t at ICER).
Even an incremental advance like aducanumab can benefit patients as it furthers our understanding of a disease — and can be a valuable piece of a future combination treatment or waystation to something more effective. ICER does not recognize that the drug will eventually face biosimilar competition that will drive down its price, making it much more cost-effective over the long run. ICER also fails to account for the value of delayed disease progression and the health, financial, and emotional benefits that accrue to patients and caregivers as a result of those delays.
Given these neglected variables, it’s critical that we not accept as gospel that solving Alzheimer’s dementia is only worth $70,000/year. If we do, we may never reach our goal, instead undervaluing and failing to reward the increments of progress that will allow us to get there.
Once we grasp the value of the Alzheimer’s moonshot, it becomes easier to appreciate that while the rewards of spurring that effort may seem large, they are worth it to society. This kind of math should guide our willingness to pay, but doesn’t mean that we’ll actually have to pay anything close to that. Companies face constraints on pricing, often from competition, that keep drug prices well below their societal value, as economists have demonstrated for the cholesterol-lowering drug atorvastatin (Lipitor).
The point is not to judge whether treating any one patient with any one drug is worth it, but to ask whether we’re making forward progress towards our larger goal of beating this disease.
If after 50 years of high rewards and aggressive research we find ourselves not much further along, we can revisit whether the threat and burden of Alzheimer’s is just something we have to accept in our lives. But we’re only just starting to crack the code of this disease.
For now, we believe that society has too much to gain to risk underpricing the hope of progress.
RA Capital is a registered investment adviser. This material is not intended, and should not be construed as, investment advice or recommendation to invest in any security. Likewise, this material is not intended as a solicitation to invest in any RA Capital product or service.
Many of us first experience exercise as kids playing sports.
Think back to the coach yelling at you and your teammates to run the extra lap. Maybe this helps explain why fitness and competition seem inextricably linked to so many adults, so many years later.
Digital fitness companies know this. Many digital fitness offerings lean heavily into the competitive aspect, urging us forward with leaderboards. The apps, and the coaches on the screens, are always encouraging us to beat our previous times or strive for an improved physique. One digital fitness company with a $1 billion valuation was recently described by Wired as “a home gym for folks who want to get ripped.”
I’m not sure this framework works for everyone. It may not even work for most of us.
Many people who would reap big health benefits from more regular exercise are actively discouraged from even getting started by this relentless emphasis on competition and maximizing performance. This aversion to contrived competition may be especially acute among the least fit — the “future former couch potatoes” — who aren’t likely to be living near the top of leaderboards any time soon.
Other potential participants — reasonably — might be inclined to focus not on achieving a new personal-best one-mile run or developing Terry Crews pecs, but rather on trying to maintain an existing level of performance, or attenuating age-related decline.
Yet, almost everywhere I look in the digital fitness world, I see jocks-turned-entrepreneurs creating platforms that seem designed for similar athletes — participants who tend to skew relatively young, and seem motivated (or motivatable, at any rate) by competitive zeal.
Perhaps these platforms are competing for a particularly high-value demographic. (Wall Street firms are apparently populated by some serious athletes, for example.) Alternatively, developers of digital fitness companies simply may be oblivious to the opportunity to serve a large population of potential users who are turned off by the emphasis on competition.
Motivating exercise is an intrinsically difficult problem. Exercise is, by definition, hard work, and pointless except as an end in itself. (Unfortunately, it also happens to be good for you.)
As Daniel Lieberman, a human evolutionary biologist at Harvard University, discusses in Exercised (see here), purposeful activity was an important aspect of early human societies, generally involving the pursuit of basic needs (food, shelter, safety, and community — e.g. ceremonial dancing). But until modern times, no person, and probably no living creature, would ever engage in gratuitous activity, and expend precious calories solely for the perceived long-term benefit of doing so.
Telling people to exercise because it’s good for them may work for some especially disciplined individuals, but clearly many have yet to be inspired. Putting a stationary bike or a treadmill in front of a TV set may help others, but this gets stale quickly.
A few new entrants in the digital fitness industry have shown that an engaging and immersive experience, led by an encouraging guide, can be captivating. The model for this, of course, is Cody Rigsby of Peloton, who inspires participants with his stories about topics from high school to pop culture to disease awareness, mixing in personalized shout-outs, while also adjusting the cadence and resistance of the ride.
What if you could take these aspects, and disentangle them from the leaderboard and performance component?
Imagine, for example, a treadmill program that led walkers, joggers, or cyclists on virtual tours of famous natural parks and historical sites, with progress through the tour keyed to your rate of movement. Maybe, virtually, you could stroll Versailles, walk the fields of Gettysburg, climb Masada, explore Robben Island, seeing the sites and learning history while you are exercising from home.
There are so many ways to integrate exercise, media, and education. A social component could even be woven in to foster camaraderie and community.
Existing companies have components of the sort of offering I’m envisioning. Peloton, for example, has not only recognized the value of motivators like Cody (distinguished by their extraordinary ability to engage and relate), but more recently has added scenic ride options for participants, an alternative to the usual class format.
Other fitness companies are pursuing distraction via gamification – e.g. Zwift for serious indoor cyclists (a good interview with co-founder and CEO Eric Min is here), and the VR-delivered Supernatural (reviewed here) – though it’s hard to imagine Grandma drawn to either one.
The company that seems most interested in targeting a diverse range of participants (diverse in age and athleticism if not in disposable income) is Apple.
First, Apple (not surprisingly) has developed remarkably good hardware – Apple Watch, for example, out-performed a number of competitors in a well-executed academic study out of Jessilyn Dunn’s group at Duke. These data are in line with my own recent disappointing experience with another wrist wearable; in my direct comparison, Apple Watch measured heart rate more reliably, while the other product produced conspicuously spurious readings on several occasions.
Second, Apple is already a media company, so it has both the resources and the mindset to make its hardware products quite interesting. Apple seems to have started developing a rapidly-expanding range of offerings, through the “Fitness+” platform, to motivate participants, including walkers (via the “Time to Walk” feature – I’ve discussed here) and older individuals (“Workouts for Older Adults”). The overall strategy feels brilliant.
I appreciate the insight that competitive athletic entrepreneurs bring to the fitness platforms so many seek to develop, and in which many competitive athletic investors seek to invest (there seems to be a proliferation of athletic-focused venture funds these days).
But as we’re envisioning the future of digital fitness, and thinking about how to leverage these emerging tools to sustain health, let’s ensure that the limits of our collective entrepreneurial imagination aren’t defined by the singular framework of competitive athletics.
Today’s guest on The Long Run is Daphne Zohar.
Daphne is the founder and CEO of Boston-based Puretech Health.
Puretech has been around since 2005, seeking to capitalize on some of the big trends in biotech.
It started out seeking to test concepts from academic labs that could be the basis for new biotech companies – what’s now commonly called the venture creation model. This work led it to start an eclectic batch of companies focused on wide-open fields like the microbiome (Vedanta Biosciences), digital therapeutics (Akili Interactive) and obesity treatment (Gelesis).
These companies and others now operate with a degree of independence, with Puretech as a top shareholder.
More recently, Puretech itself has transitioned into what could be considered a more traditional biotech R&D company. It has a thesis focused on what it calls the Brain-Immune-Gut axis, specifically on treatments that intervene in the lymphatic system. It’s now seeking to take therapies further along in clinical development on its own.
Daphne has been there through it all, as the driving force. One of her earliest supporters at the beginning of Puretech was Bob Langer, the famous bioengineering professor at MIT and prolific entrepreneur (and previous guest on The Long Run). Langer gave her an early vote of confidence as co-founder of Puretech, and he’s still on the board today.
When I asked Bob about his first impression of Daphne 15 years ago, when she was just getting started as an entrepreneur, he wrote:
“I thought she was smart, very determined, wanted to do important health related things, had definite leadership ability, and really wanted to make things happen. And she has.”
In this conversation, Daphne discusses her journey and her longstanding efforts to apply science for the betterment of human health.
Please join me and Daphne Zohar on The Long Run.
Every one of those tests has been reported to patients as an either/or result: positive or, more usually, negative.
These tests are capable of doing much, much more than just giving a simple yes/no answer.
The “q” in RTqPCR stands for “quantitative”: the test measures viral load. We know viral load matters a lot, and every RTqPCR run reports the “q” as a Ct value: the number of amplification cycles it took to detect a real signal from your sample.
Why do we systematically discard this data which is absolutely critical to both public health and clinical decisions?
Without an apples-to-apples standard for comparing RTqPCR results, an individual with a high viral load of 3,000,000 copies/milliliter will get the same positive lab result as another with a low 50 copies/ml. The clinical and public health implications of these two results are decidedly different.
The value of quantitative readouts has become more clear in recent months. The higher the viral load, the higher the chance of serious disease, hospital admission, and transmission to others, non-pharmaceutical interventions (NPI) notwithstanding: physical distancing; mask wearing; well-ventilated spaces; quarantine; etc.
Simply put, the higher the viral load, the higher the chance of a patient ending up in the ICU.
The lower the viral load, the less likely the patient is capable of viral transmission. Asymptomatic patients and those with only mild symptoms are less likely to transmit the virus to others because there is less virus to be broadcast in aerosolized respiratory droplets. Yet patients are recorded as positive long after any likelihood of transmissible infection: those recovering from COVID-19 often remain PCR-positive for days or weeks, even up to six months, when the test may be detecting only residual viral fragments.
The first evidence these patients may get is when they take a required PCR test two days before a flight — then they are surprised to find out they are “positive.”
There is consensus that frequent, while-you-wait community testing is best to inform both individual and public health actions. Traditional laboratory high-volume RTqPCR testing is automatically disqualified – it is too expensive to be used “frequently” and too slow to be “while-you-wait” (fastest results take 12-24 hours, and during the peak of the epidemic it stretched to 7-14 days – effectively useless except to inform an historical perspective).
Rapid antigen tests are cheap enough to be used frequently, and sensitive enough (~95%) to return a positive result during the period of highest viral load, when individuals are infectious.
How do we judge what level of viral load is infectious?
Every infectious disease has a minimum quantity of virus necessary for an infection to take hold. This varies by virus and by an individual’s immune competency. The laboratory gold standard for determining it is to infect a cell culture with SARS-CoV-2 virus; these experiments indicate that anywhere from 1,000 to 100,000 viral copies per milliliter are required to create a viable COVID-19 infection.
There are caveats to these in-vitro lab tests – lab grown cell targets may not be representative of cells in patients; lab growth conditions are artificially optimized; there is no innate or adaptive immune reaction in a petri dish; etc.
However, this threshold is consistent with clinical experience — very few patients with moderate to severe disease sample below this level. (See this ASU T3 Blog from October 2020 “COVID-19 Test Accuracy – when is too much of a good thing bad?” for a fuller discussion of these issues and a bibliography.)
Why even perform RTqPCR tests at all?
Because the “q” is important in two ways: informing a patient’s clinical care; and if it can help identify individuals destined to become infectious early enough to pre-empt them from transmitting to others.
The key question is just how long is the pre-infectious stage?
Is it very brief, as the green segment of a September 2020 NEJM timeline suggests, or is it longer?
If the ramp up is fast and steep, the chance of any pre-infectious person receiving a traditional high sensitivity test quick enough is diminishingly low. Reliable data is hard to come by, since very few people are identified early enough to initiate the frequent repeat testing required to provide it.
Recent case reports from Caltech and the Pasadena Department of Public Health suggest that this detectable pre-infectious period can be as long as four days, especially among younger individuals. This implies there is higher value in detecting individuals before they can infect others.
This finding reinforces the need for either rapid antigen serial testing (FDA announcement) or higher-sensitivity (10³ viral copies/ml or better) emerging point-of-care molecular tests (i.e. true PCR systems, more sensitive than most current POC LAMP systems).
PCR tests are exquisitely sensitive — the median test can detect 1 genome in a microliter of sample: the best are 100 times more sensitive; the worst 100 times less so, but even these are still able to detect the vast majority of infected individuals.
Every test reports a “q” — the cycle threshold (Ct, aka Cq, or Cp). However, the specifics of the test protocol affect what this Ct is — the same sample tested with different protocols will likely not generate comparable Ct numbers: it will vary protocol to protocol based on differences in pre-PCR processes (sample collection; use of transport medium; cDNA generation; reagent selection and purity); locations and base content of genome regions selected; primer design to bridge those regions (e.g. off-target binding or primer-dimer formation); probe design to detect amplified product; efficiency of the PCR cycler instrument used; etc.
Across all samples run on the same protocol, Ct will accurately reflect relative viral load differences sample to sample.
Generating comparability beyond that requires each lab to publish what is called a “Standard Curve” for each test protocol it performs. This translates “apples and oranges” Ct counts to more comparable viral loads expressed in terms of number of viral copies per milliliter.
This is done by taking a sample of known viral concentration (available commercially) and running a series of 10x dilutions on the same protocol and recording the resulted Ct with each known level of viral load — for that specific test, run in that particular way, by that particular lab.
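As a rough sketch of how such a curve is built and used, here is a minimal Python example. The dilution series and Ct values below are hypothetical, invented for illustration; a real standard curve would come from a lab’s own runs of its own protocol:

```python
import numpy as np

# Hypothetical 10x dilution series of a reference sample of known
# concentration, each dilution run on the same protocol.
copies_per_ml = np.array([1e8, 1e7, 1e6, 1e5, 1e4, 1e3])
ct_observed   = np.array([14.1, 17.5, 20.8, 24.2, 27.5, 30.9])  # invented Cts

# The standard curve: Ct is linear in log10(viral load).
slope, intercept = np.polyfit(np.log10(copies_per_ml), ct_observed, 1)
# At 100% PCR efficiency the product doubles each cycle, so the slope
# should sit near -1/log10(2), about -3.32 Ct per 10x dilution.

def ct_to_copies(ct):
    """Translate a Ct from THIS protocol into viral copies/ml."""
    return 10 ** ((ct - intercept) / slope)

print(ct_to_copies(20.8))   # ~1e6 copies/ml for this invented protocol
```

The conversion only holds for Ct values produced by the protocol the curve was built from; a Ct from a different protocol needs that protocol’s own curve.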
As a demonstration we plotted Ct versus viral load for a more-or-less random group of 8 assays reported in the academic literature.
The vast majority of these have similar slopes, because they all use PCR, which at 100% efficiency doubles the amplified product with each cycle. However, they have very different intercepts. A reported Ct of 20 on the most sensitive protocol implies a relatively less transmissible 1,000 to 10,000 (10³–10⁴) copies/ml in the sample, while an apparently identical Ct of 20 on the least sensitive protocol means a highly infectious 10,000,000,000 (10¹⁰) copies/ml. At 100,000 (10⁵) viral copies/ml, Ct counts vary from 18 to 32.
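The gap between those two extremes can be made concrete with a small calculation. The slope below is the ideal one (100% amplification efficiency); the two intercepts are invented values, chosen only to land near the extremes reported above:

```python
import math

# Ideal PCR slope: the product doubles each cycle, so one 10x dilution
# costs 1/log10(2), about 3.32 extra cycles.
SLOPE = -1 / math.log10(2)

def copies_per_ml(ct, intercept):
    """Viral load implied by a Ct, given a protocol's standard-curve intercept."""
    return 10 ** ((ct - intercept) / SLOPE)

# Same Ct of 20, two hypothetical protocols with different intercepts:
sensitive   = copies_per_ml(20, intercept=33.3)   # ~1e4 copies/ml
insensitive = copies_per_ml(20, intercept=53.2)   # ~1e10 copies/ml
print(sensitive, insensitive)
```

Six orders of magnitude separate the two readings of the identical Ct, which is exactly why a raw Ct is meaningless across assays without each assay’s standard curve.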
To talk of viral load only in Ct terms is both misleading and effectively meaningless beyond the bounds of any one individual assay protocol, unless the standard curve is created and Ct translated to viral load. (There still remain non-analytic issues that can erode comparability, for example: SARS-CoV-2 is tissue resident so the load available to sample from the respiratory tract may not directly reflect active virus driving clinical outcomes for the patient and their risk of transmissibility.)
All labs do calculate a standard curve as part of assay/instrument calibration for FDA-cleared, quantitative assays. However, this type of calibration is not required, and rarely performed, for qualitative assays, even when, as with RTqPCR, the assays are inherently and robustly quantitative.
Current SARS-CoV-2 assays are approved by the FDA only for yes-or-no answers. The FDA has never before allowed a quantitative result to be reported from a qualitative viral test; doing so would require each assay’s calibrated standard curve, in viral copies/ml, underlying the yes/no answer, and would thereby allow cross-assay comparison.
Even though the FDA hasn’t done this before, it’s easy to see how it could clear the way for this more quantitative view.
SARS-CoV-2 calibration curves for each individual assay are straightforward to establish with the appropriate standards. Many are commercially available or have been established by clinical laboratories with an interest in robust understanding of the performance of their SARS-CoV-2 PCR assays.
However, this essential data is rarely reported — some clues appear in limit of detection claims, but very few standard curves are published outside academic literature.
It is frustrating and tragic that the major (perhaps only) advantage of qPCR is ignored. Practices must change to require that a standardized viral load measure is routinely reported to physicians, epidemiologists and patients to inform their critical decisions.
Clinical trials have given us a wealth of information about the effectiveness, and safety profile, of vaccines for COVID-19. But the work of gathering evidence, and weighing the results in the context of an ongoing pandemic, isn’t done.
The importance of developing population-based effectiveness and safety profiles associated with a mass vaccination campaign — the sort of deep datasets that go far beyond what’s possible in a controlled clinical trial — has been urgently demonstrated over the past three weeks.
Extraordinarily uncommon, but severe, adverse events have occurred with the administration of the AstraZeneca vaccine in Europe and the Johnson and Johnson (J&J) vaccine rollout here in the United States. Intense public scrutiny of these rare adverse events in adenoviral vector vaccines prompted a brief “pause” in administration of the J&J vaccine, after about 7 million doses were given in the US.
When a mass vaccination campaign is rolled out, adverse events are observed more acutely and more accurately than with the slow-trickle rollout of any other kind of vaccine or drug distribution. The infrequent becomes more frequent because the number of people vaccinated is so large: a one-in-a-million problem becomes one case per day rather than one every 2 to 6 months.
Critics of mass vaccination argue that these campaigns are fraught with difficulties. In some ways, this is true. Beyond the safety and efficacy profiles, there are logistical issues in mass production, quality control, and distribution. There are also the ongoing issues we’ve seen with limited access and certain populations being left out and feeling a continued sense of separation from the vaccination process, reinforcing a distrust of the entire vaccine effort and the people leading it.
It is also true that, from a historical perspective, mass vaccination campaigns have brought real risks: Guillain-Barré syndrome was associated with the 1976 swine influenza vaccination campaign.
Today, we have a new disease to study called vaccine-induced thrombotic thrombocytopenia (VITT). It’s sometimes called Thrombosis with Thrombocytopenia Syndrome (TTS). This phenomenon has now been linked to both COVID-19 adenovirus vaccines—AstraZeneca’s and to a lesser extent the J&J vaccine.
This rare event was detected because it was so unusual, much as Guillain-Barré syndrome was with the swine influenza vaccine. Patients affected by VITT presented with severe clots in the brain, and when surgeons examined and tried to treat them, the clots would recur right before their eyes. In addition, the platelets (the blood cells usually elevated in clotting disorders) were low, and alongside the clots there was bleeding.
Rare as it may be, physicians and scientists have seen something like this before: the observation resembled an unusual immune response to the anticoagulant heparin. Investigators in Europe, mainly in Germany and the UK, quickly described this phenomenon in recipients of the AstraZeneca vaccine. They detected, in patients’ blood, an antibody against platelet factor-4 that activates platelets, the cells that cause blood to clot. This antibody seems to put the platelets into overdrive, resulting in simultaneous clotting of the blood and depletion of the platelets, which causes bleeding.
Clotting and bleeding at the same time—this is a very difficult condition and highly unusual.
It is a clinical condition so unusual that it was instantly recognized, and it now seems clear that it is a rare side effect of the adenovirus vaccines. VITT appears after the first dose, generally in younger people, mostly but not exclusively in women, usually between 4 and 14 days after vaccination but as far out as 28 days.
We’ve now learned how to diagnose this disease with a blood test for anti-platelet factor-4 antibodies, using a sensitive ELISA assay. We can treat it by giving high doses of IV immune globulin to neutralize the autoantibody, and sometimes by administering steroids. It can also be managed with other kinds of blood thinners. Crucially, patients with this syndrome can’t be given heparin, which has been shown to worsen the disease.
The disease is rare but sobering; about 30% of the persons with intracranial thromboses and bleeds have died.
To date, in the United States, there have been 15 cases of VITT among the 7.5 million persons receiving the J&J vaccine; 14 of the 15 cases are in women, and almost all are under the age of 50. That equates to a case rate of about two cases per million vaccinated persons. A thorough review of the risks and benefits of the vaccines was performed by both the CDC and the FDA, and both organizations advised that people should be alerted to the possibility of VITT and should seek medical evaluation if they experience prolonged abdominal pain, worsening headache, or shortness of breath in the days after vaccination.
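The reported case rate follows directly from those two figures:

```python
# Figures cited above from the CDC/FDA review
vitt_cases = 15
vaccinated = 7_500_000

cases_per_million = vitt_cases / vaccinated * 1_000_000
print(f"{cases_per_million:.1f} cases per million")  # prints "2.0 cases per million"
```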
Further, the FDA and the CDC made the determination that the enormous number of lives saved by the J&J vaccine outweighed the risk of developing VITT and hence restarted the Emergency Use Authorization vaccination program.
To give a real-world example of the kind of personal risk-benefit ratio we’re considering: CDC data estimate that here in the United States, the odds of being struck by a car are about 1 in 4,292, and the odds of dying as the result of being struck by a car are about 1 in 47,273. Yet this is a risk we all manage nearly every day, usually without even thinking about it. VITT, of course, is a new risk related to a new vaccine, so yes, we are all understandably cautious, but it’s important to keep the risk in perspective.
The advantage we have at this point is that we know how to diagnose and treat it, so there’s at least a potential to lessen the impact of the disease. With this knowledge in hand, is it worth it? I think at the moment we have to look at the number of deaths in our country and globally from COVID-19 and weigh the risk of this rare and serious side effect against the overwhelming benefit of the J&J vaccine: fighting symptomatic COVID-19, keeping people out of the hospital, and keeping them alive.
And at the same time, we should continue to weigh the risk versus benefit as we learn more. The regulatory authorities, and the scientific community, should continue to communicate the risks and benefits of the vaccine in real-time as we gather more evidence.
We should, and I think will, continue to use the science to drive policy. There are clear benefits for the one-dose J&J vaccine during this ongoing pandemic. Given its less stringent cold-storage requirements, the J&J vaccine is often the only viable option for hard-to-reach communities, and it’s important to remember its effectiveness has been demonstrated in a well-controlled global clinical trial. It works not only against severe disease, but against a wide variety of variants. We also have an effective adverse event surveillance system set up—and the wherewithal—to rapidly diagnose and treat people who develop VITT.
This is an ongoing and important conversation, and there is much work to be done. The blood samples from the 44,000-person Phase III clinical trial need to be evaluated, and we need to determine whether a large percentage of people actually develop anti-platelet factor-4 antibodies.
If so, is it just a few who develop levels high enough to set this cascade off? If the response is uniform, then we’ll have to look more closely and determine what the risk-benefit ratio really is. Is it good news, because it means we can detect the syndrome early? Or bad news, because it means we’ll need to continue careful monitoring? Or both?
One thing is certain: we need to spend the time, energy, and resources to continue to ensure we do good surveillance.
Dr. Larry Corey is the leader of the COVID-19 Prevention Network (CoVPN) Operations Center, which was formed by the National Institute of Allergy and Infectious Diseases at the US National Institutes of Health to respond to the global pandemic, and the Chair of the ACTIV COVID-19 Vaccine Clinical Trials Working Group. He was intimately involved in the planning of the phase 3 vaccine studies conducted under the funding auspices of Operation Warp Speed. He is past President and Director of Fred Hutchinson Cancer Research Center, Professor in its Vaccine and Infectious Disease Division, and Professor of Medicine and Virology at the University of Washington.
The first iteration of the “Quantified Self” movement largely fizzled out about five years ago. Avid self-trackers, at the time, started to worry they were drowning in data, but lacking in insight.
Today, we seem to be entering Quantified Self 2.0. Once again, an expanding assortment of consumer devices promises to measure every parameter of our health and well-being.
The obvious question: “has anything changed?”
Let’s start with some context.
The “Quantified Self” movement was born in 2007, the brainchild of Wired magazine editors Gary Wolf and Kevin Kelly. The term was used to describe “a collaboration of users and tool makers who share an interest in self-knowledge through self-tracking.”
This initiative was propelled by powerful, emerging consumer technologies, as Lindsay Rothfeld captured in Mashable in 2014:
“Before things like smartphones [note the iPhone debuted in 2007] or wearables, we’d have to consult doctors or data technicians or manually log activities to determine how many calories we consume and burn. But now, with Fitbits, Fuelbands, Jawbones, and Whistles (even our dogs are tracking activity!), we can capture this data in a snap, see it updated in real time and use it to make better, more healthy decisions.”
Observing the evolution of this ecosystem back in 2011, I wrote,
“It will also be important to ensure that even as we recognize — and seek to capture, leverage, and ultimately monetize — the value associated with the collection of an ever-increasing amount of data, we also recognize that most people don’t want to be perpetually monitored (at least not intrusively). While there’s a much-discussed movement called “Quantified Self,” focused on capturing and sharing vast quantities of physiological data using sensors and other devices, this sort of excessive monitoring is almost certainly not something most of us want. One challenge will be figuring out how to capture useful physiological information in a way that offers benefit while also remaining unobtrusive and respecting privacy concerns.”
Putting a finer point on it in 2014, I noted the disconnect between the promise of digital health and the demonstrated impact. “The goal,” I wrote, “is to find solid evidence that a proposed innovation actually leads to measurably improved outcomes, or to a material reduction in cost. Not that it could or should, but that it does.”
I was not alone. After nearly a decade of escalating hype, many users started to take stock and ask what they had learned from such obsessive monitoring. Frequently, the answer turned out to be, “not much.”
Wired editor Chris Anderson, once an acolyte of the Quantified Self movement, seemed to put the nail in the coffin, tweeting in 2016:
“After many years of self-tracking everything (activity, work, sleep) I’ve decided it’s ~pointless. No non-obvious lessons or incentives 🙁 “
It seemed like this was the end.
But instead, it may have proved to be only a short Quantified Self winter.
Today, everywhere you look, there are companies promising to quantify nearly every aspect of your behavior and habits, your physiology and activity, your physical performance and your mental health.
Consumers are now offered continuous glucose monitoring, heart rate variability assessment, and even “brainwave feedback,” via devices which are claimed, respectively, to enable improvements in metabolic health (for non-diabetics), exercise recovery, and “mental strengths and weakness” (to enhance performance on videogames).
Less clear is whether anything has substantively changed. Are we measuring parameters in a fashion that’s now more accurate? More useful? Or are we just essentially repackaging old approaches with a fancier user interface and the promise of AI, yet selling consumers the same dubious message that more data inevitably equates to better insight into how individuals can improve their day-to-day state of health and wellness?
Prior experience (to be Bayesian about this) suggests we should remain skeptical. The fact that a device can generate a number and ascribe it to a particular parameter doesn’t mean that the measurement is either accurate or meaningful. We tend to measure what we can, which isn’t necessarily what we should. It’s also challenging to translate even reliable data into relevant insights, and notoriously difficult to translate actionable insight into durable behavior change.
At the same time, science evolves, technologies improve, and more importantly, entrepreneurs adapt. Sometimes, as venture capitalist Ali Tamaseb highlights in Super Founders, it takes a number of tries to get it right. Google was hardly the first search engine; Facebook was not the first social network.
At a minimum, we should remain open to the possibility that on occasion, someone will crack this difficult nut, and turn the promise of data abundance into durable evidence of meaningful impact.
I embrace this hope – and look forward to the evidence.
We’re drawn to stories. We understand the world through stories – both the narratives we read, and those we create and develop for ourselves.
This is a key message from “Super Founders,” Ali Tamaseb’s soon-to-be-published analysis of the factors behind “unicorns” – startups that attain a valuation of at least a billion dollars.
A venture capitalist at the deep tech firm DCVC, Tamaseb found himself wondering whether there were features of unicorns that distinguished them at an early stage from startups that wouldn’t go on to such prodigious success. It’s a relevant question for a VC who must sort through hundreds of companies a year to identify the most promising investments.
He was aware, of course, of the many popular narratives associated with outsized startup success. One story that’s been often told is about young founders who drop out of Ivy League colleges to pursue a technology-fueled dream (think Bill Gates and Mark Zuckerberg).
A related common narrative features founders relentlessly driven to solve a problem that has rankled them for years. Typically, successful founders are presumed to have refined their dream through an accelerator program like Y Combinator or Techstars, and are often imagined to have advanced an idea that was first-to-market, perhaps even creating a market.
A critical thinker, Tamaseb asked himself whether these narratives – and associated venture heuristics guiding investment – were true, pressure testing the conventional wisdom around startups in the way Stanford business school professor Jeffrey Pfeffer has challenged so effectively the comforting nostrums surrounding leadership and power.
Rather than simply rely on expert opinion, Tamaseb approached the question more scientifically and did the hard work of collecting and crunching the data. He conducted a case-control type of study, comparing the attributes of the 200+ startups founded between 2005 and the end of 2018 that achieved $1 billion or greater valuations with a similarly sized group of randomly selected startups, founded during the same time range, that had raised at least $3M in initial funding but never became unicorns.
Tamaseb is admirably upfront about the limitations of this approach, which he acknowledges doesn’t have the level of rigor of an academic study, and involves a lot of subjective interpretation on his part. He also notes that the key outcome measure – unicorn status – is hardly the only or even the best definition of startup success. But it certainly captures an achievement, and one that’s highly relevant for investors and employees holding equity in the company.
So what did he learn?
Most significantly, the data highlighted the limitations of conventional wisdom. The median founder age (at the time of founding) for future unicorns in his sample, Tamaseb learned, was 34. That’s far older than many might have guessed; moreover, the range was 18-68. That data point alone shows there’s no single stereotype of what a founder looks like, or what set of experiences primes someone to start a company.
David Duffield, for example, was 64 when he founded Workday, an enterprise software company for finance and HR functions.
Educational backgrounds were also quite varied among founders. One out of three (33 percent) had an advanced degree. That figure is even higher among founders of healthcare and biotech companies. Founders who were dropouts proved to be almost vanishingly rare.
The variety in educational background of the unicorn founders was matched by a similar variety of educational experiences in the non-unicorn sample.
Interestingly, while about a third of unicorn founders attended schools ranked in the top 10, another third of founders attended schools that were not ranked in the top 100, Tamaseb reports.
What about work experience? It turned out that fewer than 50% of founders of future unicorns had significant work experience or domain expertise in the area their company would pursue; this was equally true among founders in the non-unicorn sample. However, this pattern doesn’t hold for biotech and healthcare companies, where 75% of founders had relevant experience.
As a cautionary healthcare aside: Flatiron is presented (accurately) as an example of a healthcare company whose founders had no relevant domain expertise (Tempus, a precision medicine company started by the founder of Groupon, might have been another interesting example). However, the key inflection point for Flatiron, as I understand it, was when Dr. Amy Abernethy — a professor at Duke University and an expert in clinical trials, cancer outcomes research, and clinical informatics — joined and guided a company that was somewhat adrift at the time. Instead, the origin story that one of the founders shares with Tamaseb omits Abernethy, and focuses instead on a familiar narrative (often embraced by journalists) emphasizing the founders’ charming naivete (“We were fresh; we had no preconceived notions, no bad habits. We questioned everything”).
Perhaps the most prominent difference between founders of unicorns and founders in the non-unicorn sample was that 60% of unicorn founders had previous experience as startup founders, vs. 40% of founders in the non-unicorn sample. Moreover, among the founders with previous startup experience, 70% had been relatively successful in the unicorn sample, vs. only 25% in the control sample. (Here, success is provisionally defined by Tamaseb as achieving either a $10M valuation at exit or $10M in revenue, a relatively modest threshold.) Doing the math, this means that 42% of the unicorn founders had previously founded a relatively successful company, vs. only 10% of the founders in the control group.
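The “doing the math” step simply multiplies the two conditional shares; a quick check of the arithmetic:

```python
# Shares as summarized above from Super Founders
unicorn_repeat_share = 0.60          # unicorn founders with prior founding experience
unicorn_success_given_repeat = 0.70  # of those, share with a prior "successful" startup

control_repeat_share = 0.40
control_success_given_repeat = 0.25

unicorn_successful_repeats = unicorn_repeat_share * unicorn_success_given_repeat
control_successful_repeats = control_repeat_share * control_success_given_repeat

print(f"{unicorn_successful_repeats:.0%} vs {control_successful_repeats:.0%}")  # prints "42% vs 10%"
```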
Tamaseb was so impressed by this distinction that he called this group of founders “Super Founders” — and selected that name as the title of his book.
It’s also the finding that seems to guide Tamaseb’s current approach to investing. While “most VCs have thesis areas on industries, spaces, or ideas they like to see happen and when they pitch an opportunity to their partners,” Tamaseb tells me, “they first talk about the ‘what.’ My thesis is on ‘people.’”
“One of the key signals I look for is people who have built, hacked, and sold something of value before, even if that was a small outcome in the world of mega acquisitions and venture capital. I pay more attention to this than whether someone had a big leadership role at a FAANG [Facebook, Amazon, Apple, Netflix, Google] company or a high growth rate in the past month. When I source startup deals, I source people, decide on the founder, and then see what is the idea they are working on.
“I’d say the biggest difference for me is trying not to let my preconceptions about an idea, a founder’s background, how large a market is now, or competition get in the way of backing an extraordinary founder who will go against all those odds (most of which, as the book suggests, are actually not statistically significant, so it’s not even going against the odds, but going against stereotypes).
“Ideas change, companies expand to different markets or expand an existing market, competitors come and go, but a Super Founder is resourceful enough to go and create a giant company in the long run.”
At the end of Super Founders, Tamaseb writes:
“If there is one takeaway from this book, it should be that the path to a billion-dollar startup often begins with a bug for creating. The best preparation for starting a wildly successful company is starting a company. If you have never started a company, the best preparation for doing so is to start something, maybe a club or side hustle.”
I’d add an additional takeaway: all of us (investors, founders, and others involved in the ecosystem) need to liberate ourselves from the idea that there’s a definitive path to success, even from the concept of Super Founders. Remember: 40% of unicorns were founded by someone who was not a previous founder.
There is not a universal archetype of a successful founder.
Of all the popular founder narratives we can leave at the curb, or at least reframe, one is certainly the notion of startups formed to pursue a life-long mission. Despite the appeal and prevalence of these narratives, many unicorns evolved in a far more iterative and “top-down” fashion: a founder or founding team identified “a market, a customer type, or a trend and then hunted for problems to be solved.” (The founding of Incredible Health by Dr. Iman Abuzeid – described here – represents a particularly compelling example of how this can work.)
Entrepreneur and investor Elad Gil (he co-founded Color Genomics – see here), quoted by Tamaseb, offers valuable insight into why the iterative, top-down approach may be underappreciated.
“I think there’s a lot of founder myths in Silicon Valley that are kind of made up, and one of the reasons they’re made up is because it’s a more compelling story and the press wants to cover founder stories. They don’t want to cover companies. They want to cover the personal connection that the founder had going back, you know, to when they were five years old…”
In other words, narrative bias – bias in the way the media covers startups — underweights the frequency of what turns out to be a remarkably common if unsexy approach of iteratively seeking a market in a top-down fashion.
Some of the confusion may also arise because many companies develop missionary zeal once they discover their mission; it helps drive the team and the company.
As Gil observes, “belief in mission” can be present from the outset, but “sometimes it comes later as the company is successful and people realize that they’re onto something and then it turns into their life mission.”
As Tamaseb astutely recognizes: “You can be opportunity-driven but still love and have a passion for the product you are building or the customers you are serving.”
Indeed, this perspective (as I’ve discussed) is shared by Dilbert creator Scott Adams, who challenges the popular “follow your passion” trope, arguing that in his experience, “success caused passion more than passion caused success.”
At a minimum, this should free would-be founders from desperate vision-quests, anxiously searching for their life-calling.
If there’s a final takeaway, it might be Slack CEO Stewart Butterfield’s comment: “Be super lucky.”
This is echoed in a Stanford business school study Tamaseb cites, which asked early-stage VCs what qualities they looked for in investments, and then what qualities distinguished their successful portfolio companies. “Team” was the top answer in both cases, but both “timing” and “luck” also featured prominently, and specifically, in the look back.
Luck, as Tamaseb takes pains to emphasize, also includes privilege and the accumulated advantage that comes with it. He urges us to “acknowledge the role that luck, privilege, and access played in the success of many of these founders,” and specifically highlights examples “like having the privilege to drop out of school or work on building a startup without a salary, rather than taking a safe job to pay back student debt,” or “the privilege of coming from a family that has the right connections.”
Commenting that “while doing this research I could not help but notice the lack of diversity among these founding teams,” Tamaseb writes he will “donate proceeds from this book to nonprofits and charitable causes that help with upward social mobility and diversity.”
I hope the rest of us find as much meaning and value as Tamaseb clearly has from his thoughtful and considered exploration.
Today’s guest on The Long Run is James Sabry.
James is the Global Head of Pharma Partnering for Roche. He’s based in Basel, Switzerland.
He did his PhD in neuroscience at UCSF, and spent the bulk of his career in biotech in California. After leading a couple of startups, he joined Genentech in 2010 as vice president of partnering. It was a pivotal moment in the company’s history, as it was being integrated into Roche.
A lot has changed in biotech over the past decade, and James has been in a position to see it all at one of the industry’s leading companies. That includes everything from gene therapy to gene editing to cell therapy to targeted RNA medicines.
We talked in this episode about how things have changed over the years at Genentech and Roche, how James likes to approach the business development game, and what some of the megatrends are that make him optimistic about biotech over the next 20 years.
Please join me and James Sabry on The Long Run.