In late February 2020, when it became clear that COVID-19 was spreading around the world, our senior management team at Adaptive Biotechnologies took a gamble.
We had a hunch that our ability to evaluate T-cell immunity, combined with advanced bioinformatics, could help the scientific community better understand the novel coronavirus.
The entire world, it seemed, was focused on B-cell derived immunity, the antibody response, to SARS-CoV-2. We thought we had something to add. The data we could generate from our immune medicine platform and T-cell technology would capture a more vivid and information-rich picture.
With one phone call to our partners at Microsoft, the decision was made. The work we did together to build a TCR-Antigen Map for many different diseases supplied the tools and technology to take on this challenge. (TR coverage of Adaptive/Microsoft partnership, Jan. 2018.)
Peter Lee, Corporate Vice President, Research and Incubations, immediately offered us resources, including staffing, and brought in other partners like Providence, a health system with 51 hospitals, to collect patient samples. Deals were being made with phone calls and handshakes. Deals took shape in days, not months. I had never seen anything like it.
From the outset, we agreed with our partners on the importance of data sharing. No one company or institution would be able to make the pandemic disappear. We made the data available freely via an open access portal, ImmuneCODE™.
More than 20 academic and industry collaborators from 7 countries joined us. While many vaccine manufacturers received billions to accelerate research of novel platform technology, we moved forward on our own without government grants or other government support. Organizations from around the world were offering patient samples, including our cancer research partners at Hospital 12 de Octubre, i+12/CNIO in Madrid, Spain, as well as the University of Padua and Ospedale San Raffaele in Italy.
We started with a few basic questions.
First, we wanted to know whether it was possible to detect a past COVID-19 infection from T cells in the blood. Yes or No?
T cells are the first responders of the adaptive immune system and support the antibody response. These cells carry specialized receptors, and the receptor repertoire must be extraordinarily diverse, because each receptor recognizes only one or a small number of the millions of disease-specific antigens to which our bodies are continuously exposed.
When your body encounters a pathogen, T cells arise first and often last longer than antibodies.
Why is this important?
Antibodies tend to surge in the presence of an infection. Their concentrations diminish when the infection passes. That’s normal. We also know that although antibodies may be harder to detect a few months after the initial infection, that doesn’t necessarily mean immunity has been lost. T cells may have been primed, and may still be hanging around in the blood.
It’s up to us to see if the T cells are still there, and if they are still capable of mounting an immune response in case there’s another encounter with SARS-CoV-2.
Testing T-cell levels in response to COVID-19 may therefore be an important complement to antibody testing to identify recent or prior infections. One of our collaborators, a team led by Dan Barouch at the Center for Virology and Vaccine Research at Beth Israel Deaconess Medical Center in Boston, looked at blood samples from 20 people vaccinated with the Johnson & Johnson COVID-19 vaccine and found that while antibody concentrations dropped against certain variants of concern in the 71 days after vaccination, the T cell immunity was largely preserved.
This encouraging finding was published June 9 in Nature.
To understand how the immune system responds to a disease like SARS-CoV-2, we need to translate the diverse genetic code of T-cell receptors into data that can be analyzed both at the individual level and across entire populations.
Our partners gather the samples, we generate the sequence data from their diverse T-cell receptors, and the raw data gets uploaded into cloud computing databases. Working with Microsoft, we applied their advanced AI, cloud computing and machine learning technologies to comb through this massive data set, effectively creating a “map” of these immune cells to most diseases.
Our labs were running 24/7 at reduced capacity, taking every precaution to safeguard our employees and samples. The pressure was intense. We needed to keep adding to that mountain of data continuously, while also figuring out faster and smarter ways to deliver meaningful insights to the public health and biopharma communities.
With 5,500 blood samples from Asia, Europe and the U.S. — and only a small team at our central lab in Seattle — we knew we needed more help. It took nearly 8 months to receive all the blood samples. As things started to open up and patients were returning to the clinic for testing, we had to work even harder to process the samples.
We continued to expand our industry collaborations, initially with Labcorp, the biggest diagnostic company in the U.S., and Illumina, the market leader in DNA sequencing.
Fast forward a year to the recent FDA Emergency Use Authorization for T-Detect™ COVID, the first-ever T-cell test for individuals that can detect whether a person has had a recent or past COVID-19 infection. Based on our best understanding of the immunology, we believe this test can tell whether one has been exposed to SARS-CoV-2 months after an original diagnosis.
We can do it with high accuracy, outperforming leading antibody tests in real world studies.
The entire process, including order authorization by a virtual provider, can be completed online except for the blood draw. Samples get collected at a Labcorp facility or by a mobile phlebotomist, shipped to our labs in Seattle, and we provide a report back within 7-10 days. It costs under $300 with a blood draw and virtual provider fee. It isn’t covered by health insurance, but it is covered by healthcare and flexible spending accounts.
At this stage in the pandemic — despite widespread antibody testing and several vaccines — there are still a lot of questions about COVID-19. T cells may hold some of the answers.
For example, they may offer the ability to detect past infections for a much longer period than traditional tests. They may allow us to diagnose Long COVID (patients who have not fully recovered weeks or even months after first experiencing symptoms). It’s theoretically possible for us to draw a quantifiable connection between lingering immune perturbations, potential signs of autoimmunity, and symptoms of Long COVID. That’s something else investigators are looking into.
Patients who use T-Detect COVID have the option to opt-in to ongoing research. This will accelerate our understanding of variants, who is at more risk of getting the virus, who is protected, and the effectiveness of vaccines.
In the future, we may be able to go beyond a yes/no answer. Physicians and patients may be able to tell whether an immune response arose from natural infection or from a vaccine. This is especially important as more of the population gets vaccinated and as new variants appear that may be increasingly capable of escaping natural immunity, or vaccine-induced immunity.
One of the unique benefits of Adaptive’s T-cell testing approach is that it can be done at scale, something that was impossible just a few years ago. Historically, to test T cells, you would need a blood sample with live T cells, which have a shelf life of four to six hours. It would be impractical, if not impossible, to test hundreds of thousands of samples in that time frame.
We look at DNA samples taken from hundreds of thousands of T cells in a blood sample to find the unique signature of COVID. The blood samples can be stored at room temperature for up to 5 days or be frozen. The stability of DNA allows us to extract the information needed days or weeks after the sample is collected. This opens the door for the kind of massive T cell data sets that are needed to provide clear answers that go beyond just a few interesting anecdotes.
One of the exciting things about the T-Detect COVID test is that it provides proof of concept that T-cell testing is effective in diagnosing disease. We can build on this and apply it to many other disease areas, beyond SARS-CoV-2 and infectious diseases.
Imagine if you had a T-cell test that could tell the difference between diseases with similar symptoms like Lyme disease and multiple sclerosis? Or a T-cell test that could detect many different diseases across multiple therapeutic areas with a single vial of blood, eliminating the diagnostic odyssey that many patients face as they spend years trying to get the correct diagnosis for their symptoms?
This represents a paradigm shift in how we diagnose, and ultimately treat, many different illnesses based on how the immune system naturally does this. We call this Immune Medicine.
That is the promise T-cell-based testing holds. We, along with our partners, are up to the challenge. We can say that with even greater confidence today than a year ago because we’ve been compelled to push the limits.
Many more exquisitely informative diagnostics are going to come from this ultimate stress test. It’s something we can look forward to on the other side.
Today’s guest on The Long Run is Brad Gray.
Brad is the CEO of Seattle-based NanoString Technologies.
NanoString started out in the early 2000s by making digital “bar codes” that allowed it to do multiplex gene expression – the analysis of multiple genes at once, and the extent to which they were turned on or off in a given sample.
The technology caught on with a few prominent early adopters in academia. But the company struggled in the early years of selling instruments and consumables to academic labs. Brad was hired in 2010 to be the CEO who could take it to another level in commercialization, and lead an expansion that would take the company into the cancer diagnostics business.
There were some ups and downs. For a while, NanoString got by with cash from pharmaceutical partners who wanted to evaluate tumors in patients getting immunotherapies. This was a time when many in biopharma were trying to understand what made some tumors ‘hot’ and others ‘cold’, and there was growing interest in why some patients respond and others don’t.
NanoString in 2019 decided to divest its cancer diagnostics work and concentrate on a new technology platform – GeoMx. This is the latest tool from NanoString, and it’s caught on with scientists much faster than the first. These are the early days of spatial biology – in which NanoString and others are seeking to shed light on what’s going on in cells at fine-grained resolution, while preserving the tissue context that can otherwise be lost.
The company’s new ambition — “map the universe of biology.”
As a young person trying to break into biotech, Brad got some international experience, worked as a consultant, and then got a chance to learn the diagnostics business under Henri Termeer at Genzyme. Brad is one of the many executives who credits Henri with inspiring him and creating an opportunity for him to thrive.
I think you’ll enjoy this conversation with a business leader of a company with an enabling technology.
Please join me and Brad Gray on The Long Run.
Wearable devices have an ability to capture lots of data, in real-time and over long periods of time, that may reflect aspects of an individual person’s health.
But (and this is a common theme in the application of data science to healthcare), gathering volumes of data is one thing – deriving meaning from these data in a way that significantly improves a person’s health is another.
A recent Nature Medicine paper highlights the delicate balance digital health researchers must maintain as they demonstrate the potential of emerging wearable device technology while taking care not to get ahead of the current state of the science, in terms of what the devices actually can tell us.
The research began in Mike Snyder’s lab at Stanford University, and was co-led by Jessilyn Dunn (a rising star in biomedical engineering now on faculty at Duke University) and Lukasz Kidzinski (now an AI researcher at Stanford and director of AI at Princeton, NJ-based Bioclinica).
For a decade, Snyder has led the charge on wearables. He has famously used himself as a guinea pig. So exhaustively has he monitored his own parameters, including genomics, proteomics, and every other -omic, that Baylor College of Medicine researcher Richard Gibbs, tongue-in-cheek, proposed a new term, the “narciss-ome”, to describe this comprehensive assessment.
As Snyder’s Stanford colleague Euan Ashley writes in Genome Odyssey (my recent WSJ review here),
“Mike Snyder was on a mission to measure everything about himself, all the time, using every technology possible. And I mean everything. Starting in 2010, shortly after he started at Stanford, Mike would show up to meetings sporting multiple different wearable devices. You would meet him, and there might be one smartwatch on one wrist and a different one on the other. Sometimes, he would wear an armband device the size of a pack of cards that detected airborne toxins in his environment. At one meeting, he showed up with a front-facing camera that took time-lapse pictures of everyone in the room. It freaked everyone out, so he stopped that soon after. Lloyd Minor, the dean of Stanford’s School of Medicine, refers to him as ‘the most studied organism in history.’”
Whether these exhaustive measurement efforts are truly useful has been less than clear; in some ways, like the dancing bear, they seem most remarkable not for quality of the clinical insight generated, but rather because they were conducted at all. Phrased differently, it’s not clear that the burden of such comprehensive data collection is (yet) justified, as I’ve recently discussed (here).
Nevertheless, the promise of rich data collection, particularly using wearables, remains as compelling as it was when Denny Ausiello and I articulated the ambition of digital health nearly a decade ago: we live our lives continuously, yet our medical needs tend to be evaluated episodically, and (hopefully) infrequently.
Surely, there must be meaningful insight to be obtained from relatively dense, continuous, longitudinal measurements that can’t be gleaned from the occasional clinic visit.
The challenge has been surfacing this hidden insight, and capturing the implicit value.
Which is where the latest paper comes in. Utilizing data from 54 participants in the Stanford iPOP (integrative personal omics profiling) study, researchers examined data extracted from the smart watches the participants wore. The scientists first compared these values to two vital signs (temperature and resting heart rate) obtained in clinic visits using a validated instrument, and then utilized machine learning to see whether they could use either the wearable data or the clinical data to predict the values of routine clinical laboratory tests.
The study utilized an Intel Basis watch, subsequently withdrawn from the market for safety concerns (the device could overheat, causing burns or blisters). The paper was originally submitted for publication in September 2018, but not published until May 2021, perhaps explaining why the Basis was used in this just-reported study.
The Basis could detect four parameters: heart rate, skin temperature, electrodermal activity, and steps.
First, the researchers wanted to get a sense of how the measurement of resting heart rate obtained on the smart watch, using photoplethysmography (PPG), compared to clinic observations. Many devices use PPG to measure heart rate, including the Apple Watch, the Whoop strap, the Oura ring, and the Fitbit tracker, among others. The approach shines light into the skin and measures its absorbance, which is proportional to blood volume variation (each pulse transiently increases the volume).
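The principle can be sketched in a few lines: find the pulse peaks in the absorbance trace, then convert the average inter-beat interval into beats per minute. This is a toy illustration, not any vendor's actual algorithm; the synthetic signal and the 0.4-second minimum peak spacing are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_heart_rate(ppg, fs):
    """Estimate heart rate (bpm) from a PPG trace by counting pulse peaks.

    ppg: 1-D array of light-absorbance samples; fs: sampling rate in Hz.
    Real devices must also reject motion artifacts before peak detection.
    """
    # Each cardiac pulse transiently raises blood volume, and hence
    # absorbance, so beats appear as local maxima in the signal.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # >= 0.4 s apart (~150 bpm cap)
    if len(peaks) < 2:
        return None
    # Convert the mean inter-beat interval (in samples) to beats per minute.
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s

# Synthetic 60-bpm pulse train sampled at 50 Hz for 30 seconds.
fs = 50
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.0 * t) ** 21  # sharp peak once per second
print(estimate_heart_rate(ppg, fs))  # ~60 bpm
```

Motion artifacts, the limitation flagged in the review quoted below, would show up here as spurious peaks, which is why production algorithms do far more than this.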
According to a 2018 review article in the International Journal of Biosensors and Bioelectronics:
“The popularity of the PPG technology as an alternative heart rate monitoring technique has recently increased, mainly due to the simplicity of its operation, the wearing comfortability for its users, and its cost effectiveness. However, one of the major difficulties in using PPG-based monitoring techniques is their inaccuracy in tracking the PPG signals during daily routine activities and light physical exercises. This limitation is due to the fact that the PPG signals are very susceptible to Motion Artifacts (MA) caused by hand movements.”
These concerns were further examined in a recent NPJ Digital Medicine paper from Dunn’s current lab at Duke, examining potential sources of PPG wearable variability, compared to an ECG gold standard. While skin tone turned out not to represent a significant source of variability, motion was; moreover, the wearables exhibited different degrees of accuracy, with the Apple Watch generally outperforming competitors.
Dunn’s data accord with my own experience using consumer wearables during exercise; I’ve found the Apple Watch works better than other wearables I’ve tested, but not nearly as well as measurement techniques detecting electrical activity directly, like the Polar chest strap I’ve now adopted. Consumer-facing ECG measurements, like Kardia, and like the Apple Watch measurement obtained when holding the crown for 30 seconds, also utilize electrical detection.
Notably, in Dunn’s recent paper, consumer wearables significantly outperformed several “research-grade” wearables that were also evaluated. Research wearables give investigators access to the underlying waveforms, while consumer wearables function like black boxes from a research perspective, dramatically limiting their use in clinical research and making it prohibitively difficult to use more than one wearable in a given clinical trial — a critical interoperability obstacle that Jordan Brayanov, Jen Goldsack, and Bill Byrom elegantly discussed last year in STAT.
As the three authors explained,
“You’d think that monitoring heart rate remotely would be easy. But wearables from technology giants like Apple and Samsung measure it in different and proprietary ways. One device may record the number of beats over 10 seconds and multiply by six; another may communicate an ‘instant’ heart rate reported after every single heartbeat. This means the two platforms’ data aren’t consistent and so can’t easily be used simultaneously in clinical trials.”
Back to the original paper: Snyder’s team found that when they considered two weeks’ worth of resting heart rate measurements taken at the same time of day as the clinic visits, the values were similar, but the variability was significantly lower in the wearable measurements than in the clinical measurements.
In other words, you get more consistency measuring resting heart rate over weeks on a wearable than assessing it once in a while in the clinic.
Score one for the wearable!
However, temperature measurement was a different story; here, as the researchers report, “clinically measured oral temperature was a more consistent and stable physiological temperature metric than wearable-measured skin temperature….”
Translation: compared to clinical measurement, assessment of temperature on wearables was somewhat scattered.
With these foundational parameters of performance established, here’s where the paper gets interesting. The researchers examined the four basic categories of output from the watch – measurements of heart rate, temperature, electrodermal activity, and steps – and began the alchemy of data science known as “feature engineering.”
Feature engineering involves selecting attributes from the raw data to use as inputs for a machine learning model. A feature could be a statistical property of the data – average heart rate, say, or a property of the distribution of the heart rate – or it could be the implied activity state of the individual, based on the number of steps.
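As a sketch of the kind of transformation this involves: the function below turns two raw wearable streams into a handful of candidate features. The specific features (mean, spread, a resting-state subset, an activity fraction) are illustrative guesses in the spirit of the paper's description, not the study's actual feature set.

```python
import numpy as np

def engineer_features(heart_rate, steps):
    """Derive a few candidate features from raw per-minute wearable streams.

    Hypothetical examples of the summary statistics the paper calls
    "features"; the study's actual 153 features are not reproduced here.
    """
    resting = heart_rate[steps == 0]  # samples while the wearer is inactive
    return {
        "hr_mean": float(np.mean(heart_rate)),          # central tendency
        "hr_std": float(np.std(heart_rate)),            # spread of the distribution
        "hr_p95": float(np.percentile(heart_rate, 95)), # upper tail
        "resting_hr_mean": float(np.mean(resting)) if resting.size else None,
        "active_fraction": float(np.mean(steps > 0)),   # implied activity state
    }

# One synthetic day of per-minute heart rate and step counts.
rng = np.random.default_rng(0)
hr = rng.normal(70, 8, size=1440)
steps = rng.poisson(5, size=1440) * (rng.random(1440) < 0.3)
features = engineer_features(hr, steps)
```

Each participant-visit then collapses to one fixed-length feature vector, which is what makes the downstream modeling tractable.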
According to the authors, there were 5,736 possible features they could have considered, from which they selected 153 that seemed the most likely to be altered in a fashion that could conceivably be reflected in a clinical laboratory test.
These 153 features were then fed into several different types of models intended to predict the value of one of 44 different clinical labs that were also obtained from the study participants. The initial work suggested one modeling approach, called random forest, generated predictions that explained up to a fifth of the variability seen in measures of hematocrit, red blood cell count, hemoglobin levels, and platelet count.
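To make the modeling step concrete, here is a minimal random-forest sketch on synthetic data. The feature count (153) matches the paper, but the data, the target's dependence on just two features, and all model settings are assumptions; the study's actual pipeline surely differed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in: 153 wearable-derived features per participant-visit,
# with a lab value (say, hematocrit) weakly related to a couple of them.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 153))
hematocrit = 42 + 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=3.0, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, hematocrit, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# R^2 on held-out data: the fraction of lab-value variability the wearable
# features explain (the paper reports up to ~0.2 for some labs).
r2 = r2_score(y_test, model.predict(X_test))
```

An R^2 of 0.2 means four-fifths of the variability remains unexplained, which is why the paper's claim is about rough correlation, not replacement of the lab test.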
To be clear, the contention isn’t that wearable data predicted these clinical labs with exceptional accuracy, but rather that wearable-derived data seemed to very roughly correlate with these clinical labs, and others.
When the researchers examined which features were driving the predictions, it turned out that various permutations of electrodermal activity played a critical role in predicting hematocrit, red blood cell count, and hemoglobin levels, while features driving platelet count predictions were all based on heart rate.
The authors then conducted what felt like a bit of a pedantic demonstration exercise, comparing predictions derived from the 153 wearable features with those derived from the two clinic-visit measurements (resting heart rate and temperature), and found that, generally, more is better: they typically got better predictions when they had more data to consider, even if the additional data were only consumer-grade rather than clinical-grade.
To read some of the coverage describing this paper, you’d think we could forget about the need for future blood draws, and just rely on data extracted from smart watches. “Your smartwatch can predict blood study results,” one headline declared. Another: “More than just a step-tracker, smartwatches can predict blood test results and infections, study finds.”
Some of this hyperbole likely stems from the actual title of the paper itself: “Wearable sensors enable personalized predictions of clinical laboratory measurements,” which seems, in the context of the reported data, a bit aspirational.
Co-author Dunn may have expressed the contribution of the paper best in a dialog on LinkedIn, commenting:
“I want to emphasize that this is more about directionality than about exact predictions of clinical labs. The current status of this work is certainly not to the point of replacing clinical labs with wearables, but rather it may indicate which labs are more likely to have changes, which can then be directly and specifically measured (think of it as a pre-screening tool for labs when you have limited time and resources).
This, in my opinion, falls under basic research. We need to establish the principle that these relationships exist before we can iterate over them to improve predictions toward more clinical utility. Agreed that there is much more to do, and I hope in this paper we succeed in making the case that this is a path worth following.”
In a larger sense, this assessment captures the current status of many digital and data technologies that are being brought to bear in biopharma and healthcare these days: neither ready for prime time nor quite living up to the hype, but nevertheless making real progress, which resolute cynics can choose to ignore only at their peril.
Sometimes the most relevant digital health companies have the least elaborate technology.
More than a decade ago, I discussed UpToDate, a fairly basic medicine e-textbook, served through a web-based app. It was then, and still remains, a go-to site for timely, high-quality medical information relevant to clinicians.
There’s nothing fancy about it, it just works.
Recently, I discovered a similar resource, PrecisionNutrition, that’s focused on coaching coaches to help their clients adopt consistent healthy behaviors. What’s interesting is how PN fosters a holistic, multidimensional view of health. The company stands out against a field of competitors that tend to have a myopic emphasis on a particular aspect of physical health like weight loss or physical training.
Their stated goal is to teach the coaches they train to “transform” clients’ health, and help clients “thrive,” in the broadest sense.
Physical health, PN tells fitness coaches, is just a small percentage of “what determines your clients’ success.”
Instead, PN emphasizes a multi-dimensional view of well-being, arguing that its components are all interrelated, and that to improve any one of them, coaches need to consider the others.
Not only will coaches attuned to the many components of well-being find themselves able to support their clients’ immediate goals (such as losing weight or gaining muscle), PN astutely argues, but an initial focus on fitness, say, may provide an opportunity, an on-ramp, to improve their clients’ well-being more generally, across multiple dimensions.
To me, this second part – fitness as an on-ramp to a broader opportunity to enhance well-being and to help people flourish more fully – is the profound health opportunity lurking in the consumer space.
It’s the opportunity that so many digital fitness companies, in their singular focus on driving athletic performance, may be missing.
At best, most of us probably think of “well-being” in an informal or casual sense; many probably share the perspective of one initial skeptic, who told Yale psychologist Laurie Santos he originally assumed, before taking her course on the subject, that it was “hippy California well-being crap.” (I can totally relate.)
Well-being, it turns out, has become a subject of serious scientific study, and represents a prominent area of focus for academic researchers as well as health organizations including the CDC and the WHO. There are treatises on the measurement of well-being, and academic centers focused on the cultivation of “human flourishing,” such as the Human Flourishing Program at Harvard, the Center for the Study of Human Flourishing at King’s College in New York, and the Stanford Flourishing Project.
Flourishing itself turns out to be a key concept within a burgeoning academic area called “positive psychology,” which focuses on what makes us happy or fulfilled. The term is said to have been coined by Abraham Maslow (of “hierarchy of needs” fame), and popularized by Martin Seligman of the University of Pennsylvania (previously best known for studies of depression, and the concept of “learned helplessness”).
Some of today’s most popular academic psychologists are associated with this school of thought, including Daniel Gilbert at Harvard (though he apparently rejects the label), and Santos at Yale (who hosts the wonderful and highly recommended “Happiness Lab” podcast).
The key premise around positive psychology, and flourishing, is the idea that thriving isn’t just about reducing misery, as critical as this obviously is. Rather, the goal of positive psychology, as Seligman puts it, is to “supplement its venerable goal” of misery mitigation with a new aim: “exploring what makes life worth living and building the enabling conditions of a life worth living.”
The question of what it means to be well, and to live a good life, is both ancient and elemental. Researchers tend to consider two formulations: a hedonic approach, focused on the idea that we’re driven primarily to make life pleasant and pleasurable, and a eudaimonic approach, which recognizes a greater range of motives, such as self-actualizing and pursuit of meaning.
From these approaches, frameworks have emerged that emphasize different aspects.
While well-being – definitionally – represents a valued end in itself, there are also data linking attributes often associated with well-being to traditional endpoints such as mortality.
A 2019 review by Trudel-Fitzgerald and colleagues at the Harvard T. H. Chan School of Public Health found that several dimensions of well-being “are associated with a reduced risk of premature all-cause mortality among the general population, with small to medium effects,” associations that, critically, held up even after adjusting for confounders.
The attributes with the strongest evidence, according to this review?
“Purpose in life, optimism, and ikigai (a Japanese term that translates into happiness, worth, and benefit of being alive, and that apparently captures both eudaimonic features like purpose and hedonic features like pleasure).”
According to the authors, there’s also somewhat less robust evidence linking other dimensions such as life satisfaction, positive affect, mastery, and sense of coherence to reduction in mortality. Studies examining the relationship between happiness, personal growth, and autonomy with mortality reductions “suggested no effect or were too limited to draw firm conclusions.”
In medicine, we’re all too familiar with the vicious cycles leading to the rapid downward spiral of health: an injury might impede exercise, leading to weight gain and making exercise even more difficult. The fragility of life, so easily taken for granted, can be tragically revealed by even the slightest setback, and the cascade of challenges it can produce.
How encouraging, then, to encounter a publication (thoughtfully discussed by Gretchen Reynolds in the New York Times, here) describing a rare virtuous cycle. In a longitudinal study of aging American adults, people who reported a greater sense of purpose were subsequently more likely to exercise, and people who exercised more were subsequently more likely to report a greater sense of purpose.
The study utilized an approach called a “cross-lagged panel model” in an effort to reduce confounding. The bidirectional effects observed were small but statistically robust, according to the authors.
The very appeal of the conclusion should remind us to keep our enthusiasm in check, and as always, seek replication and additional data.
But this study, and others like it, speak to a broader point: we need to look at physical activity from a perspective less reductive than either athletic performance – the focus of digital fitness companies (“improve your position on the leaderboard!”) – or mortality reduction – the focus of many clinicians (“exercise more if you don’t want to die sooner”).
Exercise offers a range of benefits, in many domains, perhaps including an enhancement of our sense of purpose. Similarly, attention to these other domains, as PN advises coaches, can help motivate greater physical activity.
Both physicians and connected fitness companies need to reframe how they conceptualize exercise; the opportunity is to see it not only as a path to improved cardiovascular health and performance, but as a tangible starting point for greater well-being.
Entrepreneurs leading the digital exercise movement may be missing an opportunity if they don’t profoundly enlarge their ambitions, and take, as their job-to-be-done, not the need to get people fit, but rather the opportunity to help people flourish.
The biopharma industry was pressured to get faster and more innovative to respond to the urgent global health challenge of the past year.
Asian-American executives have played a key role in leading this charge, although they often aren’t in front of the cameras and microphones.
As part of Asian American and Pacific Islander Heritage Month, we wanted to stop and think about who some of the AAPI leaders are in our sector, and honor their contributions. They inspire us.
While there is overall strong representation of Asian-Americans in the biopharma sector, there remains significant underrepresentation of Asian-Americans in positions of executive leadership. Asian-Americans make up 22 percent of employees, but only 3 percent of CEOs, according to BIO’s first report on diversity.
How did we arrive at our list of high-impact executive leaders? Our selection criteria were as follows: commitment to solving critical health problems, a track record of leading high-integrity and scientifically innovative organizations, and dedication to patients and the next generation of innovators.
As biopharma companies continue to improve in hiring leaders and employees representative of the patient populations they serve, we look forward to seeing this list grow in breadth and depth.
Here are our top 11:
David Chang, M.D., Ph.D., is the co-founder and CEO of South San Francisco-based Allogene Therapeutics, a biotechnology company developing allogeneic CAR-T therapies for blood cancers and solid tumors. David also serves as a venture partner at Vida Ventures.
David is perhaps best known for his work at Kite Pharma, where, as head of R&D and chief medical officer, he oversaw the team that developed one of the first two chimeric antigen receptor-modified T-cell (CAR-T) therapies for cancer. That personalized cancer immunotherapy, Yescarta, was a historic milestone for the field, developed around the same time as Novartis’ Kymriah.
Prior to Kite, David held senior leadership roles at Amgen, where he played key roles in development of Vectibix, Blincyto and Imlygic. Before entering the biopharma industry, David was Associate Professor of Microbiology, Immunology and Molecular Genetics at the David Geffen School of Medicine at UCLA.
Pearl S. Huang, Ph.D. joined Cambridge, Mass.-based Cygnal Therapeutics in January 2019 as President and CEO. She is also a Venture Partner at Flagship Pioneering and a member of the boards of Cygnal, KSQ Therapeutics, and Waters Corporation.
Prior to Cygnal, Pearl served as Senior Vice President and Global Head of Therapeutic Modalities at Roche where she oversaw the discovery of biologics, small molecule, and nucleic acid-based therapies.
Prior to Roche, Pearl was the co-founder and Chief Scientific Officer of BeiGene, and before that, she was the Vice President, Oncology Integrator of Discovery and Early Development at Merck where she led 14 early development teams and conducted seven first-in-human trials in one year. Before joining Merck, Pearl led oncology discovery at GSK where she initiated the programs that delivered trametinib and dabrafenib to patients.
Pearl received her undergraduate degree in life sciences from MIT and a Ph.D. in molecular biology from Princeton University. (Listen to Pearl describe her career arc, and her recent work on cancer drugs based on new learnings about the peripheral nervous system, on The Long Run podcast).
Angela Hwang is a member of Pfizer’s Executive Team and Group President of the Pfizer Biopharmaceuticals Group, which comprises the entire commercial organization of Pfizer. Her organization of 26,000 colleagues across 125 countries is responsible for bringing over 600 innovative medicines and products to patients.
Angela is a member of the Board of Directors of UPS, the global leader in supply chain logistics and package delivery, EFPIA (European Federation of Pharmaceutical Industries and Associations), as well as the Pfizer Foundation, a charitable organization that addresses global health challenges. She has also been active in industry groups such as BIO, where she previously co-chaired the Vaccines Policy Committee.
In 2019, Angela launched Diverse Perspectives, a podcast series where she hosts global thought leaders pioneering change across a variety of industries.
Angela received her Bachelor of Science in Microbiology and Biochemistry from the University of Cape Town and M.B.A. from Cornell University. She is a wife and proud mom to a teenage son and daughter, and a strong advocate for women’s leadership and sustainable global health equity.
Sekar Kathiresan, M.D. is a co-founder and the Chief Executive Officer of Cambridge, Mass.-based Verve Therapeutics, a biotechnology company developing gene-edited medicines for cardiovascular diseases.
Sek has dedicated his career to researching the genetic mechanisms underlying cardiovascular disease and using these insights to improve preventive cardiac care. Prior to co-founding Verve, Sek served as director of the Massachusetts General Hospital (MGH) Center for Genomic Medicine and was the Ofer and Shelly Nemirovsky MGH Research Scholar. He also served as director of the Cardiovascular Disease Initiative at the Broad Institute and was professor of medicine at Harvard Medical School.
Among his scientific contributions, Sek helped highlight new biological mechanisms underlying heart attack, discovered mutations that protect against heart attack risk, and developed a genetic test for personalized heart attack prevention.
Sek led the team at Verve that demonstrated CRISPR base editing could be used to edit PCSK9, and substantially and durably bring down LDL cholesterol in non-human primates with a single infusion. That pioneering work was published last week in Nature.
For more on Sek’s life story, and the concept behind CRISPR base editing for cardiovascular disease, listen to this episode of The Long Run podcast.
Reshma Kewalramani, M.D., is the CEO at Vertex and the first female CEO of a large US biotechnology company.
Reshma first joined Vertex in 2017 as the Chief Medical Officer. During her time as CMO, Vertex won approval of SYMDEKO/SYMKEVI and TRIKAFTA for cystic fibrosis and also advanced multiple clinical programs outside of CF, including alpha-1 antitrypsin deficiency, APOL1-mediated kidney diseases, sickle cell disease and beta-thalassemia.
Prior to Vertex, Reshma spent more than 12 years at Amgen and held various positions of leadership including as Vice President, Global Clinical Development, Nephrology & Metabolic Therapeutic Area and Vice President, U.S. Medical Organization.
Reshma is passionate about developing and supporting the next generation of scientists and giving back to her community. She is a member of the board of directors of the Biomedical Science Careers Program, and RIZE Massachusetts.
Reshma completed her medical training in Internal Medicine and Nephrology at the Massachusetts General Hospital and is a graduate of the Boston University School of Medicine and Harvard Business School.
Samarth Kulkarni, Ph.D., has been CEO of CRISPR Therapeutics, based in Switzerland and Cambridge, Mass., since 2017.
Sam oversaw the strategic collaboration of CRISPR with Vertex Pharmaceuticals to develop gene-edited therapies for hemoglobinopathies. This collaboration recently culminated in data released at the American Society of Hematology 2020 meeting, where 10 patients with sickle cell or transfusion-dependent beta thalassemia were effectively cured after a one-time treatment with CTX001. See the paper describing that achievement in the New England Journal of Medicine.
Sam also oversaw strategic collaborations with ViaCyte to develop regenerative medicines for diabetes, with Nkarta Therapeutics to develop gene-edited cell therapies for cancer and the formation of Casebia Therapeutics, a joint subsidiary formed by CRISPR and Bayer.
Prior to CRISPR, Sam spent a decade at McKinsey & Company, where as a Partner, he co-led the biotech practice, focusing on strategy and operations and leading initiatives in areas such as personalized medicine and immunotherapy.
Sam received a Ph.D. in Bioengineering and Nanotechnology from the University of Washington and a B.Tech. from the Indian Institute of Technology.
Neil Kumar, Ph.D. is a co-founder and CEO of BridgeBio, a biotechnology company focused on developing medicines for rare genetic diseases.
At BridgeBio, Neil built a hub-and-spoke company structure, with BridgeBio as the umbrella company whose $8.8 billion market valuation is driven by a network of subsidiaries that includes Eidos Therapeutics, QED Therapeutics and Navire Pharmaceuticals.
Prior to BridgeBio, Neil worked as interim vice president of business development at MyoKardia, and prior to that, he was a principal at Third Rock Ventures. Before that, Neil worked as an associate principal at McKinsey & Company.
Neil is passionate about building an organization that is patient-first. The cornerstone of the BridgeBio mission is commitment to patients and their families.
Neil received his B.S. and M.S. degrees in chemical engineering from Stanford University and received his Ph.D. in chemical engineering from MIT.
Dean Li, M.D., Ph.D., took over one of the biggest biopharma industry R&D jobs on Jan. 1, 2021 when he became Executive Vice President and President of Merck Research Laboratories.
From this position, Dean leads the company’s worldwide human vaccines and therapeutics R&D organization. In just a few months, Dean has already overseen several transformative deals at Merck including the acquisition of Pandion Therapeutics and collaboration agreements with Artiva Biotherapeutics and Amathus Therapeutics. (See TR insider coverage of the Pandion/Merck deal by Vikas Goyal).
Prior to joining Merck in 2017, Dean was in academic medicine at the University of Utah. He was a professor of cardiology, and chief scientific officer and vice dean at the University of Utah Health System.
While in academia, Dean co-founded several biotechnology companies based on research conducted in his lab, including Recursion Pharmaceuticals, Hydra Biosciences and Navigen Pharmaceuticals. (See TR coverage of Recursion’s AI drug discovery partnership with Bayer, Sept. 2020).
Dean mentored the next generation of physician-scientists through his roles as the principal investigator of the University’s Cardiovascular Research Training Program and as Director for the School of Medicine’s M.D./Ph.D. programs.
Vas Narasimhan, M.D., has been CEO of Novartis since 2018. Since joining Novartis in 2005, Dr. Narasimhan has served as Global Head of Development for Novartis Vaccines, Global Head of Drug Development and Chief Medical Officer.
He has overseen the licensing of over 30 novel medicines, including cell and gene therapies and vaccines. He is a champion of access and global health priorities, including through a commitment by Novartis to expand access to innovative medicines in low- and middle-income countries by at least 200% by 2025.
Vas is dedicated to unleashing the creativity of Novartis’ employees and making culture a driver of innovation, reputation and performance. Drawing inspiration from ancient philosophical texts and from leading thinkers today, he is committed to developing “unbossed” leaders who inspire and empower their teams to solve problems.
Vas received his bachelor’s degree in biological sciences from the University of Chicago, his M.D. from Harvard Medical School, and his M.P.P. from Harvard’s Kennedy School of Government.
Vicki Sato, Ph.D., serves as chairman of the board of directors at Vir Biotechnology and Denali Therapeutics, and as Venture Partner at Arch Venture Partners.
Vicki has a long history of distinguished work in research, teaching, and leadership. She served as Professor of Management Practice at Harvard Business School and as Professor of the Practice, Molecular and Cell Biology, on the Harvard Faculty of Arts and Sciences.
Before joining the Harvard Business School faculty, she worked as President of Vertex Pharmaceuticals from 2000-2005, and prior to that she served as Vertex’s Chief Scientific Officer, Senior Vice President of Research and Development, and chair of its Scientific Advisory Board. Before joining Vertex, Vicki was vice president of research at Biogen, where she led research programs in the areas of inflammation, thrombosis, and HIV disease, and where she participated in the executive management of the company.
Vicki received her A.B. from Radcliffe College, her A.M. and Ph.D. degrees from Harvard University and pursued post-doctoral work at the University of California Berkeley and Stanford Medical Center. (Listen to Vicki describe her career path on The Long Run podcast, July 2018). [Editor’s Note: Vicki also serves as an advisor to Timmerman Report.]
Tachi Yamada, M.D., is currently a Venture Partner at Frazier Healthcare Partners. Before this, he was Chief Medical and Scientific Officer at Takeda Pharmaceuticals.
Prior to Takeda, Tachi was President of the Bill & Melinda Gates Foundation Global Health Program, where he oversaw over $9 billion in grants for applying technologies to address major health challenges of the developing world. Before joining the Gates Foundation, Tachi was Chairman of Research and Development at GlaxoSmithKline and member of the board of directors.
Earlier in his career, Tachi was the Chief of the Division of Gastroenterology and the Chair of the Department of Internal Medicine at the University of Michigan.
Tachi received his M.D. from New York University School of Medicine and a B.A. in History from Stanford University.
Jingyi Liu is a physician and an MBA candidate at the Wharton School of Business, where she is a William and Patricia Jewett Fellow. She is passionate about diversity and creating healthier and more equitable futures for patients around the world. Jingyi trained in internal medicine at Stanford Healthcare and received her MD from Harvard Medical School.
Eric Dai is a PhD Candidate in Bioengineering at the University of Pennsylvania. Outside of his research, Eric has served as an investor at Alix Ventures and helped lead Alix’s community and content platform, BIOS Community. Eric is passionate about the intersection of biotech, business, sustainability and social impact.
A frequent – and frequently correct – critique of entrepreneurs bearing technology is “your solution is not my problem.”
Healthcare – among many other domains, perhaps all domains – has been beset by “solutionism,” the idea that my clever technology will solve your hideously complex problem.
But perhaps it makes no more sense to instinctively reject this mindset than it does to reflexively embrace it. After all, technological advances have enabled profound scientific advances as well as contributed to a range of benefits and comforts we take for granted.
But in each instance, someone had to figure out how to apply an emerging new technology to a relevant task.
A prismatic example may be found in the invention of the Post-It note. It was developed in the 1970s by a 3M research chemist named Spencer Silver and a 3M chemical engineer, Art Fry. Silver passed away this week; the fascinating story behind the Post-It was eloquently captured in his New York Times obituary by Richard Sandomir.
In the early 1970s, Silver had been trying to make a super strong adhesive that could be used in aircraft construction. Instead, he developed something that was comparatively weak, but which stuck to surfaces, peeled easily, and was reusable. The glue was patented in 1972, but despite Silver’s efforts to highlight its potential within 3M, it didn’t get much… traction. But he reached many colleagues, including Fry, who was trying to develop new products.
One day, at choir practice, Fry found himself wishing he had had a way to mark the songs in his hymnal – the slips of paper he had been using kept falling out. But if only there was some way to make the slips stick to the pages…
The Post-It note was born.
The Post-It Note (original name: Press ‘n Peel) was introduced in test markets in 1977, and nationally in 1980. According to the Times, “There are currently more than 3,000 Post-it Brand products globally.”
When you think about how this invention happened, it’s clear that the technology – the novel adhesive – was developed first, and its properties and capabilities studied and understood. Then, the originator of the technology – Silver – went looking for something useful to do with it. It was unarguably a solution in search of a problem. And Fry eventually provided the problem.
Silver turns out to be in good company. Many accomplished entrepreneurs begin with solutions, and cast about for the right problem.
As serial entrepreneur Max Levchin, co-founder of the billion-dollar startups PayPal and Affirm, explains in Ali Tamaseb’s Super Founders (discussed here),
“Most companies that I’ve started have been these really half-baked ideas that were initially about technology. I don’t often look at the world as ‘There’s a big problem; what can I bring to bear to solve it?’ Instead, I sort of say, ‘I can do this cool thing. That’s a nice hammer. What’s a nail?’ Sometimes you have hammers looking for nails and there’s no value to be built. But a lot of times, you can actually look at something new, and say, ‘Oh, cool. Artificial intelligence – it will help us to X. Or virtual reality – this will be useful for Y.’”
In some sense, you can also view the “pivots,” so central to startup evolution, as an expression of technology’s search for a problem.
“That’s a nice hammer. What’s a nail?” – Max Levchin, co-founder, PayPal and Affirm
Tamaseb cited a number of familiar examples of technology applications that took a while to find their groove.
From the perspective of healthcare, the point is that figuring out how to use raw new technology in a notoriously complex domain like healthcare is inordinately difficult, but also vitally important.
Healthcare incumbents may have been right to reject many of the arrogant and ignorant technologists who showed up 10 years ago, certain their app would “solve” healthcare. Incumbents may also be right to look critically on the digital transformation imperatives consultancies are urging organizations to adopt post-haste.
But there needs to be a thoughtful middle ground between the false comforts of tech worship, on the one hand, and tech cynicism, on the other.
Like Art Fry, we need to thoughtfully engage with technology developers like Spencer Silver, ideally in an environment that cultivates such engagement and exploration, like 3M in the 1970s.
As Safi Bahcall writes in Loonshots (my Wall Street Journal review here), when 3M brought in a new CEO whose single-minded focus on efficiency squeezed out such chance encounters, innovation plummeted. It didn’t recover until another CEO restored the old system.
This lesson is especially relevant to healthcare.
To discover new medicines, it’s critical to provide “the intellectual space for tinkering and capitalizing on the chance observations and unexpected directions so important in medical research,” Nassim Taleb and I wrote in 2008.
The point: innovation blossoms in an environment and culture that affords adequate oxygen (time, space, and, critically, I’d argue, receptivity to novelty) for innovators to match promising, perhaps still somewhat raw, new technology with pressing, worthy, and suitable problems.
Have you ever met someone with Alzheimer’s disease?
Odds are you have. Odds are that question calls to mind the face of a beloved grandparent, neighbor, or family friend whom you’ve stopped by to visit, nervously clenching a bouquet of flowers. You greet them hopefully and wonder if they’ll remember you this time.
America’s population is aging. About 6.2 million Americans are living with Alzheimer’s disease. Over the next 30 years, that number is expected to more than double.
If you took a moment to speak with your neighbor’s caregiver on your way out the door, you also know that the devastating cost of this disease transcends the individual.
Caregivers — mostly untrained, mostly women — reduce their own social and economic activity to make time to tend to the ill, reporting up to 60 hours per week engaged in direct patient care.
Clinicians believe that “the main goal of treatment for AD is not necessarily to extend life but to improve function and maintain independence.”
So let’s think about it this way — if you were diagnosed with Alzheimer’s disease, what would it be worth to keep your mind?
What would it be worth to our society to keep 12 million more Americans functioning (and their caregivers free and productive) over the next three decades?
Turns out that last week, ICER decided on an upper bound.
“Using a similar modeling approach as our approach to modeling aducanumab, a treatment assumed to have no known harms that could maintain all patients in MCI [mild cognitive impairment] for the rest of their lives would result in threshold pricing of up to $50,000-$70,000 per year.”
Aducanumab, Biogen’s controversial monoclonal antibody that successfully clears beta-amyloid plaques from the brain, may or may not provide a clinically meaningful benefit to a subset of patients. That’s for the FDA to decide. Approvable or not, it’s certainly not the miracle drug ICER’s describing.
But the more interesting claim in this report is the organization’s insistence that $70,000/year represents the maximum value for a therapy that could successfully halt Alzheimer’s disease-related cognitive decline with zero side effects.
Don’t get us wrong — $70,000 per year would indeed yield a mind-meltingly high return (assuming successful treatment of 6 million Alzheimer’s patients, that’s $420 billion a year). That’s not only unfathomable but unrealistic — that’s almost twice what our country spends on all branded drugs in a year.
However, the idea that such a wonder drug is on the immediate horizon is as unrealistic as anticipating a $420 billion/year return on any single drug.
The development process in Alzheimer’s disease is more likely to be incremental — a 5% gain in function here stacked onto a 10% improvement there. And for drugs that get us to that goal incrementally — a few percent at a time — anchoring to the wrong number could result in the first few drugs being undervalued, reducing interest in continuing the effort to address this formidable disease.
Let’s suspend reality for a moment and imagine that the FDA approved a drug deemed clinically meaningful in a subset of patients but that still represented just one step towards the greater goal of halting all Alzheimer’s disease-related cognitive decline.
Imagine that ICER recommended a price similar to that proposed for aducanumab ($2,560/year). Now consider whether America’s insurance system would allow all eligible patients to use that drug.
Would it deter many by imposing high out-of-pocket costs or making patients buy it out of their deductibles? Would payers, even after requiring prior authorization to ensure the drug was appropriately prescribed, impose copay accumulators so that patients still felt the cost, and try to block companies from helping through patient assistance programs?
The end result might be that such a drug is used by maybe half of all eligible patients, so in practice, the reward for its invention may end up totaling less than half of what ICER judged appropriate. And if, recognizing those barriers, a company tried to charge more upfront, ICER would be the first to protest.
Because ICER’s math doesn’t account for genericization, the organization is technically arguing that it’s worth spending $420 billion a year on this wonder drug forever. Over a century, that’s $42 trillion (ignoring inflation for simplicity). Indeed, that’s in the ballpark of, but actually below, the Alzheimer’s Association’s current estimate that by 2050, the US will be spending $1 trillion per year on Alzheimer’s care.
Luckily for all of us, even a $70,000 a year wonder drug would eventually go generic (or biosimilar).
So if it’s worth spending $42 trillion on this disease over a century per ICER, then it would be a great value for us to spend only $7 trillion.
To put it in context, $7T is enough to grant each of 30 drugs, for 15 years apiece, a reward as high as that of adalimumab (Humira), the biggest drug in history (sales of $15 billion a year), after which each of those drugs would go generic.
Break it down a little further and $15 billion/year, if spread across all 6 million Alzheimer’s patients, comes out to $2,500/patient. It’s a price that might pass muster with ICER. But if it is used by 200,000 patients while branded (and then more when it’s generic, which often happens), then $15 billion a year suggests a price of $75,000 per patient.
So it would have been more relevant and worthwhile for ICER to have answered the question: what should be the upper limit on the price of a genericizable drug that solves Alzheimer’s dementia? Given ICER’s math, the answer would be closer to $400,000/year ($42 trillion spread over 15 branded years, divided by 6 million patients). And that’s before factoring in caregiver spillover, as Milliman and economists at the National Bureau of Economic Research recently did, which would push that value even higher.
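For readers who want to check the arithmetic, the chain of back-of-envelope figures above can be re-run in a few lines. This is a sketch only: the constants (6 million patients, ICER’s $70,000 cap, a $15 billion/year Humira-sized reward, 200,000 branded users, a 15-year branded life) are the illustrative assumptions quoted in this piece, not projections.

```python
# Sanity check of the article's back-of-envelope Alzheimer's pricing math.
# All inputs are the article's illustrative assumptions, not forecasts.

PATIENTS = 6_000_000      # Americans living with Alzheimer's (approx.)
ICER_CAP = 70_000         # ICER's upper-bound price, dollars/patient/year

annual_spend = ICER_CAP * PATIENTS         # -> $420 billion per year
century_spend = annual_spend * 100         # -> $42 trillion over 100 years

# A Humira-sized reward: $15B/year for 15 branded years, for each of 30 drugs.
humira_rewards = 15e9 * 15 * 30            # -> roughly $6.75 trillion ("$7T")

# $15B/year spread across every patient vs. a realistic 200,000 branded users.
price_if_all_treated = 15e9 / PATIENTS     # -> $2,500 per patient per year
price_if_200k_treated = 15e9 / 200_000     # -> $75,000 per patient per year

# ICER's century-long spend compressed into a 15-year branded life.
price_15yr_branded = century_spend / 15 / PATIENTS   # -> ~$467,000/patient/year

print(f"Annual spend at ICER cap:   ${annual_spend / 1e9:,.0f}B")
print(f"Century spend:              ${century_spend / 1e12:,.0f}T")
print(f"30 Humira-sized rewards:    ${humira_rewards / 1e12:,.2f}T")
print(f"$15B across all patients:   ${price_if_all_treated:,.0f}")
print(f"$15B across 200k users:     ${price_if_200k_treated:,.0f}")
print(f"Cap over 15 branded years:  ${price_15yr_branded:,.0f}/yr")
```

The last figure makes the article’s point concrete: ICER’s own willingness-to-pay logic, applied over a realistic branded lifetime rather than in perpetuity, lands in the mid-$400,000s per year rather than $70,000.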
There are other variables ICER’s math ignores. Consider the value of the peace of mind that all of us would feel if we knew that we’d be spared some degree of dementia should we face an Alzheimer’s diagnosis. That’s arguably worth most of all — we won’t speculate on a value, but there are economists who do (they aren’t at ICER).
Even an incremental advance like aducanumab can benefit patients as it furthers our understanding of a disease — and can be a valuable piece of a future combination treatment or waystation to something more effective. ICER does not recognize that the drug will eventually face biosimilar competition that will drive down its price, making it much more cost-effective over the long run. ICER also fails to account for the value of delayed disease progression and the health, financial, and emotional benefits that accrue to patients and caregivers as a result of those delays.
Given these neglected variables, it’s critical that we not accept as gospel that solving Alzheimer’s dementia is only worth $70,000/year. If we do, we may never reach our goal, instead undervaluing and failing to reward the increments of progress that will allow us to get there.
Once we grasp the value of the Alzheimer’s moonshot, it becomes easier to appreciate that while the rewards of spurring that effort may seem large, they are worth it to society. This kind of math should guide our willingness to pay, but doesn’t mean that we’ll actually have to pay anything close to that. Companies face constraints on pricing, often from competition, that keep drug prices well below their societal value, as economists have demonstrated for the cholesterol-lowering drug atorvastatin (Lipitor).
The point is not to judge whether treating any one patient with any one drug is worth it, but to ask whether we’re making forward progress towards our larger goal of beating this disease.
If after 50 years of high rewards and aggressive research we find ourselves not much further along, we can revisit whether the threat and burden of Alzheimer’s is just something we have to accept in our lives. But we’re only just starting to crack the code of this disease.
For now, we believe that society has too much to gain to risk underpricing the hope of progress.
RA Capital is a registered investment adviser. This material is not intended, and should not be construed as, investment advice or recommendation to invest in any security. Likewise, this material is not intended as a solicitation to invest in any RA Capital product or service.