Giving Models and Modelers a Bad Name
As someone who has spent a career building and studying disease models, primarily for cancer, the latest update from Chris Murray and the IHME modeling team makes me cringe.
The IHME model, readers will recall, has been frequently cited by the White House coronavirus task force. On May 4, the IHME called a press conference to release the results of its COVID-19 model update, which showed a staggering departure from its prior prediction of about 60,000 deaths through the end of August.
Obviously, this earlier model had to be updated – and fast: the official US death toll was already at more than 68,000 at the time of the May 4 press conference.
The new prediction from IHME through August estimates 134,000 deaths – more than double the previous model’s estimate. Murray, the institute’s director, told reporters that the death toll was significantly increased because the latest update had taken changes in mobility into account. Indeed, travel patterns now show an uptick in movement in states that are beginning the re-opening process, and increased mobility means increased opportunity for infection.
Murray, in a May 4 interview on CNN with Anderson Cooper, said the earlier model was built on an assumption of statewide social distancing measures remaining in effect through May in order to suppress transmission. Now that states are lifting the orders, the model had to be revised to incorporate the fact that “more people are getting out and about.”
While I am in total agreement that premature relaxation of social distancing will lead to an explosion of new cases and deaths, you don’t need a model to know that. That’s how epidemics work when a population is as far away as the US is from herd immunity. And models everywhere are showing the same thing.
Where I live in Washington State, Gov. Jay Inslee and his advisors looked at models that predicted dramatic growth of infections if the state were to relax the Stay Home, Stay Healthy order prematurely. Those models were part of the rationale for extending the stay-home order until the end of May.
It makes a nice story to tell the world that the reason your model's predictions have changed is that the population's behavior has changed. The implication is that it's not the model's fault; it's the politicians' and the people's shifting behavior. Indeed, that was the same explanation the IHME gave in early April, when its model's projected death toll dropped dramatically from about 90,000 to about 60,000. At the time, Murray explained that the change in predictions showed that social distancing had been a wild success – better than we could ever have imagined.
The lowering of the death estimate, in turn, led to howls of protest that we had over-reacted by shutting down and staying home. Those howls put pressure on elected officials to relax social distancing. Clearly, models do matter.
On Apr. 14, I wrote in these pages that the interpretation regarding social distancing success was wrong and misleading, and that it placed far too much credibility in the early predictions. Those early results came from an oversimplified empirical model: it extrapolated death curves from the earlier experience in China, Italy, and eventually Spain; it assumed that social distancing would work as well in the US as it had in those settings; and it assumed that the pattern of deaths after the peak would be a mirror image of the pattern before it. Revisions of the model tried to soften these assumptions, introducing many other assumptions along the way. It is likely that these revisions, rather than social distancing success, drove the dive in the predicted deaths in that April revision.
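The mirror-image assumption is easy to see in a toy sketch. This is an illustration of the symmetry constraint, not the IHME's actual code; the function name and all numbers here are made up for the example:

```python
import math

def symmetric_daily_deaths(t, peak, t_peak, sigma):
    """Toy stand-in for a symmetric (Gaussian-shaped) death curve: fitting
    such a curve forces the decline after the peak to mirror the rise."""
    return peak * math.exp(-((t - t_peak) ** 2) / (2 * sigma ** 2))

# Under this assumption, day peak+14 must look exactly like day peak-14,
# no matter what the epidemic actually does after the peak.
rise = symmetric_daily_deaths(16, peak=2000, t_peak=30, sigma=10)
fall = symmetric_daily_deaths(44, peak=2000, t_peak=30, sigma=10)
assert abs(rise - fall) < 1e-9  # mirror-image by construction
```

Whatever such a curve says about the way down is baked in by the shape chosen for the way up, which is exactly why the post-peak predictions were so fragile.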
The same thing is happening now. A quick skim of the IHME's model-update page reveals an eye-glazing list of changes, including some that have nothing to do with mobility and everything to do with improving how well the model matches the data on cases and deaths recorded up to this point.
When Murray spoke about mobility patterns causing an update to the model, he neglected to say in the same breath that the team had made fundamental changes to its model-building approach in order to arrive at its current set of predictions. IHME is now more like a hybrid of empirical and mechanistic approaches. What we aren’t seeing from IHME is a clear and transparent statement on the truly humbling “back to the drawing board” nature of what it has just done by rebuilding its model.
Here is one change that truly raised my eyebrows:
“Since our initial release, we have increased the number of multi-Gaussian distribution weights that inform our death model’s predictions for epidemic peaks and downward trends. As of today’s release, we are including 29 elements… this expansion now allows for longer epidemic peaks and tails, such that daily COVID-19 deaths are not predicted to fall as steeply as in previous releases.”
This change alters the shape of the assumed mortality curve so it does not go down as fast; it alone could explain a substantial portion of the inflation in the revised mortality predictions.
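A toy sketch shows why this one change matters so much. The weights, dates, and spreads below are purely illustrative, not the IHME's actual parameters; the point is only that adding mixture elements shifted later in time keeps the fitted curve from collapsing right after the peak:

```python
import math

def gaussian(t, weight, mu, sigma):
    """One weighted Gaussian element of the mixture."""
    return weight * math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def mixture(t, components):
    """Weighted sum of Gaussian elements; more elements let the fitted
    curve hold a plateau and decline more slowly after the peak."""
    return sum(gaussian(t, w, mu, s) for (w, mu, s) in components)

single = [(1.0, 30, 10)]                 # one element: a symmetric peak at day 30
multi = [(0.6, 30, 10), (0.4, 55, 18)]   # hypothetical second element, shifted later

# Forty days past the first peak, the mixture still carries substantial
# daily deaths, while the single symmetric curve has all but vanished.
tail_single = mixture(70, single)
tail_multi = mixture(70, multi)
assert tail_multi > tail_single
```

Change the number and placement of elements and the tail of the predicted death curve changes with it – before a single person alters their mobility.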
The proof is in the Washington State pudding. The IHME is no longer predicting that we will be down to less than one case per million on May 28 and can therefore safely reopen, as it did in its previous incarnation. Yet little has changed here on the policy front, and residents' mobility patterns held steady through April, according to Google's mobility reports.
I am not aiming my comments at the IHME modeling team, which I imagine is sincerely doing its best to deliver results that match the data and produce ever-more-complex predictions of the future. They are working overtime to fit rapidly evolving and imperfect data, marching to a drumbeat of deadlines from everyone who wants the impossible: crystal-clear and precisely accurate forecasts. To their credit, the last part of the May 4 update does note that both model changes and increased mobility projections could account for the change in predictions. But that never made it into the headlines. And that is a problem of transparency.
Transparency begins with a sincere effort by those who communicate models to make sure that they are properly interpreted by the policymakers and public who use them. The IHME pays lip service to transparency by documenting its model updates on its website. But the pages-long description is chock-full of technical fine print and hard to understand, even for a seasoned modeler like me.
A key part of transparency is acknowledging your model’s limitations and uncertainties. This is never front and center in the IHME’s updates. It needs to be.
It is ironic that I am being this critical when I agree so strongly with the message being broadcast as a result of this update. Make no mistake – if some states that are opening prematurely, or some that are considering doing so, change their minds as a result of this update, it will be a very good thing.
We have to remember that this epidemic took root and grew massively in every state from minuscule beginnings. We should all be sobered by our real-time experience of exponential growth. If there is an ambient prevalence of more than a handful of cases in any state, then anything that increases the potential for transmission will lead to renewed growth. We do not need a model to predict that. But as we plan how to reopen in each state of our union, we need to know what extent of growth in new infections we can manage. And models can help us with that.
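The point about small beginnings can be made with a back-of-the-envelope geometric model. This is an illustration, not a forecast; the seed size, reproduction numbers, and generation count are all hypothetical:

```python
def project_cases(initial, r_eff, generations):
    """Simple geometric growth model: each generation (roughly 5 days),
    the case count is multiplied by the effective reproduction number."""
    cases = initial
    for _ in range(generations):
        cases *= r_eff
    return cases

# A handful of cases with R_eff = 1.5 sustained for 12 generations
# (about two months) grows past a thousand daily cases...
growing = project_cases(10, 1.5, 12)
assert growing > 1000

# ...while the same seed with R_eff = 0.8 dwindles toward zero instead.
shrinking = project_cases(10, 0.8, 12)
assert shrinking < 1
```

The knife's edge between those two trajectories – how much reopening pushes the effective reproduction number back above one – is precisely where models earn their keep.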
When and how much we can reopen will depend on the surveillance and containment infrastructure that we put in place to control upticks and outbreaks. I am convinced that models can help us think clearly about complex policy questions – such as finding the balance between changes that increase transmission and measures to contain it. Models, along with other data and evidence, can guide us towards making sensible policy decisions. I have seen this happen time and time again in my work advising national cancer screening policy panels.
But as modelers, we have a responsibility. We have to be humble. We must make sure that the key caveats and uncertainties that are the nature of our work find their way into the headlines and are not relegated to the fine print.
If we don’t, we will give modeling, and modelers everywhere, a bad name.