Predicting Death Could Change the Value of a Life

New technology promises to forecast the length of your life. But for disabled people, measuring mortality can prove fatal.


If you could predict your death, would you want to? For most of human history, the answer has been a qualified yes. In Neolithic China, seers practiced pyro-osteomancy, or the reading of bones; ancient Greeks divined the future by the flight of birds; Mesopotamians even attempted to plot the future in the attenuated entrails of dead animals. We’ve looked to the stars and the movement of planets, we’ve looked to weather patterns, and we’ve even looked to bodily divinations like the "child born with a caul" superstition to assure future good fortune and long life. By the 1700s, the art of prediction had grown slightly more scientific, with mathematician and probability expert Abraham de Moivre attempting to calculate his own death by equation, but truly accurate predictions remained out of reach.

Then, in June 2021, de Moivre’s fondest wish appeared to come true: Scientists discovered the first reliable measurement for determining the length of your life. Using a dataset of 5,000 protein measurements from around 23,000 Icelanders, researchers at deCODE Genetics in Reykjavik, Iceland, developed a predictor for the time of death—or, as their press release explains it, “how much is left of the life of a person.” It’s an unusual claim, and it comes with particular questions about method, ethics, and what we mean by life.

A technology for accurately predicting death promises to upend the way we think about our mortality. For most people, most of the time, death remains a vague consideration, haunting the shadowy recesses of our minds. But knowing when our life ends, having a count of the days and hours left, removes that comfortable shield of abstraction. It also makes us see risk differently; we are, for instance, more likely to try unproven therapies in an attempt to beat the odds. If the prediction came far enough in advance, most of us might even try to avert the outcome. Science fiction often tantalizes us with that possibility; movies like Minority Report, Thrill Seekers, and the Terminator franchise use advance knowledge of the future to change what’s to come, averting death and catastrophe (or not) before it happens. Indeed, when healthy and abled people think about predicting death, they tend to think of these sci-fi possibilities—futures where death and disease are eradicated before they can begin. But for disabled people like me, the technology of death prediction serves as a reminder that we’re already often treated as better off dead. A science for predicting the length of life carries with it a judgment of its value: the assumption that more life equates to better or more worthwhile life. It’s hard not to see the juggernaut of technocratic authority bearing down on the most vulnerable.

This summer’s discovery was the work of researchers Kari Stefansson and Thjodbjorg Eiriksdottir, who found that the levels of individual proteins circulating in our blood plasma relate to overall mortality—and that different causes of death share similar “protein profiles.” Eiriksdottir claims that they can measure these profiles in a single draw of blood, seeing in the plasma a sort of hourglass for the time left. The scientists call these mortality-tracking indicators biomarkers, and up to 106 of them help to predict all-cause (rather than illness-specific) mortality. But the breakthrough for Stefansson, Eiriksdottir, and their research team is scale. The process they used, a SOMAmer-based multiplex proteomic assay, lets the group measure many thousands of proteins at once.

The result of all these measurements isn’t an exact date and time. Instead, it gives medical professionals the ability to accurately identify the patients most likely to die (the highest-risk group, about 5 percent of the total) and those least likely to die (the lowest-risk group), all from a prick of the needle and a small vial of blood. That might not seem like much of a crystal ball, but it’s clearly just a jumping-off point. The deCODE researchers plan to refine the process to make it more “useful,” and their effort joins other projects racing to be first in death-prediction tech, including an artificial intelligence algorithm for palliative care. The creators of that algorithm hope to use “AI’s cold calculus” to nudge clinicians’ decisions and to force loved ones to have the dreaded conversation—because there’s a world of difference between “I am dying” and “I am dying now.”

In their press release, the deCODE researchers praise the ability of biomarkers to make predictions about large swaths of the population. “Using just one blood sample per person,” says Stefansson of the clinical trials, “you can easily compare large groups in a standardized way.” But a standardized treatment does not map well onto the deeply varied needs of individual patients. What happens when a technology like this—supplemented by AI algorithms—leaves the research lab and enters real-world use? The Covid-19 pandemic has given us an answer. It marks the first time death-predictive data has been put to work at such a large scale—and it has revealed the deeply disturbing limits of “cold calculus.”

In October 2021, a study at the University of Copenhagen showed that a particular protein on the cell surface can predict who is in danger of serious infection from the novel coronavirus. Once this protein biomarker was employed, it identified who would become severely ill with 78.7 percent accuracy. On the face of it, this seemed like excellent news. We should want to know which patients will be most in need of care—and triage, or sorting, has traditionally been a means of saving more lives more effectively. Everyone would be cared for; less life-threatening cases might just wait longer to see a doctor. But as Covid-19 overwhelmed ICU wards and hospitals ran out of supplies and beds, triage was instead employed to decide who received care and who was turned away.

During the height of the pandemic, in May 2020, New York’s guidelines targeted saving the most lives, “as defined by the patient’s short-term likelihood of surviving the acute medical episode.” Sorting out exactly what that means can be difficult; it could refer to saving “as many people as possible” or saving “the greatest possible number of life years,” or, even more problematically, saving “the greatest amount of quality-adjusted life years.” In the as-many-as-possible model, it might mean privileging those without the protein that predicts long Covid hospital stays. In the life-years models, particularly when subjective measures of quality are involved, those with disabilities or chronic conditions, or even mental health issues, may be excluded. Some US states had emergency protocols stating that “individuals with brain injuries, cognitive disorders, or other intellectual disabilities may be poor candidates for ventilator support,” while a physician in Oregon cited low “quality of life” as a reason to refuse a ventilator. The research now available on the worst outbreaks has shown just how deep the bias against disabled lives really goes.

As the pandemic drags on, disabled persons continue to fear being denied care because of someone else’s measurement of their amount, quality, or value of life left. If the standardized predictions envisioned by deCODE are made with a view to conserving care for able-bodied people first, then measuring mortality does more than predict death; for disabled people, it can actually hasten it.

There are better ways to measure a life than counting the days till its end. Disability advocates, many of them also disabled persons, have long registered the systemic bias in our health care systems, but the Covid crisis has helped bring some of these issues to the fore. As Matthew Cortland, lawyer and senior fellow at Data For Progress, explains, automated algorithms offered by AI or by deCODE “could be used to determine who to deny care to,” as in “they're going to die anyway, we should save the money.” Similarly, Alyssa Burgart, a physician, bioethicist, and clinical director at Stanford, describes the way crisis thinking tends to consider shorter lives of less value, as if disabled, chronically ill, or elderly people were less human or less worth saving. The assumptions being made now will be with us long after Covid has come and (hopefully) gone; our thinking in crisis needs to change or disabled people will always be a secondary consideration.

The problem is the concept of “long-term survivability,” the focus on length of life as a means of assessing value. “Death prediction technology doesn’t have to be bad,” explains Burgart, “it all depends on human decisions.” The tech is not as objective or as accurate as many suppose, but when policymakers assume a death prediction is right, she says, they “risk making foolish decisions to give more resources to people who are already doing just fine: How can we ensure the most needed resources go to those who can benefit from them the most?” We must instead protect the most vulnerable.

Cortland suggests the same data could be used to “surge resources” to those who are at “increased relative risk of short-term mortality.” For instance, when assessing patients for ventilators, use two criteria: 1) who would be most likely to die without a ventilator, and 2) who would be most likely to survive with one. Death itself should not be the focus, nor a solution in its own right. The question, he explains, should be “What keeps people alive?” It’s not only ICU beds and ventilators; it’s also resource allocation outside of hospitals: a safe place to live, enough to eat, affordable medicine. Predictive algorithms cannot parse social inequality; public health officials and policymakers can’t let those algorithms inadvertently enforce the social determinants of health through denial of care.

The lives of a disabled person, a disadvantaged person, an ethnic minority, an elderly person, a woman, a child, a refugee all matter. Every moment is precious, every breath, every spoken word, every whispered wish. Prediction tools will continue to be used, and they can be used for good, but we owe a responsibility to the least protected. When crises come—and they will, whether through new variants, entirely new diseases, or the consequences of climate change—we could build new hospitals, temporary wards, and treatment tents; we could bring doctors out of retirement or issue provisional emergency treatment licenses (as Canada has done). We could exhaust the resources we have to ensure that all lives are treated with equity. Further, policymakers must foreground those most at risk from death-prediction tech and put advocates in charge of crafting the rules that will control and contain it. The future, says Burgart, is always shaped by our decisions and priorities in the present. Death prediction may be useful for early detection of disease, but in the end, it will never be able to measure the value of life.

That is something we must do for ourselves.

