Covid-19: Algorithms that can predict mortality risk are here

AI models that predict mortality with 90% accuracy could revolutionise the medical field


When Covid-19 first hit Europe, doctors were uncertain about how this new and deadly disease affected patients and what they would need during treatment. To give hospitals a glimpse of the future, a team of computer science researchers at the University of Copenhagen developed a machine-learning model that could predict, with 90 per cent accuracy, how likely Covid patients were to die of the disease.

To create the algorithm driving the calculation, researchers input data from 4,000 Danish Covid-19 patients. It included their age, sex, body mass index (BMI), lab tests, vital signs, prescriptions and consultations, as well as any other conditions they had.

Using that information, the algorithm was “trained” to learn which risk factors made Covid patients most likely to die of the disease.
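The general idea of learning risk factors from patient records can be sketched in a few lines of code. The example below is a hypothetical, much-simplified stand-in: it trains a basic logistic-regression model on synthetic records with three made-up features (age, BMI, comorbidity count), whereas the Danish team worked with full electronic health records and a deep learning model.

```python
import math
import random

# Hypothetical sketch: learn mortality risk factors from synthetic
# patient records. Features and data are invented for illustration;
# the real Danish model used far richer data and deep learning.
random.seed(0)

def synth_patient():
    age = random.uniform(20, 90)
    bmi = random.uniform(18, 40)
    comorbidities = random.randint(0, 5)
    # Synthetic ground truth: risk rises with age and comorbidities.
    logit = 0.08 * (age - 60) + 0.05 * (bmi - 25) + 0.6 * comorbidities - 1.5
    died = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return [age / 90, bmi / 40, comorbidities / 5], died  # scaled features

data = [synth_patient() for _ in range(2000)]

# Logistic regression trained with plain batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = p - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

def predict(x):
    # Returns an estimated probability of death for a feature vector.
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

After training, the learned weights play the role of the “risk factors”: a large positive weight on a feature means the model treats it as increasing the chance of death.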

The AI could then determine, with up to 90 per cent certainty, whether a person not yet infected with Covid-19 would die of the disease if they became infected. Once a patient was admitted to hospital, the model could predict with 80 per cent accuracy whether they would need a respirator.

“The new thing here was the depth of the data we had access to,” says Mads Nielsen, one of the algorithm’s creators. “We had access to all the electronic healthcare records, including lung X-rays, for all the patients that were affected with Covid-19 in the whole region.”

Algorithms that predict mortality have made leaps forward in the past year and the Danish mortality model is not the first to claim high rates of accuracy. An algorithm developed in October by academics in China, which also analysed levels of protein and urea nitrogen in Covid-19 patients’ blood, boasted accuracy rates of 96 per cent.

In the US, researchers at health care provider Geisinger announced last month that they had also created an algorithm that was able to predict mortality within the next year, using videos of echocardiograms, a type of scan that looks at the heart and nearby blood vessels.

Personalised diagnostics

The promise of algorithms that can predict who is most likely to die is that the technology could enable doctors to make early interventions, detecting the disease before it is too late, and personalising treatments.

Dietmar Frey, head of Berlin’s Charité Lab for AI in Medicine, says humans might be able to understand how the risk of diabetes correlates with sugar intake, but artificial intelligence can simultaneously analyse the importance of age, location, profession and other pre-existing conditions, taking a multi-dimensional approach.

“The big advantage of machine learning is that it can find these hidden correlations and patterns,” he says. “In the not-too-distant future, I think you could personalise the diagnostics and also the treatment for a lot of diseases, and you could much sooner find risks for diseases.”

But the use of the Danish algorithm has been restricted. “The models are only being used for hospital planning, not for treatment of the individual patients,” says Nielsen. “On every Covid-19 patient in the hospital, they are running a huge battery of tests.” Nielsen adds that the algorithm is currently trying to assess which tests are important.

‘Mangled’ data

The reason the algorithm is not making decisions on individual patients’ treatment is a key problem that lurks at the heart of medical algorithms, one that has led some to brand them sinister “black boxes”. Often, even the scientists who develop an algorithm have no idea how it works.

Maxine Mackintosh at the Turing Institute describes deep learning models—like the one in Denmark—as comparable to sausage machines in that they mangle information beyond recognition. “[The information] gets completely re-processed,” she says. “You cannot tell what went into it in the first place. But you just know that what comes out of it is a sausage.”

Nielsen says his team is currently trying to unpick the algorithm to understand which features drive its risk calculations. For him, the choice to use a deep learning model, which is harder to explain, meant prioritising performance over understanding. The algorithm will need to win the trust of doctors and patients, especially as Copenhagen University has ambitions to deploy the technology in other areas of medicine, such as predicting which patients might suffer complications during surgery.
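One common way to probe a black-box model, in the spirit of what Nielsen’s team is attempting, is permutation importance: shuffle one input feature at a time and measure how far the model’s accuracy falls. A large drop suggests the model leans heavily on that feature. The sketch below is purely illustrative, with an invented stand-in “model” and synthetic data; the technique itself works on any predictor.

```python
import random

# Hypothetical sketch of permutation importance. The stand-in "model"
# simply thresholds a weighted sum; any black-box predictor would do.
random.seed(1)

FEATURES = ["age", "bmi", "comorbidities"]

def model(x):
    # Invented predictor that relies mostly on age and comorbidities.
    return 1 if 0.7 * x[0] + 0.05 * x[1] + 0.6 * x[2] > 0.8 else 0

# Synthetic labelled records; labels here are the model's own outputs,
# so baseline accuracy is perfect and any drop is due to shuffling.
data = []
for _ in range(1000):
    x = [random.random(), random.random(), random.random()]
    data.append((x, model(x)))

def accuracy(dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

baseline = accuracy(data)

importance = {}
for i, name in enumerate(FEATURES):
    shuffled_col = [x[i] for x, _ in data]
    random.shuffle(shuffled_col)  # break the feature-label link
    permuted = [(x[:i] + [v] + x[i + 1:], y)
                for (x, y), v in zip(data, shuffled_col)]
    importance[name] = baseline - accuracy(permuted)

for name, drop in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: accuracy drop {drop:.3f}")
```

Running this ranks the features by how much shuffling them hurts: the weakly-weighted feature barely moves the score, while the heavily-used ones cause large drops.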

But according to Nielsen, we use technology we do not understand every day. “When we press the brake in our car, we have an ABS [anti-lock braking system] that is making us not brake for milliseconds if we are doing it wrong,” he says. “We do not know how that works, but we are happy because it normally does work. When a technology has been in use for a long time, then users trust it.”

He adds: “That is the other solution: if we cannot make the model explainable, then we have to prove to a much higher degree [that] they work for the individual.”