
“Let’s move past the hype cycle” surrounding AI, say medical experts

Posted: 30 June 2017 | By Charlie Moloney

Machine Learning Conference held at Stanford University, 2017

Machine learning (ML), applied in the field of medicine, is “situated at the peak of inflated expectations”, wrote Stanford professors of medicine Jonathan Chen and Steven Asch in an article published this Thursday in the New England Journal of Medicine.

Chen and Asch called for a “stronger appreciation of the technology’s capabilities and limitations”, to prevent “the hype from outpacing the hope for how data analytics can improve medical decision making”.

The benefits of having Artificial Intelligence (AI) analyse large quantities of medical data are often overstated, the authors pointed out, as “the relevance of clinical data decays with an effective ‘half-life’ of about four months”.


AI tends to “recapitulate historical trends”, which are of limited help in establishing how medicine should be practised in future scenarios that, the professors pointed out, are often fundamentally impossible to forecast.

“Identical twins with the same observable demographic characteristics, lifestyle, medical care, and genetics necessarily generate the same predictions – but can still end up with completely different real outcomes”.

Even if a machine could perfectly predict the future, the researchers theorised, it could only do so by looking at associations in the data – seemingly linked factors which might be misleading.

The machine would look at indicators such as the patient’s medical records, age, and family history in a “theory-free” way, which won’t necessarily point towards an effective treatment plan.


For example, a machine could predict that a patient receiving noradrenaline (a drug used to raise dangerously low blood pressure, for instance after severe blood loss) or palliative care is highly likely to die, because that is what the data will show.

However, the AI might make “a substantial leap in causal inference” by concluding that those treatment plans should be stopped, because they are factors in the patient’s death.

According to Chen and Asch, AI systems usually only make “highly accurate” predictions in cases where the outcome is already obvious to any practising medical professional.

Whilst the doctors praised the profoundly positive impact that AI has already had on medicine in other ways, they said that the question of whether a machine could surpass a human in its ability to treat patients makes “for stimulating debate – but is largely irrelevant”.
