The promise of artificial intelligence (AI) in health care is clear: computers will perform work normally associated with intelligent beings. But while AI may be the next big thing, it's not poised to take your job just yet. As we imagine what AI can offer clinically and what the future might hold, our expectations need to be tempered with realism.
Two recent articles, one in the Journal of the American Medical Association (JAMA) and the other in the New England Journal of Medicine (NEJM), address the potential of AI in a balanced and thoughtful way.
In the NEJM perspective piece, authors Chen and Asch note that machine learning is at the "peak of inflated expectations." Machine learning software combined with the best human clinician "hardware," they argue, will do more to enhance care than either people or machines can do on their own. There are limits to what AI can do alone, because AI still boils down to algorithms. As the article notes, feeding more data into a model can improve its accuracy, but the resulting predictions aren't always useful. For example, big data might identify that palliative care consults and epinephrine infusions are associated with mortality, which won't be a revelation to anyone who practices in this area.
Beam and Kohane, writing in JAMA, note that while machine learning "is not a magic device that can spin data into gold," the addition of various risk calculators is adding value to the information being served up. AI will likely evolve into more advanced models that deliver more robust predictions we can act on. But they caution that these tools "come with no guarantees of fairness, equitability, or even veracity."
Modern medicine needs AI to make sense of all the information we're capturing and to identify trends within it. But we need to add our own intelligence to the mix and resolve to stay realistic about what AI can actually do.