
Medical Data Sets and the Inherent Limits of AI

A heavy diet of medical news might lead one to believe that we are on the verge of a revolutionary new kind of artificial intelligence (AI) capable of rendering the human side of healthcare delivery moot. However, nothing could be further from the truth. AI is nowhere near ready for prime time in any highly technical field, least of all the arena of critical healthcare decisions.

There is, understandably, a lot of excitement and passion in the AI community. Those who understand AI's potential can imagine a completely different world, one in which humanity is made better by the underlying support of artificially intelligent machines. Yet that world is not reality, and we have a long way to go before it is.

AI Is Inherently Limited

We read a lot of news stories about how AI may improve breast cancer outcomes by making earlier detection and prevention more common. Such stories are valuable in that they awaken us to the future potential of AI. What they fail to adequately explain to readers is that AI, at this time, is inherently limited.

Another core issue is the common confusion between deep learning and artificial intelligence. They are not the same thing. Deep learning is the ability of a machine to analyze data, find correlations, and then learn from those data and correlations. Not only is deep learning very possible, it has had practical applications for years.
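To make that distinction concrete, here is a minimal sketch of the learn-from-data idea. Everything in it is hypothetical: the data is synthetic, the features (age and a lab marker) are made up, and a simple scikit-learn classifier stands in for a full deep network just to keep the example short.

```python
# Minimal sketch: a model is shown labeled examples, finds correlations in
# them, and applies what it learned to a new case. All data and features
# here are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "patients" with two hypothetical features.
n = 500
age = rng.uniform(30, 80, n)
marker_level = rng.normal(1.0, 0.3, n)

# A made-up rule generates the labels, standing in for real clinical outcomes.
risk_score = 0.04 * age + 1.5 * marker_level
labels = (risk_score + rng.normal(0, 0.3, n) > 4.0).astype(int)

X = np.column_stack([age, marker_level])

# The model only ever "learns" from the data it is handed; it cannot go
# looking for information it was never given.
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Apply the learned correlations to a new, unseen patient.
new_patient = np.array([[62.0, 1.2]])
print("Predicted risk class:", model.predict(new_patient)[0])
print("Predicted probability:", model.predict_proba(new_patient)[0, 1])
```

Notice that nothing in this sketch decides what data to gather or whether the features are the right ones; those choices were made in advance, by a person.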

Artificial intelligence is something else entirely: the ability of a machine to learn and make decisions by going out, finding the data it needs, and then analyzing that data. The inherent weakness of today's AI is that it cannot yet figure out what information it lacks, nor does it know how to find what it needs in order to learn and make decisions.

Every AI system still relies on human instructions to do what it does. The limits of AI are therefore set by the human beings who program its systems. This manifests itself in a number of important ways. Take signal processing, for example.

AI and Signal Processing

Let’s say you have a huge healthcare data set you want to use to create an AI system capable of accurately predicting the likelihood of colon cancer among a certain group of patients. Your system will require advanced signal processing technology capable of extracting only the pertinent data from your large set, according to Rock West Solutions.

The problem is, your AI system will not be able to decide what is noise and what is not. A Rock West engineer is going to have to determine that and then program the information into a signal processing algorithm. Your AI system will then be completely dependent on that engineer’s decisions about what constitutes noise.
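To see what that dependence looks like in practice, here is a minimal, hypothetical sketch. The sampling rate, cutoff frequency, and signals are all invented for illustration and are not drawn from any real Rock West algorithm; the point is only that a human's definition of "noise" gets baked into the preprocessing.

```python
# Sketch: the notion of "noise" is decided by a human and hard-coded into the
# preprocessing. Everything downstream inherits that choice.
import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE_HZ = 250.0             # assumed sensor sampling rate (hypothetical)
ENGINEER_CHOSEN_CUTOFF_HZ = 40.0   # a human decided anything above this is "noise"

def denoise(raw_signal: np.ndarray) -> np.ndarray:
    """Low-pass filter a raw signal using a human-chosen cutoff.

    The AI system downstream never questions this cutoff; it simply
    consumes whatever survives the filter.
    """
    b, a = butter(N=4, Wn=ENGINEER_CHOSEN_CUTOFF_HZ, fs=SAMPLE_RATE_HZ, btype="low")
    return filtfilt(b, a, raw_signal)

# Fabricated example: a slow "physiological" component plus fast "noise".
t = np.linspace(0, 2, int(2 * SAMPLE_RATE_HZ), endpoint=False)
raw = np.sin(2 * np.pi * 1.5 * t) + 0.4 * np.sin(2 * np.pi * 80 * t)

clean = denoise(raw)
print("Raw signal std:     ", round(raw.std(), 3))
print("Filtered signal std:", round(clean.std(), 3))
```

Change ENGINEER_CHOSEN_CUTOFF_HZ and every downstream prediction changes with it; the system itself has no way of knowing whether the new value is better or worse.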

We Are Not There Yet

The dirty little secret about AI is that it doesn't really exist in its purest form. We are just not there yet. In practice, some AI systems have proven helpful in the healthcare setting while others have failed outright. That's why researchers at Rice University are working so hard to figure out how to make AI truly intelligent.

Few would doubt the potential of AI to revolutionize healthcare at some point in the future. But as Rice University's Genevera Allen explains, current AI systems are incapable of measuring their own accuracy, which means they cannot really learn. Until they can, it is hard to trust AI as a predictive or analytical tool for healthcare.
