As humans, we are bound to make mistakes.
However, to what extent is that tolerable in the field of medicine? According to a study reported by the Pew Research Center, diagnostic mistakes are the most common and dangerous errors made by doctors in the U.S., resulting in permanent injury or death for as many as 160,000 patients annually. This statistic raises the question: are doctors not as good as they seem? Or perhaps the problem is not the doctor, but the human.
One promising remedy for human error on this scale is artificial intelligence. According to an article from Medical News Today, machine-learning systems can detect diseases ranging from cancers to eye conditions as accurately as health professionals. Moreover, machines have an advantage that humans do not: they lack brains. They are uninfluenced by emotion, bias, or self-interest, all of which factor into the ethical concerns surrounding doctors. For instance, in many private practices doctors are paid per procedure, so some perform unnecessary procedures to earn more money. Given the emotionless, mindless nature of artificial intelligence, such ethical concerns could potentially be mitigated if machines were integrated into medicine.
But would implementing AI in medicine actually bring us closer to our goals? Or would it undermine what we're trying to achieve?
One particular obstacle to the integration of AI is our desire for human-to-human interaction. Even beyond medicine, from baristas to receptionists, there are many jobs we would rather see humans do than robots. We have grown accustomed to humans in these roles and have developed a strong sense of trust in them. That trust is especially strong with physicians; we are simply too used to humans overseeing operations this crucial to our lives.
However, it is important to remember that thousands of lives are at stake. Perhaps the integration of artificial intelligence in the health sector means accepting short-term consequences in exchange for long-term improvement.
But even though we are trying to minimize human error with machines, is that really plausible? The answer is no: programming a machine to be perfect does not guarantee perfection. According to a Forbes article by UCI artificial intelligence professor Neil Sohita, AI will never achieve perfection because the way artificial intelligence is developed will always leave a margin of error, especially in cases where it has to adapt to new situations. Machines simply lack the common-sense knowledge that humans have, which is why it may be difficult for them to completely replace human doctors.
I have witnessed this firsthand. As a strong believer in the intersection of technology and healthcare, I too have experimented with machine learning on healthcare datasets, ranging from X-ray scans to optical coherence tomography imaging. In short, I have trained a number of image classification models on such datasets to diagnose diseases, and my experience backs up Professor Sohita's point. In machine learning, when a model trained on a small medical dataset reports 99%, or even above 95%, accuracy, something is often wrong. This is typically a sign of overfitting: the model stops learning generalizable features and instead memorizes patterns specific to the training data. For instance, it might pick up on the resolution of the camera that produced the images, or on quirks not found in other datasets; it fits itself so tightly to the training data that it underperforms on unseen data, defeating its purpose. In one of my own projects, I knew something was up when I saw 96% on one of my models. I then studied my loss curves, which should trend consistently downward on both the training and validation data if the model fits properly; instead, the validation curve fluctuated. That is how numbers mislead the public, and as Sohita says, perfection is impossible. These models will never dramatically outperform the human benchmark, but they can definitely help.
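To show what that red flag looks like in practice, here is a minimal sketch. This is not my original project code: the "scans" below are synthetic random noise with random labels, so memorization is the only thing the deliberately oversized model can do, which exaggerates the overfitting signature described above.

```python
# Minimal overfitting demo: compare training vs. validation loss.
# The data is synthetic (random pixels, random labels), standing in for a
# real medical imaging dataset, so the model can only memorize.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
x_train = rng.random((200, 64, 64, 1)).astype("float32")  # 200 fake "scans"
y_train = rng.integers(0, 2, 200)                          # random labels
x_val = rng.random((50, 64, 64, 1)).astype("float32")
y_val = rng.integers(0, 2, 50)

# Deliberately overparameterized for 200 samples, so memorization is easy.
model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train, epochs=30,
                    validation_data=(x_val, y_val), verbose=0)

# The overfitting signature: training loss keeps falling while
# validation loss stalls near chance (~0.69 here) or climbs.
for epoch, (tr, va) in enumerate(zip(history.history["loss"],
                                     history.history["val_loss"])):
    print(f"epoch {epoch:2d}  train_loss={tr:.3f}  val_loss={va:.3f}")
```

Running this, the training loss keeps dropping while the validation loss stalls or climbs; on a real dataset, that same divergence, or a fluctuating validation curve like the one I saw, is the cue to distrust an impressive accuracy figure.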
Another pertinent issue with AI extends beyond medicine itself: there are ethical issues that spill into the tech sector as well. When developing these novel algorithms, programmers must work with sensitive healthcare data, and numerous ethical principles, such as beneficence, patient autonomy, and non-maleficence, must be respected in the process. Using AI as a solution to unethical practices in healthcare could therefore create new ethical problems in the tech sector, which is why AI is more of a tradeoff than a solution.
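To make that responsibility concrete, here is a purely illustrative sketch of one safeguard developers apply before sensitive records ever reach a model. All field names here are hypothetical, and real de-identification (for example under HIPAA's Safe Harbor rule) involves far more than dropping columns.

```python
# A hypothetical illustration: strip direct identifiers from patient
# records before they reach a training pipeline. Real de-identification
# involves far more than this; it only shows the principle.
import pandas as pd

# Toy records with made-up fields standing in for a real metadata table.
records = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones"],
    "date_of_birth": ["1950-01-01", "1962-07-14"],
    "scan_file": ["scan_001.png", "scan_002.png"],
    "diagnosis": ["pneumonia", "normal"],
})

IDENTIFIERS = ["patient_name", "date_of_birth"]
deidentified = records.drop(columns=IDENTIFIERS)

print(deidentified)  # only the fields a model actually needs remain
```

Even a step this simple reflects non-maleficence in practice: the model gets the clinical signal it needs without ever seeing who the patient is.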
All around the world, we see AI as the future. From speech-to-text to Siri to Tesla's self-driving cars, the technology is everywhere. But one field it won't take over is healthcare: human oversight matters so much there that the performance gains AI brings are negligible by comparison. That is what's overhyped about AI; we think it's the future, but it won't be the future everywhere. Perhaps the most optimal solution is a mix of human and AI in the health industry.