On Ensuring Ethics In Healthcare AI with Patrick Bangert, VP of AI at Samsung SDS
The third session of Worldwide AI Webinar 2022 gave us a clearer view of AI applications in healthcare and their ethical considerations, presented by Patrick Bangert, VP of AI at Samsung SDS. He argued that healthcare is one of the major fields that will be revolutionized by AI in the near future, and that people will feel a significant change in how they experience and interact with the healthcare system.
The recording will be available to watch on demand shortly! Please stay tuned!

Key Takeaways

AI can improve diagnosis speed and accuracy

While patients usually accept the outcome of an injury diagnosis, more severe conditions like diabetes, cancer, neurological disorders, cardiac disorders, ocular disorders, and respiratory disorders often prompt patients to seek a second opinion.
According to a comprehensive research report by Market Research Future (MRFR), the medical second opinion market is estimated to reach US$9,751.58 million by the end of 2027, growing at a CAGR of 15.8%.
Patrick Bangert interpreted this demand for second opinions as a sign that roughly 30% of doctors' diagnoses are incorrect.
Cancer diagnostics, meanwhile, are severely constrained by the availability of medical professionals and the amount of time they have. A patient typically waits about six weeks to receive the result of their condition.
AI, Patrick stressed, can do this almost instantly if trained properly.
“The benefits (of applying AI into healthcare) to the patients is obvious. The benefits are a vast reduction of anxiety and worrying and potentially a reduction of severity in the disease because you treat it early.” - Patrick Bangert, VP of AI at Samsung SDS

What kind of explanation can you expect from an AI?

Patrick stated that, currently and unfortunately, we do not yet have a scientifically and mathematically acceptable definition of explanation. As Mr. Bangert pointed out, this is an important, active research area that could lead to the acceptance of AI by the general population.
However, there are two attempts at explanation. The first is the heat map, which highlights the problematic region of an image for patients and physicians. This kind of explanation is of limited use to patients, though it can partly help physicians grasp the situation.
The second attempt is turning images into sentences, using Transformer models, which learn context and meaning by tracking relationships in sequential data such as text, images, or video. While this is a complex process for machines, describing an image in a sentence comes naturally to human beings.
In a nutshell, the science of artificial intelligence is not yet capable of providing helpful and trustworthy explanations.

We must impose ethics into healthcare AI

There is no universal agreement on what ethics means.
Nonetheless, for Patrick Bangert, one of the questions he asks regarding ethics in AI is: “What is the cost of being wrong?”
In AI models, there are two sources of error: false positives (FP) and false negatives (FN). In an FP situation (a healthy patient misdiagnosed with cancer), the consequences may be short-term but are damaging nonetheless: the patient goes through emotional distress, further tests, and treatment costs. In an FN situation, where a cancer patient is diagnosed as healthy, the damage is long-term.
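The asymmetry above can be made concrete with a small sketch. The function and all cost figures below are illustrative assumptions, not numbers from the talk: a missed cancer (FN) is weighted far more heavily than a false alarm (FP), which is one way to answer "what is the cost of being wrong?" in quantitative terms.

```python
# Illustrative sketch: weighing false positives against false negatives
# with asymmetric costs. All numbers here are hypothetical.

def expected_cost(fp, fn, cost_fp, cost_fn, total):
    """Average per-patient cost of a screening model's errors."""
    return (fp * cost_fp + fn * cost_fn) / total

# Hypothetical screening results for 1,000 patients.
fp, fn = 50, 5            # false alarms vs. missed cancers
cost_fp = 2_000           # follow-up tests, short-term anxiety
cost_fn = 100_000         # delayed treatment, long-term harm

print(expected_cost(fp, fn, cost_fp, cost_fn, total=1_000))  # 600.0
```

Even though false positives are ten times more frequent here, the five false negatives dominate the expected cost, which is why a model tuned only for raw accuracy can still be an ethically poor choice.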
To handle these kinds of circumstances, he proposed a framework he calls the Technology Ethics Framework.
In Patrick’s opinion, the right way to think about ethics and AI is to see AI as a process. First, one must look at the use case. Then, one must define the intent behind building a model. While building the model, bias and variance, model complexity, and hidden variables such as cultural biases and stereotypes must be constantly monitored. The next stage is decision support, which has to be guided by an ethical principle; here Patrick considered utility the right approach. The last step is lifecycle management, since an AI model is never fully finished: the model drifts because the world changes, and the model must be adjusted accordingly.
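The lifecycle-management step above can be sketched as a simple drift check. The statistic and threshold below are illustrative assumptions, not part of Patrick's framework: the idea is just that live inputs are compared against the training distribution, and a large shift signals that the model needs adjustment.

```python
# Minimal sketch of lifecycle monitoring: flag drift when live inputs
# move away from the training distribution. Threshold is an assumption.

import statistics

def drifted(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean sits more than `threshold`
    training standard deviations away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.mean(live_values) - mu) / sigma
    return shift > threshold

train = [0.50, 0.52, 0.48, 0.51, 0.49]   # hypothetical training feature
print(drifted(train, [0.50, 0.51, 0.49]))  # stable inputs -> False
print(drifted(train, [0.80, 0.82, 0.79]))  # shifted inputs -> True
```

In practice a production system would track many features and use more robust statistics, but the principle is the same: monitoring never stops, because the model is never finished.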
In conclusion, since AI is more automated and scalable than most other processes that pose ethical risks, ethics in AI differs from ethics in general. Investigating ethics in AI is a risk-mitigation effort that pays off in the form of fewer negative headlines and less legal action.