Samsung's VP of AI shares his insights on AI ethics in healthcare, among other topics
Samsung, one of the world’s largest and most widely-known electronics corporations, does more than just produce electronic devices.
It is also a pioneer in new technologies: its subsidiary, Samsung SDS, was established specifically to run its information- and communications-technology operations. In the field of AI and data science, Samsung SDS is a familiar name.
Wow AI recently had the honor of interviewing Patrick Bangert, the Vice President of AI at Samsung SDS.
You can watch the whole interview here to get a sneak preview of the topics Patrick Bangert will discuss at the Worldwide AI Webinar. If you haven't had the time to do so, this article outlines some highlights of our talk.
About the speaker
If you're not familiar with Patrick Bangert, physics, technology, and eventually AI have been his interests since his early days. After obtaining a physics degree, he worked as a theorist. At one point, he began to wonder whether the emerging computer technology of the time could help develop mathematical language models. That's how he got into the field and eventually rose to the position of Vice President of AI at Samsung SDS.
Biases in AI and User Data Collection
On the much-debated topic of bias in AI, Patrick Bangert states:
“There are systems where bias is not the problem, but the entire use case could be construed to be a problem. So personally, I think, for example, having an autonomous drone outfitted with weapon systems and an AI with the permission to fire whenever the AI thinks it's identified a target is morally objectionable, and has nothing to do with bias against people or against groups of people. This is biased against humanity in general.”
He also deemed user data collection a complex and important debate, and said that the best way for a global corporation to cope with it is to default to the strictest system.
AI in Healthcare
Discussing AI applications in healthcare, Mr. Bangert believes that AI is only complementary to human healthcare practitioners.
A machine's diagnostic accuracy can reach up to 99%, but it cannot explain why a patient has come down with a disease or what the next steps are. Current computer health systems are not able to provide responses that show empathy and build trust the way doctors can with their patients.
“So it necessarily needs to be a computer-human hybrid interaction that actually gets the problem solved. And I think this is where the real secret lies. Doctors and patients need to become familiar and comfortable with the fact that AI systems are now a helpful tool around the doctor's office. In just the same way that a stethoscope and a scalpel or a microscope are useful tools around the doctor's office. They do not substitute the doctor. They help the doctor in getting the job done.” - Patrick Bangert, VP of AI at Samsung SDS
Mr. Bangert will be sharing much more first-hand insight on explainability and AI ethics in healthcare at the upcoming Worldwide AI Webinar.
To engage in meaningful conversations with our keynote speakers, save your spot here: https://event.wow-ai.com/worldwideAI2022/