AI Experts from IBM, Samsung SDS, Thomson Reuters, GridMatrix, and Oxford Brookes Debate the Future of AGI
The last session of the Worldwide AI Webinar was a long-awaited and eye-opening one!
Moderated by award-winning tech journalist David Churbuck, founder of Forbes.com, this open debate on the future of AGI featured Noelle Silver, AI executive at IBM; Patrick Bangert, VP of AI at Samsung SDS; David von Dollen, VP of Data Science at GridMatrix; Fabio Cuzzolin, professor of AI and director of the Visual AI Lab at Oxford Brookes University; and Shirsha Ray Chaudhuri, director of engineering at Thomson Reuters.
Read on for the highlights of this session.
The real meaning of artificial general intelligence
Patrick Bangert started the conversation by sharing his thoughts on the meaning of AGI. In his view, for most people AGI means AI that has acquired general skill sets comparable to those of a human being, such as talking, driving, or observing. Whether such a system is conscious or sentient, however, depends on how one defines consciousness and sentience. Moreover, he argued that the main concern for most people is whether AGI would be harmful.
“My personal opinion is that it's a very nice concept to talk about general intelligence, but we are so far away from having it that it's hardly even worth discussing how to get there, or what it would mean. It's a nice dream for media, but that's really all it is. What we have today is narrow AI.” - Patrick Bangert, VP of AI at Samsung SDS
David von Dollen agreed with him and added:
“We don't have enough answers around human sentience to really define what sentience means for a strong AI. However, as we endeavor towards artificial general intelligence, we can create systems that augment human intelligence with weak AI. And I think the really interesting follow-up question to that is like how are weak AIs augmenting human intelligence today for what tasks and for what outcomes.”
Noelle Silver had a different perspective:
“I kind of think of AGI the same way that it might be to my user, that they are talking to an AGI because it will seem to them to be this all-knowing assistant, this all-knowing bud in their ear. But the reality is, it's not one model that knows all the things. It's hundreds of thousands of intelligent agents working together, and I do see a world where that's possible. [...] But I do see the perception of that being real for people that contextual understanding that gives me access to any intelligent agent on the planet that can serve my needs in the context and time frame that I need it. I don't think we're a lifetime away from that type of interaction.”
The obstacles to achieving AGI
When asked what the obstacles were for AGI to be practically available, Patrick responded:
“One of the main obstacles is really knowledge of the world. So if you take a standard chatbot with the most advanced language models, the GPT-x family of models, if you have a casual chat with it like how you're feeling and things like that, the outputs are perfect. But if you ask it pointed questions that require knowledge of arithmetic, that require knowledge of the relative size and weight and gravity and how the world works and that certain things are food and others are not, it fails miserably. It is absolutely worse than even the youngest child that can communicate with you.”
That leads to the next obstacle: marrying the logical-reasoning strand of artificial intelligence with the parameter-based learning of neural networks into a coherent form that offers both conversational ability and logic.
Fabio Cuzzolin built on that, saying the first step toward AGI would be formalizing the concept and the problem. He also noted that, at the very least, AI would need a time-variant form of machine learning to mimic the kind of evolving intelligence that characterizes humans.
How can an AGI become applicable commercially?
Patrick answered this question by saying that if we did have AGI, every commercial use case would be solved.
Shirsha Ray Chaudhuri talked about the societal impact of an AGI, arguing that it could make the world more inclusive for the disabled, the elderly, and people who have a hard time understanding social dynamics.
David Churbuck then raised the SkyNet fear, noting that AGI could take away some professions’ livelihoods. Patrick responded:
“The problem is that if AI removes some jobs but not others, you have a kind of a two-class society. You're left with people who do have jobs because their field has not yet been taken over and other people who don't have jobs. If we have AGI and nobody has a job, then it's a different story because now you can revolutionize the economy overall. You could make sure that everybody has a universal basic income, which is being discussed anyway. Once you have that, if everybody is completely assured of having a relatively nice life without having to do any physical labor for it, then you do have the possibility of having some pretty utopian and beautiful visions of the future.” - Patrick Bangert, VP of AI at Samsung SDS