By David Churbuck
Throughout the history of computer science, the concept of an “artificial intelligence” has captured imaginations, from pioneers like Alan Turing, to science fiction writers and Hollywood directors, to critics concerned about the risks of the algorithms and models used to train the AI-powered applications now embedded in our cars, our smartphones, and most aspects of our professional and personal lives. Following the early breakthroughs and accompanying hype around speech and handwriting recognition forty years ago, AI and machine learning technologies appeared to stagnate in the 1990s and the first years of the new millennium, overshadowed by massive changes in the IT-computing paradigm: the Internet, web services and applications, the embrace of cloud computing, open source software, social networks, and the realization by most organizations that they had to shift to a digital-first business model if they wanted to meet their customers’ new technology-driven expectations.
Over the summer of 2022, Wow AI, a global provider of high-quality AI training data based in New York City, invited a panel of experts drawn from different industries and areas of expertise to share their insights into the current state of artificial intelligence and machine learning (AI/ML) and to discuss the factors that have accelerated the recent adoption of AI in applications ranging from self-driving cars, to the real-time transcription of online meetings, to the writing of news articles and the “reading” of x-rays to identify the early signs of heart disease. Whether it is telling Alexa to set a kitchen timer, the suggested text that anticipates what you will say next, or unlocking your cellphone with a thumbprint or a selfie, AI in its broadest definition has become ubiquitous, making the interface between ourselves and our devices easier, safer, and more accessible.
All the experts were asked about the factors that first made them interested in AI/ML and the trends emerging today that they find most intriguing and potentially impactful. All agreed that the past decade has been a Golden Age for AI, one made possible by the affordable availability of AI services delivered from the cloud and by the inexpensive power of graphics processing units, or GPUs, designed to handle the types of transforms and calculations at the foundation of AI models. As processors became more powerful and models were trained on massive datasets, familiar technologies such as handwriting and speech recognition improved in leaps and bounds. Self-driving cars became a familiar sight on the streets of Silicon Valley and Arizona. Healthcare, financial services, manufacturing… the embrace of AI-powered applications was well underway by 2015, the year one could argue will go down in AI history as the year Amazon put AI into the home with Alexa.
Seven years later, one of the experts who shared her thoughts on the state of AI today, Noelle Silver, recounts later in this article her experience as a member of the Alexa development team, and her early realization that the device’s greatest impact wasn’t on her so much as on her son and her father, who due to medical conditions were unable to use a device like a smartphone or PC without the power of a voice interface.
The Experts
● David Von Dollen, former head of AI at Volkswagen North America
● Patrick Bangert, VP of Artificial Intelligence at Samsung SDS America
● Noelle Silver, founder of the AI Leadership Institute, and Global Partner, AI & Analytics at IBM
● Aravind Ganapathiraju, VP of Applied AI at Uniphore
● Andreas Welsch, VP & Head of Market & Solution Management - Artificial Intelligence, SAP
All agree on the fundamental factors behind the current “Golden Age” of AI, but each has a unique perspective on the trends and issues emerging as AI pervades society, continuously improving the human-machine interface to the point where AI-enabled applications are becoming embedded in every aspect of our lives.
The five experts will share more insights, along with more than 20 other thought leaders in artificial intelligence and machine learning recruited from across the Fortune 500 and from organizations around the world such as Walt Disney, Deloitte, Microsoft, Oxford, the US Department of Commerce, and many others, during a two-day online discussion of contemporary AI and ML trends hosted by Wow AI on September 29-30.
In the weeks leading up to the event, each speaker spent some time sharing the inspiration and factors that led them to a career in AI/ML, the emerging technologies they are keeping an eye on for the future, and the barriers, risks, and opportunities that lie ahead for organizations that rely on AI/ML to power their applications and solutions.
Welcome and thanks for joining. Please tell us what you do and how you came to a career working with AI:
David Von Dollen (ex-VW): After I read Geoffrey Hinton’s work on artificial neural networks and deep learning I was hooked and wanted to learn more about AI, which I did studying data science at the University of Washington, then in my master’s at Georgia Tech, followed by my doctorate at Leiden University in quantum computing and AI. That path led me to Volkswagen, where I headed AI for North America.
Patrick Bangert (Samsung SDS): I have always been fascinated by how the world works; that’s why I graduated from university with a physics degree. There I learned that physics is really a conversation between experiment and theory. Physics always involves a formula, an equation expressing a model in mathematical language. I looked at all the newfangled computer technology around me and wondered: why can’t we use this to accelerate that? And that is exactly what AI is. After a career in process industries like oil and gas and chemicals, I joined Samsung SDS.
Noelle Silver (IBM): My dad raised me on the golden age of science fiction, so I’ve been talking to robots in my own mind since I was six years old! It was very interesting, and ironic, that after reading writers like Ray Bradbury and Isaac Asimov my career led me to work on something as significant as Amazon Alexa. My son was born with Down syndrome, and when my dad became ill and lost his ability to use a phone or computer, I knew as a technologist that there were important potential applications for technologies like Alexa for people like my son and my father.
Aravind Ganapathiraju (Uniphore): I’ve been in the field of speech recognition for over 25 years, so I would call myself a dinosaur. My PhD dissertation was an attempt to use one of the early machine learning breakthroughs, support vector machines, for speech recognition; I’m proud it was one of the first pieces of work in that area, and I’ve kept working in the field ever since, evolving into natural language processing (NLP) and other aspects of conversational AI. At Uniphore we provide cloud-centric products that enterprises use to gain efficiencies and insights from the conversational data generated by their contact centers.
Andreas Welsch (SAP): I first learned about AI back in ’08 or ’09 when I was studying for my bachelor’s degree. AI was an elective at the time, and I added the course because it was so far out there – something out of The Matrix, not very applicable to my work managing IT projects. It felt too futuristic and not very tangible back then. Looking back, I wonder how I could have been so wrong. Now it’s a fantastic and exciting space to be in, to be able to shape what we do with technology.
Fears about an AI going out of control – Skynet in Terminator, HAL 9000 in 2001: A Space Odyssey, autonomous assassin drones armed with bombs and picking out their own targets, or devices like Amazon Alexa, Google Assistant, and Apple Siri acting as microphones eavesdropping on us for Big Brother – have been associated with the concept of AI ever since Alan Turing proposed the Turing Test in the 1950s. More than seventy years later, Illinois has passed a law restricting facial recognition, and legislation is pending at the state and federal levels to regulate AI and review algorithms for signs of bias or the perpetuation of old models that could deny a person equal opportunity. Is this just more fear of the new? What is the risk that the gains of the past ten years could be reversed, or future developments hindered, by fear, baseless conspiracy theories, or over-regulation?
Andreas: I think if we look at people, at humanity as a whole, there has always been a fear of not being quite the pinnacle of evolution, that there is something that comes after us… You need to make sure that the people who are affected by the change in how they work are part of the process, that they are aware of why and how you want to introduce a piece of technology like AI, what its limitations are, and where it can help them become better and more effective. When that occurs, I’ve seen the strongest bonds and trust in new systems form.
Noelle: There are lots of examples of bad AI in the world, and threats to our privacy from devices a lot more threatening than Alexa. I mean, the average smartphone has 50 applications all trying to get permission to access the camera, the microphone, and our contacts. But in terms of unintended consequences, I’ve consistently been opposed to AI being applied to anything demographically oriented. Hiring bots trained to identify qualified candidates instantly start selecting the candidates most like past successful candidates. Guess what? They all look the same. Same schools. Same backgrounds. Those patterns would lead someone to say the model is working. But when you take a step back you see, “Wait. We have 10% women, which is exactly the same level we’ve had for the past decade.” Biases end up perpetuating bad behavior. Maybe the models need to be infused with some inclusivity.
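The failure mode Noelle describes is measurable. A minimal sketch of the kind of audit she is calling for, in Python with entirely made-up numbers and a hypothetical selection_rates helper, compares a screening model’s selection rates across demographic groups and applies the widely used “four-fifths rule”:

```python
from collections import Counter

def selection_rates(decisions):
    """Selection rate per group: candidates selected / candidates screened."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening-model output: (demographic_group, model_selected).
decisions = ([("A", True)] * 45 + [("A", False)] * 55
             + [("B", True)] * 10 + [("B", False)] * 90)

rates = selection_rates(decisions)
print(rates)  # {'A': 0.45, 'B': 0.1}

# Disparate-impact ratio; the common "four-fifths rule" flags values below 0.8.
print(min(rates.values()) / max(rates.values()))  # 0.22 -> audit the model
```

A ratio this far below 0.8 would not prove the model is wrong, but it is exactly the “step back” signal Noelle describes: the model is reproducing the historical selection pattern rather than finding qualified candidates.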
In the 1980s AI seemed to have potential in decision support systems and expert systems, but then it stalled, almost a victim of the term “artificial intelligence” itself, to the point that it fell out of vogue and was pushed aside by waves of big technological change: the Internet, ecommerce, social networks, and more. Then, almost overnight, it was in our cars, our phones, and our living rooms, to the point where we’re looking at autonomous vehicles, real-time meeting transcription, and, in the case of Aravind’s company, Uniphore, the analysis of customer interactions for tone and emotion, not just content. What happened to help AI get past the hype that surrounded it and deliver significant results after so many years of being ignored?
David Von Dollen: I would say two factors brought AI out of its “winter.” One was hardware: computing power, primarily in the form of GPUs, which have had a tremendous impact, as well as smartphones, which put devices in all our hands that can perform sophisticated image recognition. The other factor is the ongoing refinement of the underlying algorithms, many of which have been known for quite some time but were waiting for the computing power and infrastructure to catch up and make them usable. Geoffrey Hinton was exploring neural networks in the late 1980s; it just took thirty years for the technology to catch up and make them ready for the world.
Patrick Bangert: This renaissance of AI we are experiencing today is sometimes called the “Deep Learning Revolution.” Yes, some of it comes down to processing speed – we have the graphics processing units we didn’t have thirty years ago – but it’s not just about speed and saving time; that’s not very interesting. Speed is mainly interesting in the sense that it allows us to train much bigger models in the same amount of time. That’s where the benefit really comes in. The models for, say, handwriting recognition or speech recognition are just a lot bigger now than ever before, and therefore they’ve become better at performing those tasks. The second benefit is scientific: a lot of the headway in deep learning comes from the mathematics of AI gaining novel algorithms and novel modeling methods that are a step-change better than what we had in the 1980s.
Aravind Ganapathiraju: If you had the term “neural network” in a proposal or business plan in the 1980s or ’90s, you probably had no chance of getting funding. In the 2000s and beyond, the computing power caught up and AI became very accessible, kicking off a whole wave of engineers and scientists who used machine learning to solve all kinds of problems. The difference is accuracy. The first ASR (automatic speech recognition) systems had a 40% error rate. On the same task today we are pushing a 5% error rate. So we are literally almost there – I’m not saying it’s a solved problem, but it is indicative of the evolution that has happened over the last two decades.
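Error rates like the ones Aravind cites are conventionally reported as word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the system’s transcript into the reference transcript, divided by the reference length. A minimal sketch of the computation (the example sentences are invented):

```python
from typing import List

def word_error_rate(reference: List[str], hypothesis: List[str]) -> float:
    """WER = word-level edit distance / number of reference words."""
    rows, cols = len(reference) + 1, len(hypothesis) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i  # deleting all reference words
    for j in range(cols):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(reference)

ref = "set a kitchen timer for ten minutes".split()
hyp = "set a kitten timer for ten minute".split()
print(f"WER: {word_error_rate(ref, hyp):.0%}")  # 2 errors / 7 words ~ 29%
```

By this measure, a 40% WER system garbles roughly two words in five, while a 5% system misses about one word in twenty, which is why modern transcription finally feels usable.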
Let’s talk about the role data has played in helping AI/ML deliver on its promises. Data is abundant: we have more sensors collecting it, more storage to hold it, and the computing power to finally process it at acceptable speeds. Aside from strict laws governing the processing and storage of personal data and regulations to ensure data privacy, especially around patient data and other personally identifiable information, what should providers of AI-enabled products and services be thinking about when it comes to data?
Aravind: One of the latest products we have released at Uniphore is “Q for Sales.” We call it an “emotional intelligence” platform because it analyzes conversations by examining not just the tonal information in a call such as this one, but also the visual cues, giving insights into the engagement level of each participant by analyzing facial expressions and other visual signals. The fusion of different data types – going from audio to text, from video to facial expressions – provides a contact center agent with valuable insights and nudges to gain a better outcome from the call. The point being that it is the combination of different types of data being analyzed that will mark future approaches to collecting it.
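Uniphore has not published the internals of Q for Sales, but the multimodal fusion Aravind describes can be illustrated with a generic “late fusion” sketch: each modality is scored by its own model, and the scores are blended into a single engagement signal that can trigger a nudge. All names, weights, and scores below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-participant cues, each scored in [0, 1] by a separate model."""
    text_sentiment: float     # NLP model over the live transcript
    vocal_tone: float         # acoustic/prosody model over the audio
    facial_engagement: float  # vision model over the video frames

def engagement_score(s: Signals, weights=(0.4, 0.3, 0.3)) -> float:
    """Late fusion: a weighted blend of the per-modality scores."""
    w_text, w_tone, w_face = weights
    return (w_text * s.text_sentiment
            + w_tone * s.vocal_tone
            + w_face * s.facial_engagement)

participant = Signals(text_sentiment=0.7, vocal_tone=0.4, facial_engagement=0.3)
score = engagement_score(participant)
if score < 0.5:  # 0.49 here, so the sales rep gets a real-time prompt
    print(f"engagement {score:.2f}: nudge the rep to re-engage this participant")
```

The design point, as Aravind notes, is that no single modality tells the whole story: positive words delivered in a flat tone to a disengaged face reads very differently once the signals are combined.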
Andreas Welsch: While it’s become easier to build AI capabilities thanks to standardization and choices in how they are deployed, a byproduct of the Big Data trend of the early 2000s has been an influx of so many data points – from transactional systems, social media, weather data, sensor data of all kinds – that it’s no longer possible for one individual human being, or even a team of five or ten data scientists, to analyze everything at the speed, scale, and quality needed to make decisions in business today. By applying AI to the task, we’re able to detect patterns in the data that give you insights and allow you to automate certain parts of your business processes in a way that has never been possible before at such a level of scale.
To Aravind’s point, I also think we should be looking at different types of data – not only structured data contained in tables, or text stored in our systems, but unstructured data, from documents to audio recordings, images, and videos. There are just so many more data pools available to us, and now we have the tools to analyze them so much better and on a much larger scale than ever before.
Patrick Bangert: At Samsung we train all sorts of models, some for our own internal use. An example would be scrap detection in our manufacturing – a critical part of Samsung’s model, as the company manufactures all of its own devices in its own factories. If, halfway through the manufacturing process, a particular part fails for one reason or another, you want to detect it as soon as possible and scrap it from the process so you don’t add value to an item that is already broken. It’s the most valuable model in fabricating semiconductors, because making the wafers that eventually turn into chips is a very expensive effort, and a model that isolates wafers that were scratched or damaged as early as possible can potentially save hundreds of millions of dollars. Data also drives AI systems across the company that forecast how many people will buy a particular Samsung device, at which stores, and how to get inventory to those stores at launch. The trick is getting the amounts right so we don’t end up with warehouses full of unsold goods – but on the other hand, we don’t want to run out either. Our internal data is the fuel for those forecasting systems, data unique to our business and our success.
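The economics Patrick describes come through in a toy calculation: once a wafer is defective, every subsequent process step spends money on a part that will be scrapped anyway, so earlier detection directly reduces wasted spend. The stage costs below are invented purely for illustration:

```python
# Hypothetical cost added to one wafer at each successive process stage ($).
step_costs = [120, 250, 400, 800, 1500]

def wasted_spend(defect_step: int, detected_step: int) -> int:
    """Money spent on an already-defective wafer before it is pulled."""
    return sum(step_costs[defect_step + 1 : detected_step + 1])

# A defect occurs at stage 1. Compare detection at final test with an
# inline inspection model that flags the wafer one stage later:
print(wasted_spend(1, 4))  # $2,700 wasted if caught only at final test
print(wasted_spend(1, 2))  # $400 wasted if an inline model flags it early
```

Multiply the per-wafer difference by the volume of a modern fab and the “hundreds of millions of dollars” figure Patrick mentions becomes plausible.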
Noelle Silver: Google, Facebook, and Apple have all done very well by getting people to give them their data for free. I think Web3 is really forcing people to rethink data and go from giving up their data in exchange for free email or photo storage, to Web3’s approach that says, “You can use my data, you can even make money off my data, but there needs to be equity in understanding how the data is being used, and potentially even profit sharing with the people using that data for their gains.” I think it’s part of an ethical shift we’re seeing between the collectors of data and the sources of that data. Companies like Apple get this, and distinguish themselves by popping up a little dialogue that asks, “Do you want to share your real address, or would you like us to mask it for you?” That translates in my mind to more responsibility on the part of companies to be responsible, ethical stewards of their users’ data.
David Von Dollen: I focus a lot on what I call “narrow AI”: an algorithm that’s trained on a specific set of data to perform a specific task. That’s what a lot of our applications do today. It’s all pretty much pattern recognition, but within narrowly defined constraints. I think those types of applications present the risk of people using narrow AI in ways that may turn out to be harmful, much more so than some sentient AI taking over in a Skynet situation.
To get more insights and watch the full conversations with the experts who will be keynoting the Worldwide AI Webinar, please visit:
● Samsung SDS’s VP of AI discusses user data collection and AI ethics in healthcare (video)
● Uniphore’s VP of AI discusses the latest AI strategies and future innovations in conversational AI (video)
● SAP’s VP of AI addresses the current acceptance levels of AI and the need for AI literacy (video)
● IBM’s AI executive and founder of the AI Leadership Institute on AI ethics and benefits (video)
● Volkswagen’s former head of AI on the autonomous driving space and AI sentience (audio)
These five thought leaders and other experts from around the world will be taking your questions and discussing the issues and opportunities in AI and ML applications, training models, data sources, and other topics over the course of two days, September 29 and 30: https://event.wow-ai.com/worldwideAI2022/
David Churbuck is the founder and former editor of Forbes.com and a prize-winning technology journalist.