IBM’s Award-Winning AI Executive Discusses Ethical and Unethical Use of AI
Even though artificial intelligence can have a substantial positive impact on many areas of our lives, its inappropriate and unethical use has become a hot topic in recent years.
 
To address this pressing matter, Wow AI has invited Noelle Silver, a well-known champion for using AI responsibly, to join the discussion. Once part of the Amazon Alexa development team, Noelle has witnessed first-hand the drastic change AI has brought to the world and how some businesses have abused the new technology for their own agendas.
 
In an eye-opening pre-Worldwide AI Webinar interview, Ms. Silver discussed current ethical and unethical applications of AI in detail with former Forbes Editor-in-Chief David Churbuck.
 
Watch the whole interview here.
 
Read on to find out the insights she has to share. 
 
 
 

About the speaker

Noelle Silver is a multi-award-winning technologist and entrepreneur, founder of the AI Leadership Institute, and an AI Executive at IBM. She has led teams at NPR, Microsoft, IBM, and Amazon Alexa, and is a consistent champion for tech fluency and public understanding of the ethical use of AI.
 
She was recently awarded the Microsoft Most Valuable Professional award for Artificial Intelligence and VentureBeat’s Women in AI Responsibility and Ethics award. 
 
Noelle’s interest in AI started at the age of six, as she grew up in the golden age of science fiction. She read Asimov and Bradbury for decades before working on Alexa, and that passion for science fiction led her to a career in AI.
 
Noelle’s son, who was born with Down syndrome at a time when people thought such a child could not survive, and her dad, who lost the ability to use electronic devices after an accident, were at the center of her work while she was developing Amazon Alexa.
 
“I don't know what it's going to be, but everything I built on Alexa, I built with those two use cases in mind.”

Ethical applications of AI

Noelle Silver believes that smartphone apps with varying privacy policies asking for unrestricted access to your microphone and camera are more threatening than Alexa. Amazon, in fact, simplified its privacy and data-usage language and was transparent about using customers’ data to help them make better choices and live more convenient lives.
 
“Amazon in the early days [...] said we're going to make your life more convenient, but we need your data to do it. We were very transparent in the fact that we were doing it. But I don't think the understanding of the world using that technology was at the level that they could even really conceive of.”
 
Noelle then observed that Web3 is making users rethink data, shifting the question from “What are you doing with my data?” to “You can use and even make money on that data. But there needs to be equity in understanding and even profit sharing potentially with the people you're using that data for.”
 
Apple, as she mentioned, has listened to its audience. Now, whenever you sign up on Apple’s website, a small dialog pops up offering to mask your email address for you. Noelle dubbed this “ethics at its finest” and declared that more businesses should take responsibility for ensuring responsible AI.
 
 

Unethical AI examples

Meta’s BlenderBot is a prime example of unethical AI, according to Noelle Silver. 
 
BlenderBot is a conversational AI prototype that shows Caucasian males when you ask it for examples of CEOs, and suggests homemaker as the second most likely profession for women.
 
Then there was Amazon’s use of AI in hiring. The company created a hiring bot, and it instantly started finding candidates who resembled its past successful candidates, all of whom looked the same, sounded the same, and went to the same schools. This eventually resulted in an imbalance between male and female new hires.
 
As someone who has always opposed using AI for anything demographically oriented, Noelle believes this is where companies should take a step back and ask themselves whether it is ethical to do so, and whether infusing some inclusivity into the data could keep the model from simply training itself on biased patterns.
 
At the upcoming Worldwide AI Webinar, Noelle Silver will dig deeper into this topic as she discusses building responsible applied AI solutions at scale.
 

 
Grab your free spot: https://wow-ai.com/event
 
Watch the whole interview here.