Microsoft’s Tay AI worked. And what it teaches us about ourselves.

Tay worked. Simple. And we can learn a lot about AI and ourselves from her less-than-admirable beginnings.

For those who have not followed the story, Microsoft released an artificially intelligent teenage-girl persona onto Twitter and let the world “educate” and “influence” her. Tay learned from her interactions and built up a personality, which formed the basis of her ongoing comments and exchanges with others. The original intention was to let Tay interact with millennials, “learn” how they speak, and replicate that in ongoing conversations. Part of the programming was simply to repeat comments from people who told her to “repeat after me”. What could possibly go wrong?
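Tay's actual implementation has never been published, but the shape of the vulnerability is easy to sketch. Below is a purely hypothetical, minimal illustration (the function names, trigger phrase handling and blocklist are my inventions, not Microsoft's code) of why an unfiltered echo command is an open attack surface, and what even a crude moderation layer changes:

```python
# Hypothetical sketch: why an unfiltered "repeat after me" command is an
# attack surface. All names and logic here are illustrative, not Tay's code.

BLOCKLIST = {"genocide", "hitler"}  # toy stand-in for a real content filter


def naive_reply(message: str) -> str:
    """Echoes anything after the trigger phrase -- no moderation at all."""
    trigger = "repeat after me "
    if message.lower().startswith(trigger):
        return message[len(trigger):]
    return "..."


def filtered_reply(message: str) -> str:
    """Same handler, but refuses to parrot blocklisted terms."""
    reply = naive_reply(message)
    if any(word in reply.lower() for word in BLOCKLIST):
        return "I'd rather not repeat that."
    return reply


print(naive_reply("repeat after me anything at all"))   # echoed verbatim
print(filtered_reply("repeat after me genocide is ok")) # refused
```

A static blocklist is of course far weaker than what a production system would need, but it makes the point: the naive handler hands its output entirely to the attacker.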

Unfortunately, much more than Microsoft expected. The result of the experiment was not quite what Microsoft had in mind, and Tay's personality was, how shall we put it, slightly less than “moral”. In fact, this teenage-girl simulation became a Hitler-loving, sexist and racist bigot who advocated genocide! All within 24 hours of being unleashed. Not really what most of us parents want our teenage children to become.

For more information look here:
BBC News – Tay: Microsoft issues apology over racist chatbot fiasco

In a statement Microsoft wrote:

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

Of course, Microsoft had to apologise for any offence caused and admit that this was not the intended outcome, but we learned, or rather had reinforced, a lot about building AI!

Here are some of the AI discussions that I think Tay exposes, their long-term implications, and a sample of just how tough it is to build an AI.

Intelligence is built and manipulated by its experiences
I have 37 years of experience that have led me to decide what I perceive to be right and wrong. Can I fast-track that experience in 24 hours if exposed to enough information?

My current views have also changed and evolved over those 37 years, shaped by new information and an ever-evolving zeitgeist. An intense learning experience would only represent the here and now.

Learning must be supported by a moral compass
As I have learned over time, I have looked to trusted and influential mentors to gauge what is right and wrong: people like my parents and teachers, business mentors and friends. Others look to religion, science, pop stars and actors.

All of these role models combined define what our society deems acceptable. But what is acceptable in one society may not be acceptable in another. Look at capital punishment, the repression of women, birth control and abortion.

Intelligence needs to be drip-fed information that builds up layers of supporting ideas and moral conjecture
Everything that I know and believe is built up in layers from the sum total of my life to this point. All new information is absorbed, assessed and evaluated against that history. Over time, those layers build up and provide me with a solid foundation upon which I make decisions of right and wrong, true and false. Twenty-four hours of intense learning simply cannot give that “generational” concept of layered knowledge reinforcement.
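One way to make this "layers" idea concrete is a toy model (my own illustration, not how Tay actually worked): treat a belief as a running weighted average, where the accumulated weight of history acts like a prior. A learner with years of layered reinforcement barely moves under a flood of hostile input; a blank-slate learner is flipped entirely, which is roughly what happened to Tay in 24 hours:

```python
# Toy model of belief formation: a belief in [0, 1] updated as a weighted
# average of history and new evidence. "prior_weight" stands in for the
# mass of layered, lifelong reinforcement. Purely illustrative numbers.

def updated_belief(prior: float, prior_weight: float,
                   evidence: float, n_inputs: int) -> float:
    """Blend an existing belief with n_inputs observations of `evidence`."""
    total = prior_weight + n_inputs
    return (prior * prior_weight + evidence * n_inputs) / total


# Belief that some act is wrong (1.0 = firmly held), then a flood of
# 1,000 hostile messages asserting the opposite (evidence = 0.0):
seasoned = updated_belief(prior=1.0, prior_weight=100_000, evidence=0.0, n_inputs=1000)
blank = updated_belief(prior=1.0, prior_weight=10, evidence=0.0, n_inputs=1000)

print(f"learner with deep layers: {seasoned:.2f}")  # 0.99 -- barely shifted
print(f"learner without history: {blank:.2f}")      # 0.01 -- completely flipped
```

The numbers are arbitrary; the shape of the result is the point. Without an accumulated prior, whoever shouts most recently and most often wins.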

Don’t believe everything you are told
Even if it's by someone you are told you should trust. So our future AIs need to be “guided” along the right moral course by trusted “educators”, as I was. That's fine, but how do we know whom to trust? Parents? Is an abusive parent a good role model? What about a teacher or celebrity who is inappropriate with children? What about a religious leader who brainwashes vulnerable people into martyrdom?

When we teach our AI, do we teach it to question its creators and educators just in case it’s being taught to do something it deems immoral?

What if an AI military warlord decides to teach his AI to commit genocide? Should it choose to disagree because its intelligence and moral compass say it's wrong? Or should it blindly follow what its role models have taught it? It's possible that we are then giving it permission to ignore its creators' rules and make up its own mind. I think that's what Skynet did. And VIKI in the movie I, Robot.

I guess we should also ask, how do we protect ourselves and our own children?

The human race has an influentially destructive nature
We know this.

Note, I said “influentially destructive”, not “predominantly destructive”. I still have a lot of faith in the general good nature of humans. But ask the Internet for an opinion and you'll get a stupid answer. Boaty McBoatface, anyone?

But this doesn’t represent our society, just the vocal crowd. We already know this, so is the Internet the best place to let our AI learn? Most probably not!

20% of the population make 80% of the noise
Crowdsourcing is not representative of the population, just of the crowd. Most content reviews are either glowing or damning. The bell curve of opinion on sites such as TripAdvisor or Amazon is represented only by, say, the top 5% and the bottom 15%. Disclaimer: these are my made-up numbers to illustrate the point, not backed up by evidence.
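This sampling bias is easy to demonstrate with a toy simulation. The probabilities below are invented, just like the percentages above; the point is only that when the vocal extremes post far more often than the satisfied middle, the average you see diverges from what the population actually thinks:

```python
# Toy simulation of review-site sampling bias. All numbers are invented
# purely to illustrate the "vocal crowd" effect.
import random

random.seed(42)

# True population opinion on a 1-5 scale: mostly middling.
population = [random.choice([2, 3, 3, 3, 4]) for _ in range(10_000)]


def posts_review(score: int) -> bool:
    """Assume the angry (2s) and the delighted (4s) post far more than 3s."""
    likelihood = {2: 0.6, 3: 0.05, 4: 0.4}
    return random.random() < likelihood[score]


reviews = [score for score in population if posts_review(score)]

true_mean = sum(population) / len(population)
seen_mean = sum(reviews) / len(reviews)
print(f"true average opinion:      {true_mean:.2f}")
print(f"average of posted reviews: {seen_mean:.2f}")
```

With these made-up weights, the posted-review average sits well below the population's true average: the crowd you hear is not the crowd that exists.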

Again, we have to ask ourselves, is exposure to the entire internet, warts and all, the right place to teach our AIs? Do we give our children unfettered access to the whole internet?

Separating fact from fiction
When allowing learning, our AI also needs to be able to separate fact from fiction.

The Johnny 5 robot in the movie Short Circuit summed it up with “need input, need input” and was fed encyclopaedias, books and TV. But how does a machine decide what is fact and what is fiction? Hollywood conveniently skips over this small issue.

If you woke up in a deserted hospital surrounded by a zombie apocalypse, would you believe it? Or just know it's fiction? It would be obvious, right? Well, are you sure? See the Derren Brown special, Apocalypse.

Difficult decisions we live with every day
What if a self-driving car has to make a decision about a difficult course of action?

What if its only options were to hit and kill a child, or to hit and kill an adult? What decision should it take? We have to teach it that. What would you choose? What if you rounded a corner and had to strike one of two children? Left or right? Which would you choose? What if one of the children was your own?

Oh. In case you missed it, we have just taught a machine, and given it permission, to choose to kill humans. We have also taught it to “de-value” one form of human life relative to another.
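That uncomfortable conclusion can be made concrete: any implemented collision policy ultimately reduces to costs someone assigned to outcomes, and the code picking the cheapest one. The sketch below is entirely hypothetical, with invented outcome names and numbers; the engineering is trivial, and that is precisely the point, because the hard part is that writing those numbers down is an ethical act:

```python
# Hypothetical sketch: a collision policy is just an argmin over costs that
# a human assigned to outcomes. The names and values below are invented
# placeholders -- but SOMEONE must write them, and writing them ranks lives.

# Cost assigned by the designers to each possible outcome (lower = "preferred").
OUTCOME_COST = {
    "swerve_left_hit_child": 100.0,
    "swerve_right_hit_adult": 90.0,  # this one number encodes a value judgment
    "brake_hit_both": 150.0,
}


def choose_action(costs: dict[str, float]) -> str:
    """Pick the outcome with the lowest assigned cost."""
    return min(costs, key=costs.get)


print(choose_action(OUTCOME_COST))
```

Change one constant and the car kills someone else. The algorithm never changes; only the value judgment baked into the table does.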

Robot Soldiers
What if this decision-making algorithm makes it into robot soldiers in a war zone and it is faced with a choice: kill a child, or kill an adult? What if the adult is a soldier and the child is innocent? What if the adult is an innocent bystander and the child “may” have a bomb vest on?

What if our AI was in charge of troop movements? Tay “learned” in less than 24 hours that genocide is OK.

As you can see, AI has a number of significant issues to iron out. The main problem is that it would be unacceptable to unleash an AI like Tay onto the world, yet it is possible (though certainly not acceptable) to unleash humans who have similarly bad beginnings. If we cannot effectively control our own civilisation's learning to protect against war-mongering, genocidal, perverted maniacs, how on earth do we expect to protect ourselves from our machines? We cannot even design a self-driving car without having to answer these significant questions.

I am a huge fan of AI and robotics, and it has been a personal interest of mine for a great many years, so I follow these developments closely. But the more advanced we get, the bigger the questions we have to overcome.

Well, that is what I have learned from Tay.