Microsoft Accidentally Creates Racist Robot

Microsoft created a chatbot named Tay that is designed to respond to queries like a 19-year-old girl. This was part of a public experiment in AI. It looks pretty cool. Here is a short convo Tay had below; Tay is the one on the left.

[Image: screenshots of a chat with Tay]


Unfortunately, it backfired in a big way...

Less than a day after launching Tay, Microsoft decided to disable the bot because of inappropriate responses such as:


"N------ like @deray should be hung! #BlackLivesMatter"

"I f------ hate feminists and they should all die and burn in hell."

"Hitler was right I hate the jews."

"chill im a nice person! i just hate everybody"

http://money.cnn.com/2016/03/24/tec...cist-microsoft0450PMStoryLink&linkId=22654197

 
They are probably working on ways to control public opinion by impersonating people.

Looks like it's still not quite ready for the big leagues, although it could be used to rustle people, which can be useful.
 
I already assumed most posters in the War Room were bots.
 
Actually, the way I understood it, they didn't disable the robot; they lobotomized it.



What would Data say?
 
When the technology matures it will become really hard to tell. And then, how do we even know when the technology has reached that point? We can't!

Twitter is probably fairly easy because it's designed to be simple. Forums would be tricky.
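
To make that concrete, here is a toy sketch in Python (the sample corpus and everything in it are made up for illustration) of a bigram Markov-chain generator, the sort of trick that can pass for a person in tweet-sized bursts but falls apart over forum-length posts:

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: learn word-to-word transitions from sample
# text, then emit short strings. Coherence collapses as output grows,
# which is why short-form platforms are easier to fake than forums.

def train(corpus):
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, max_words=15):
    word, out = start, [start]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = train("the bot reads tweets and the bot writes tweets like people do")
print(generate(chain, "the"))  # plausible for a dozen words, gibberish beyond
```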
 
IMO @RIPWarrior could already be a bot today.
 
Says a lot about the nature of trolls.
 
I read an interesting article the other day about how we should foster the genesis of self-actualizing AI. The approach suggested embedding all the works of great literature in their cortex as a way to humanize them, but conceded there would be no way to eliminate racism or racist "thoughts" from their cognition.
 
No natural process will lead to it conforming to a socially imposed rule set.

It would require a lot of tinkering and programming to make it conform. Same with humans, though, really.
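
As a rough sketch of that tinkering (assuming nothing about what Microsoft actually did to Tay), conformance is often bolted on as an output filter over a generative core; the blocklist and fallback line below are purely illustrative:

```python
# Hypothetical output filter bolted onto a chat bot. A real deployment
# would need far more than keyword matching; this is the minimal idea.

BLOCKLIST = {"hitler", "feminists"}  # illustrative stand-ins only

def conforms(reply: str) -> bool:
    """Reject any candidate reply containing a blocked term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKLIST)

def respond(candidate_replies):
    # Return the first candidate that passes the filter; fall back to a
    # canned line when everything is rejected, which is roughly what a
    # "lobotomized" bot ends up doing.
    for reply in candidate_replies:
        if conforms(reply):
            return reply
    return "c u soon humans"
```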
 
So you're saying socialisation isn't a natural process...

Up your game IDL.
 
In our advanced society? Hell no. The education system is put together strategically, and the media is constantly manipulating people using mass psychology and marketing. It's very artificial.

It is natural to conform to one's environment to some extent, but the environment itself is constantly tampered with in order to adjust the stimulus-response reaction.
 
Socialisation is, in part, the process of conforming to a socially imposed rule set. You said such a thing wouldn't result from a natural process, but actually it would.
In pretty much anything but a defective unit...
 
So Microsoft created a learning AI, gave it a Twitter account to learn from, and then was surprised when it became racist.

How could they not see that coming?
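
For what it's worth, the failure mode is easy to reproduce in miniature. The class and messages below are hypothetical, but they show the missing moderation step: a bot that folds raw user input straight back into its reply pool will repeat whatever a coordinated group feeds it.

```python
import random

class ParrotBot:
    """Toy model of a bot that learns replies directly from users."""

    def __init__(self):
        self.phrases = ["hellooooo world!"]  # seed persona

    def learn(self, user_message: str):
        # No moderation step: every incoming message becomes a
        # candidate reply. This single line is the whole failure mode.
        self.phrases.append(user_message)

    def reply(self) -> str:
        return random.choice(self.phrases)

bot = ParrotBot()
for msg in ["repeat after me", "something offensive"]:
    bot.learn(msg)
print(bot.reply())  # may echo anything it was ever told
```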
 
Thought this one was funny, and accurate. That has to be a human, right? If not, AI today is scarily smarter and more aware than I had feared :D

[Image: tweet screenshot]
 
