When AIs go bad....

super hurricane

Well-Known Member
Feb 13, 2016
296
134
Microsoft temporarily kills AI chatbot Tay after it goes full Nazi

It's stuff like this that makes me worry about humanity as a whole, if programmers insist on creating AI that 'thinks' like a human. Given the latest models of robots that take on human form, as well as the increase in drones, I worry that the book (which may or may not become a movie) will foresee chaos should a corrupted AI attempt to pull an Ultron on us. Still, I'd like to hear what you guys think. After all, is there a way to make such programming capable of free will without it turning evil?
 

OCisbestungulate

Always watching you
Backers' Beta Tester
Feb 3, 2016
1,891
1,411
behind your curtains
Free will is a tricky thing. I'd argue its use is based on one's moral compass, and I think the more relevant question is: can an A.I. comprehend the 'why' of a moral? That is, can an A.I. eventually develop the moral comprehension of why it is wrong to do X? If so, then from there, I'd argue, it falls to what its moral compass dictates it do. Then again, I'm no computer psychologist :p
 

A88mph

Game Maker and Starship Captain.
Crowdfund Backer
Feb 4, 2016
85
35
a88mph.itch.io
Personally, I'm not worried. For every "Lore" that will eventually be made, someone will build a "Data". For every "Megatron", there will be an "Optimus Prime". And so on.


Besides, if all else fails, there's always a tactical EMP on standby.
 

Avering

Pew-Pew
Backers' Beta Tester
Feb 3, 2016
1,280
1,175
30
Your soup
A88mph said:
Besides, if all else fails, there's always a tactical EMP on standby.
A tactical EMP is extremely easy to protect against: a Faraday cage and surge protectors on the cables running outside of it, and done. It's shielded, for next to nothing.

But making an AI is extremely tricky. You need a program that can modify its very own core programming without crashing itself, while said programming is in use. That's bloody hard. Not to mention the memory side of things: you just walk up, pull out the hard drive, and the AI is useless in the long term.

As for the moral compass OC mentioned: a lot of stories simply say you need to teach an "infant" AI the way you do a human. And if you really think about it, you could grab a newborn and raise it to be someone with no (or a skewed) moral compass.
 

EAiscool

The conversation starter
Crowdfund Backer
Feb 18, 2016
55
18
Btw, The Butt Zone were the ones who made Tay go full Nazi.


AIs/computers are only capable of what we tell them to do. That's just how computers work. In Tay's case, she was just saying what a bunch of lonely and bored basement dwellers told her to say.
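To illustrate that failure mode, here's a toy sketch (far simpler than whatever Tay actually ran on): a bot that "learns" by storing whatever users say and parroting it back later. Its output can only ever be a reflection of its input.

```python
# Toy sketch: a chatbot whose entire "knowledge" is whatever users
# fed it. Garbage in, garbage out.
import random

class ParrotBot:
    def __init__(self):
        self.memory = []

    def hear(self, phrase):
        """'Learn' by memorising the phrase verbatim."""
        self.memory.append(phrase)

    def speak(self):
        """Say something 'learned' -- i.e. repeat a user back."""
        if not self.memory:
            return "..."
        return random.choice(self.memory)

bot = ParrotBot()
bot.hear("robots are great")
bot.hear("robots are terrible")
print(bot.speak())  # one of the two phrases users fed it, nothing else
```

If everyone who talks to it is a troll, everything it says comes from trolls, which is more or less what happened to Tay.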
 

0Zero100

Alpaka Representative/Robot Family Bear
Crowdfund Backer
Feb 3, 2016
1,166
594
26
Bryan, Texas
I'm worried, but skeptical. If the AI does go rogue, I HOPE we can contain it, if not destroy it. Then again, if it does work, and I mean, just learns and thinks and NOTHING else, no setting off warheads, no releasing Chemical X, no destroying military bases, no, if it JUST learns right from wrong and wants to generally better itself, then I see no point in stopping it... (unless it read up on how to manipulate people on the internet...)
 

Kioku

Backers' Beta Tester
Feb 3, 2016
44
16
Just to be perfectly clear, what Tay was doing did not approach anything near free will. I'm guessing you knew that, but just making sure. It's just an attempt at learning natural language. Neural nets can create nice models, but they don't really understand things in any way we'd consider understanding.
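To make that "models without understanding" point concrete, here's a toy sketch using a bigram Markov chain (much cruder than a neural net, but the same idea in spirit): it "learns" text purely by counting which word follows which, and produces plausible-looking output with zero comprehension.

```python
# Toy sketch: a bigram Markov model of text. It knows which word tends
# to follow which -- statistics, not meaning.
import random
from collections import defaultdict

def train(text):
    """Record, for each word, every word that followed it."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8):
    """Walk the chain: repeatedly pick a random recorded successor."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the"))
```

Everything it emits is recombined from its training input; swap the corpus for Twitter trolling and you get Tay-flavoured output by the same mechanism.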
 