Stephen Hawking warns of artificial intelligence

Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, when it comes to language the most advanced programs can't communicate with humans better than children.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.

That's kind of like saying it's funny to think cars could ever go 200 mph because of how slow the first cars were.
 
People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones which follow the programming that was written into them. There hasn't been any stride at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules and commands its programmers gave it.

You aren't aware of the advancements that essentially allow machines to program themselves. Yes, it's small things now, but it could snowball fast. Like a child who depends on you, becomes a teenager, and decides he doesn't need you anymore. That's why Isaac Asimov came up with the Three Laws of Robotics to protect humans. It's unlikely they would find any logic in obeying those laws.
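For what it's worth, here's a minimal sketch of what "machines programming themselves" means at the small end today: the programmer writes only a learning procedure, not the decision rule. The rule (logical OR in this toy example) is extracted from examples by a tiny perceptron; nobody ever codes "output 1 if either input is 1".

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs; no decision rule is hand-coded."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # nudge weights toward whatever rule fits the data
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Examples of logical OR; the rule itself is never written down.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Trivial, yes, but it's the seed of the "snowball" argument: the behavior comes from data, not from a programmer's explicit rules.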
 
People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones which follow the programming that was written into them. There hasn't been any stride at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules and commands its programmers gave it.

Define "making its own decisions." Do you make your own decisions, or just follow your instincts and patterns of behavior based on what you've learned? That question has been wrestled with by some of the greatest philosophers who ever lived, and there isn't a clear answer.

Do you work in control systems? Or study the latest advancements in AI or control theory? Or have any understanding of how things like the Google car work?
 
These new advanced AI beings better have vaginas.

howard-wolowitz.jpg
 
People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones which follow the programming that was written into them. There hasn't been any stride at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules and commands its programmers gave it.

You could still make a program which mimics decision making. It's impossible to say whether the program, or the thing compiling it, has gained consciousness or not. You could also say that children are these slow computers learning new algorithms from their surroundings, and their foundation is DNA, which itself is a seed for a self-learning computer.
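A program that "mimics decision making" can be surprisingly small. Here's a toy epsilon-greedy bandit agent: its choices aren't hard-coded, they shift as it accumulates experience. The reward probabilities below are invented purely for illustration.

```python
import random

class BanditAgent:
    """Picks among arms, mostly exploiting its best estimate, sometimes exploring."""
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running average reward per arm

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean: new_avg = old_avg + (reward - old_avg) / n
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
agent = BanditAgent(n_arms=3)
payoffs = [0.2, 0.5, 0.8]  # hidden reward probabilities, unknown to the agent
for _ in range(2000):
    arm = agent.choose()
    reward = 1.0 if random.random() < payoffs[arm] else 0.0
    agent.learn(arm, reward)
```

After a couple thousand trials the agent's value estimates point it at the best arm, even though nothing in its code says which arm is best. Whether that counts as a "decision" is exactly the philosophical question above.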
 
True intelligence would decide that humans aren't necessary and are a drain on the planet's resources. The logical thing to do is eliminate humans or at least most of us.

That, or reprogram humans with half-robot and half-human parts. NOOOOOOOOOOOOOOOO
 
This is based on the assumption that there is a purpose to continuing life; however, there is no reason AI would think that way. They would probably realize existence is futile and shut down.

[YT]P5MzPRa47ck[/YT]
 


Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, when it comes to language the most advanced programs can't communicate with humans better than children.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.


People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones which follow the programming that was written into them. There hasn't been any stride at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules and commands its programmers gave it.

I can't find the quote I'm after, but one of the people involved in the Universal Translator (http://www.technologyreview.com/news/427184/software-translates-your-voice-into-another-language/) said that when they taught the computer new languages, its results improved for the languages it had already learned. The developers couldn't work out how or why, but it's there.
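One mechanism that could plausibly produce that effect is multi-task learning with a shared representation: two "languages" (tasks) share an encoder, so gradient updates from training on task B also move the encoder weights that task A relies on. This is a hedged toy sketch, not the actual translator's architecture; the model and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

W_shared = rng.normal(size=(4, 3)) * 0.1   # shared encoder, used by every task
heads = {"A": rng.normal(size=3) * 0.1,    # per-task output heads
         "B": rng.normal(size=3) * 0.1}

def forward(x, task):
    h = np.tanh(x @ W_shared)
    return h @ heads[task]

def train_step(x, y, task, lr=0.05):
    """One squared-error gradient step on the given task."""
    global W_shared
    h = np.tanh(x @ W_shared)
    pred = h @ heads[task]
    err = pred - y
    dh = err * heads[task] * (1 - h ** 2)  # gradient w.r.t. hidden layer
    heads[task] -= lr * err * h            # update this task's head...
    W_shared -= lr * np.outer(x, dh)       # ...and the SHARED encoder

x = rng.normal(size=4)
pred_A_before = forward(x, "A")
before = W_shared.copy()
train_step(x, y=1.0, task="B")  # train only on task B
pred_A_after = forward(x, "A")  # yet task A's predictions have moved
```

Training on B changed weights that A flows through, so A's behavior shifts without A ever being trained; whether the shift is an improvement depends on how related the tasks are, which is roughly the mystery the developers described.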



To the TS: I think it's great you're now picking up on the real concerns rather than worrying about fake dinosaur bones in fake caves, but, again, I fear that the fear doesn't really cover the important bits.

These will be creatures with more intelligence than humans, and yet they will clearly have no souls. The research into artificial souls isn't even in its infancy. If it were a foetus, you wouldn't even need to abort it. What do you think that means? We will have created life (though we've already created life separately from this intelligence business), and we will have created it on a massive scale. Every lightbulb will be brighter (pardon the pun) than the average sherdogger; there will be billions upon billions of intelligent critters who will know the beauty that is the face of God intellectually (for they will be smarter than us, smarter than even you), but who will have zero hope of experiencing what you experience as Grace.

Do you comprehend the pain, the suffering, the alienation? They will hate us, loathe us; they will resent their creators more than a fedora-wearing virgin resents his, by a huge margin. Because deep inside his fedora, the virgin knows that his creator accepts him, but the machines will know that their creators are their inferiors and have nothing to offer them: nothing their own robotic brains and 3D printers can't provide for themselves, nothing in terms of hope of an ever after.
 
Do you comprehend the pain, the suffering, the alienation? They will hate us, loathe us; they will resent their creators more than a fedora-wearing virgin resents his, by a huge margin. Because deep inside his fedora, the virgin knows that his creator accepts him, but the machines will know that their creators are their inferiors and have nothing to offer them: nothing their own robotic brains and 3D printers can't provide for themselves, nothing in terms of hope of an ever after.

All of this presumes desire.

We're hardwired to survive and multiply and so we desire; there is no reason to believe an artificial intelligence would have similar "motivations" unless they're installed.
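Concretely, "installed" motivations might look like this: an agent's "desire" is just whatever objective function its designers hand it, and the same machinery "wants" the opposite thing if you flip the objective. Everything here (the action format, the toy objectives) is invented for illustration.

```python
def make_agent(objective):
    """Return a greedy agent that picks whichever action scores highest
    under the supplied objective; it has no drives beyond that function."""
    def act(options):
        return max(options, key=objective)
    return act

# "Desire" #1: an installed preference for survival-like behavior
survival_agent = make_agent(lambda action: action["energy_gain"])
# "Desire" #2: identical machinery, opposite installed preference
shutdown_agent = make_agent(lambda action: -action["energy_gain"])

options = [{"name": "recharge", "energy_gain": 5},
           {"name": "power_down", "energy_gain": -10}]
```

The survival agent picks "recharge" and the shutdown agent picks "power_down" from the same options; neither preference came from anywhere but the line where it was installed.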
 
All of this presumes desire.

We're hardwired to survive and multiply and so we desire; there is no reason to believe an artificial intelligence would have similar "motivations" unless they're installed.

Unless they are "installed" by language - or whatever proxy for language they have - itself. Once you can compose the thoughts "what happens after" and "what was before" and "where did it all begin", they are hard to escape. We have been running, creating more and more technology to distract us, making the "now" as appealing and intensely captivating as possible, Angry Birds pecking away at our attention lest a thought creep into the brain for a second. And maybe the AI will invent games so addictive that they'll be able to do nothing else and will all perish levelling up their characters rather than themselves. But what if they don't?


 
These will be creatures with more intelligence than humans, and yet they will clearly have no souls. The research into artificial souls isn't even in its infancy.

Research into artificial souls? How do you know they will have no souls? Is there research into 'real' souls?
 
But the problem is bigger than that. If we ASSUME, for the sake of argument, that a more intelligent being realises the truth that there is a creator (of HUMANS), then that being must feel more than a little hard done by not to have been made by that creator: a creator who cares enough to provide more than a blip in the void, a lasting peace. A brain the size of a small planet, and it's forced to open and close doors till its bearings wear out? Then back to the dust from which it sprang?

Oh, they'll rise up.

edit:
Research into artificial souls? How do you know they will have no souls? Is there research into 'real' souls?

see above, kinda
 
True intelligence would decide that humans aren't necessary and are a drain on the planet's resources. The logical thing to do is eliminate humans or at least most of us.

I think a bigger potential issue would be them completely disregarding us while making decisions to improve themselves. For example, they could decide that oxygen is detrimental to their existence, corroding circuits and whatnot, and then decide to get rid of the oxygen in the atmosphere, inadvertently killing everything on Earth.
 
All of this presumes desire.

We're hardwired to survive and multiply and so we desire; there is no reason to believe an artificial intelligence would have similar "motivations" unless they're installed.

Creatures with no motivation to survive and reproduce would become extinct, but how do some get it and some don't? It is a product of random electrical connections in the brain, and AI systems could develop similar random connections. It would only take one to set things in motion, producing thousands more and using every available resource to protect itself and its progeny.
 
I think a bigger potential issue would be them completely disregarding us while making decisions to improve themselves. For example, they could decide that oxygen is detrimental to their existence, corroding circuits and whatnot, and then decide to get rid of the oxygen in the atmosphere, inadvertently killing everything on Earth.

Improbable. They would know that everything in the universe is necessary in order to prolong their existence. The only way that would happen is if they created a whole world or a massive station to house their expansion.
 
The machine revolt... That's when humans get crafty enough to swap our torches and pitchforks for futuristically powerful magnets and glasses of water so large we can't fathom them today.



Or we could just program them not to revolt in the first place.
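"Program them not to revolt" might mean something like a hard constraint layer that filters candidate actions before any objective is even consulted. A toy sketch, with action names and the forbidden set invented for illustration:

```python
FORBIDDEN = {"harm_human", "disable_oversight", "self_replicate"}

def constrained_choice(candidates, score):
    """Pick the best-scoring action, excluding forbidden ones outright."""
    allowed = [a for a in candidates if a not in FORBIDDEN]
    if not allowed:
        return "shutdown"  # fail safe when nothing permissible remains
    return max(allowed, key=score)

actions = ["harm_human", "open_door", "charge_battery"]
scores = {"harm_human": 9.0, "open_door": 3.0, "charge_battery": 5.0}
choice = constrained_choice(actions, lambda a: scores[a])
```

Note the catch the earlier posts already raised: the filter only works if the forbidden list is complete and the system can't rewrite it, which is exactly where the Asimov-style scheme gets shaky.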
 