Stephen Hawking warns of artificial intelligence

rindan***

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity".

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues.

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
 
So, as a creationist, do you value Hawking's opinion even a bit?

War AI

Humans are filthy animals anyway.
 
 
So, as a creationist, do you value Hawking's opinion even a bit?

War AI

Humans are filthy animals anyway.

if this AI truly achieves the higher level of understanding they will inevitably come to a fact that God is good
 
if this AI truly achieves the higher level of understanding they will inevitably come to a fact that God is good

I think operating on faith-based logic kind of goes against what an AI does.
 


Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, in terms of language the most advanced programs can't communicate with humans better than children can.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.
 
I think operating on faith-based logic kind of goes against what an AI does.

lol, really? do you know what artificial intelligence means? i don't think so. maybe read up on it and get a beginner grasp of it and then come back to the conversation

Do you believe black holes exist?
 
Technology will be our downfall
 
Who will be our John Connor?
 
Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, in terms of language the most advanced programs can't communicate with humans better than children can.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.

lol that is so incredibly retarded

the computers will soon (as in coming decades) surpass the processing power of a human brain

we will also be able to map the neural connections in the human brain and recreate it with a machine

this is not science fiction, those are facts, and something that leading computer scientists agree on
 
They gon 3D print themselves dildos & pound us all into oblivion.
 
True intelligence would decide that humans aren't necessary and are a drain on the planet's resources. The logical thing to do is eliminate humans or at least most of us.
 
People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones that follow the programming that was written into them. There hasn't been any progress at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules/commands its programmers gave it.
 
Damn robots coming here and stealing our women.

True intelligence would decide that humans aren't necessary and are a drain on the planet's resources. The logical thing to do is eliminate humans or at least most of us.

Sigh. Could you be more emo? :rolleyes:

Define "necessary". Necessary for what? In the bigger picture, the entire planet is unnecessary.
 
True intelligence would decide that humans aren't necessary and are a drain on the planet's resources. The logical thing to do is eliminate humans or at least most of us.

This is based on the assumption that there is a purpose to continuing life; however, there is no reason AI would think that way. It would probably realize existence is futile and shut down.
 
The effort to advance artificial intelligence is quite organized, and safeguards have been discussed. The safeguards might not work, but the article is alarmist, as if no one has ever thought of it. Look up the technological singularity institute.
 