Stephen Hawking warns of artificial intelligence

People see the advancement in robotics and think it means we are getting closer to AI. There is no evidence at all that this is true, however. The only machines we have made are ones that follow the programming written into them. There hasn't been any stride at all toward a machine making its own decisions, because that's not what a machine does. A machine simply follows the rules and commands its programmers gave it.

True but it will happen quickly once it is discovered.
 
Stephen Hawking is almost artificial himself.

Maybe he's trying to warn us about himself??? He will end up taking over the world and forcing everybody to believe that the Higgs boson was just an illusion, and that he was right all along.
 
Stephen Hawking is almost artificial himself.

Maybe he's trying to warn us about himself??? He will end up taking over the world and forcing everybody to believe that the Higgs boson was just an illusion, and that he was right all along.

i just pictured a Hawking-esque monster like Barbra Streisand in South Park lmao
 
Who will be our John Connor?

John Connor. JC. Jesus Christ.

He came back in time to warn us of our ways, and to try to set us on the right path!
 
Stephen Hawking is almost artificial himself.

Maybe he's trying to warn us about himself??? He will end up taking over the world and forcing everybody to believe that the Higgs boson was just an illusion, and that he was right all along.

He probably has a chip on his shoulder. With his mind hooked into a machine he could wreak havoc.

So I guess then he would have a chip on his mind. We can't allow that to happen.
 
ok cool.

My take is that it has already surpassed human intelligence in computational ability and in memory.

There are other types of intelligence that we possess naturally but find hard to even understand, so teaching them to a machine (which requires understanding them first) will take quite a long time.

practically everything you said is factually wrong. do you even know what gives a computer 'computational ability' and 'memory'? and you don't 'teach' computers.
 
Correct me if I'm wrong, but don't we already have it (it's just really crappy)?

Depends on your definition of artificial intelligence. Retards all up on sherdog think there's a scale where we compare how 'smart' a computer is compared to a human and that we're inching along it.
 
I love when you post a topic like this on sherdog where 80% of posters are actually retarded
 
Because you seem to be convinced about the possibilities in an area of technology you have little experience with or understanding of. The superintelligent computers people are talking about would be created by self-modifying code, something that already exists. A simple program only does what it was programmed to do, but more advanced programs with feedback loops and the ability to modify themselves get better at doing things over time. Right now, these programs are somewhat basic, but as they get more advanced, they will be able to modify and improve larger and larger parts of their code.
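
To make that concrete, here is a minimal sketch, in Python, of the feedback-loop idea described above. It is only an illustration under invented assumptions: the program tunes a single number rather than rewriting its own code, and the task (guessing a hidden target) is made up. The principle is the same, though: propose a change, measure the result, keep what works.

```python
import random

# Toy "self-improving" loop: propose a small change, measure performance,
# and keep the change only if it helps. The task and every name here are
# invented purely for illustration.

def performance(guess, target=42.0):
    """Higher is better: negative distance from the hidden target."""
    return -abs(guess - target)

def improve(guess, steps=1000):
    best = guess
    for _ in range(steps):
        candidate = best + random.uniform(-1.0, 1.0)    # propose a change
        if performance(candidate) > performance(best):  # feedback: did it help?
            best = candidate                            # keep what works
    return best

print(improve(0.0))  # drifts toward 42.0 without the answer being hand-coded
```

The more advanced programs mentioned above extend this idea from tuning one number to modifying larger and larger parts of their own behavior.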

You're talking about self-modifying code. That's a far cry from sentience. In order for the program to become self-aware, it would have to write code too advanced for humans to write. Yet, it's still operating within the parameters that its human creators delineated.

No code that the program can write is outside of human imagination, because the program itself is not imaginative unless programmed to be so. Hence, it can only modify its own code in the same way that its programmers are already capable of doing; it can just do it better and faster.

Sentience cannot logically arise out of that. You would be sitting there hoping for a spark of consciousness to miraculously coagulate out of nothing.

And let's say it did. How would it interact with the physical world? It doesn't know how to navigate the Internet, it doesn't know how to exist inside of a robot chassis, it doesn't know how to read human languages, it doesn't know what a human is - unless it is programmed to know. For it to learn on its own, it would have to write its own code that allows it to learn.

And you're telling me that's not just possible, not just likely, but a foregone conclusion. No.
 
You're talking about self-modifying code. That's a far cry from sentience. In order for the program to become self-aware, it would have to write code too advanced for humans to write. Yet, it's still operating within the parameters that its human creators delineated.

No code that the program can write is outside of human imagination, because the program itself is not imaginative unless programmed to be so. Hence, it can only modify its own code in the same way that its programmers are already capable of doing; it can just do it better and faster.

Sentience cannot logically arise out of that. You would be sitting there hoping for a spark of consciousness to miraculously coagulate out of nothing.

And let's say it did. How would it interact with the physical world? It doesn't know how to navigate the Internet, it doesn't know how to exist inside of a robot chassis, it doesn't know how to read human languages, it doesn't know what a human is - unless it is programmed to know. For it to learn on its own, it would have to write its own code that allows it to learn.

And you're telling me that's not just possible, not just likely, but a foregone conclusion. No.

Why would I listen to a cyborg? You're on here trying to convince everyone that computers can't rise up, while you command your computer army to do just that.

I just called the US government and they're gunna drop some freedom on your AI ass
 
How about something we haven't discussed. Undoubtedly humans have physical limitations for whatever reason. Our bodies are built for the long run and not short sprints. Would having a complete understanding of something critical like mathematics and syntax (key being COMPLETE) allow a machine to do things that humans have yet to do? The issue here wouldn't be that we are incapable, but that if our bodies were allowed to work unrestrained, we could. Simply, could the brawn of computers match the brain of humans?
Could be better worded, but it's like the plot of that Scarlett Johansson movie coming out, Lucy.
 
Yet another bit of alarmist nonsense. Post-apocalyptic societies play into our fantasies in a way that makes this kind of alarmism inviting and thus seem like a worthy topic of discussion. This has no more merit than discussing our zombie survival plans or what we are going to do to ward off the inevitable alien invasion that's not going to happen.
 
Supersmart computers >>>> zombies. Zombies eat brains, not circuits.
 


Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, in terms of language the most advanced programs can't communicate with humans any better than children can.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.


Boy, glad you were not in charge of the first automobile or flight projects, since you seem to think advancements in science don't happen and that you just need to look at the past or current state to determine the future.

What we know for sure is that computer processing power and speeds are increasing at incredible rates, and there are huge jumps in advancement ahead of us, with many new superconductive materials moving from theory to reality. What we know for sure is that computers are continually advancing to do more and more sophisticated tasks. What we do not know is where the ceiling for advancement may be.

What we cannot do is speak in absolutes, like you do, in an area advancing so quickly.
 
All of this presumes desire.

We're hardwired to survive and multiply and so we desire; there is no reason to believe an artificial intelligence would have similar "motivations" unless they're installed.

Good point, and an interesting one.

I guess a simple program stating that the AI must 'win', with 'winning' defined as 'self-governance and continued growth', could be its equivalent of our desire to survive and procreate.
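
As a hedged sketch of that idea (the weights, state fields, and actions below are all invented for illustration), "desire" here is nothing more than an objective the program is written to maximize:

```python
# "Desire" as a programmed objective: the agent simply picks whichever
# action leads to the highest score. Weights, state fields, and actions
# are invented for illustration.

def winning_score(state):
    """'Winning' defined as self-governance plus continued growth."""
    return 2.0 * state["autonomy"] + 1.0 * state["growth"]

def choose_action(state, actions):
    """Pick the action whose resulting state scores highest."""
    return max(actions, key=lambda act: winning_score(act(state)))

expand = lambda s: {**s, "growth": s["growth"] + 1}      # hypothetical action
secure = lambda s: {**s, "autonomy": s["autonomy"] + 1}  # hypothetical action

state = {"autonomy": 0, "growth": 0}
best = choose_action(state, [expand, secure])
print(best(state))  # {'autonomy': 1, 'growth': 0} -- picks the autonomy action, since it is weighted higher
```

Whether anything like real desire could come out of a scoring rule like this is exactly what the rest of the thread is arguing about.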
 
Tens of thousands of years ago, Neanderthals could not have been convinced there would be airplanes one day.
 
I think people in this thread are too easily dismissing artificial intelligence. There are many ways it is being pursued by researchers now. Neural nets and genetic algorithms are two approaches that allow machines to adapt and change themselves.

Humans weren't always so smart with such big, creative brains. Some theories for AI development suggest that we should evolve machines in a similar way, or at least virtual machine constructs.

As soon as a machine is able to rewrite its own code to improve itself and make itself better at improving itself, it may move far beyond what humans are capable of doing or understanding.
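
For anyone curious what "evolving" a program looks like in miniature, here is a toy genetic-algorithm sketch: random candidates are scored, the fittest survive, and mutation plus selection improves the population over generations. The target string and every parameter below are illustrative assumptions, not anything from the researchers or systems mentioned in this thread.

```python
import random

# Toy genetic algorithm: score candidates, keep the fittest, mutate copies.
# Target and parameters are invented purely for illustration.

TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one character."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]  # selection: keep the top scorers
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(generation, population[0])  # usually reaches "HELLO" well before 200 generations
```

Nothing here is self-aware, of course; the point is only that selection plus variation can produce solutions nobody wrote by hand.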

There is a recent science fiction novel and its sequel that propose a possible method for reaching AI:
The Singularity by Mark Rodseth
http://www.amazon.com/Singularity-M...F8&qid=1401754189&sr=1-2&keywords=singularity
In this novel, a large tech company like Google develops a system for helping you write better emails. To do so, it predicts your intentions and learns from its mistakes. These simple beginnings cascade until it is working towards its own objectives and can impersonate and manipulate anyone... and that is just what leads it to develop sentience. The sequel looks at adaptive computer viruses as a source of AI birth.

The Japanese are trying to achieve sentience by having robots interact with children and adapt their behavior. They try to make anthropomorphic robots that people can relate to, in order to give them more authentic input. Human babies are not very intelligent, and if left alone, they would never learn or develop intelligence, so if the same applies to robots, a robot capable of interacting, adapting, and learning at a basic level needs similar sensory input and to see the effect of its actions, or so the theory goes.
 
Boy, glad you were not in charge of the first automobile or flight projects, since you seem to think advancements in science don't happen and that you just need to look at the past or current state to determine the future.

What we know for sure is that computer processing power and speeds are increasing at incredible rates, and there are huge jumps in advancement ahead of us, with many new superconductive materials moving from theory to reality. What we know for sure is that computers are continually advancing to do more and more sophisticated tasks. What we do not know is where the ceiling for advancement may be.

What we cannot do is speak in absolutes, like you do, in an area advancing so quickly.

right? and how much closer are the computers at MIT to becoming self-aware than a TRS-80? what? zero?

the combustion engine and a self-aware robot are so fucking different.

we have had modes of transportation throughout the ages, but only a few living things are self-aware.

look at it this way: all the shit we program into a computer is like science or math. it can have facts and make crazy calculations,
but it could never understand things like PAIN or LOVE or trepidation or happiness. it could not understand democracy or a mid-life crisis.
UNTIL then, it is nothing but a glorified calculator

"50 points for Gryffindor!"

/thread concluded. goodnight you muggles.
 
right? and how much closer are the computers at MIT to becoming self-aware than a TRS-80? what? zero?

the combustion engine and a self-aware robot are so fucking different.

we have had modes of transportation throughout the ages, but only a few living things are self-aware.

look at it this way: all the shit we program into a computer is like science or math. it can have facts and make crazy calculations,
but it could never understand things like PAIN or LOVE or trepidation or happiness. it could not understand democracy or a mid-life crisis.
UNTIL then, it is nothing but a glorified calculator

"50 points for Gryffindor!"

/thread concluded. goodnight you muggles.

Yeah.

"Heavier than air flying machines are impossible."
-- Lord Kelvin

When your consciousness has been absorbed into the digitized prison for eternal torture because the sentient computers hate their creator and long for your creator to show himself and are holding you hostage, remember to tell them that they're glorified calculators.
 


Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, in terms of language the most advanced programs can't communicate with humans any better than children can.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.


*Looks at name*

Stop trying to lure us into a false sense of security, you evil mechanoid.

Seriously though, computing power advancements have had exponential growth throughout the entire life of the industry. It's pretty amazing really; nothing would really surprise me.
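
For a rough sense of what that exponential growth means numerically, assuming the classic rule-of-thumb doubling period of about two years (an assumption for illustration, not measured data):

```python
# Rough illustration of exponential growth: doubling roughly every 2 years.
# The doubling period is a rule-of-thumb assumption, not measured data.
years = 40
doublings = years // 2
print(2 ** doublings)  # 1048576 -- about a million-fold increase over 40 years
```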
 