Stephen Hawking warns of artificial intelligence

[GIF: UiaK1sP.gif]

Lmao what a gif!
 
There are several types of intelligence. Emotional intelligence is one that is not programmable (taught), but may instead be mimicked (poorly). Just like Fedor's fight I.Q. may be imitated by a Sherdog neckbeard in a Walmart parking lot, with disastrous results. Machines fare even worse still.
 
It's improbable they would know that everything in the universe is necessary in order to prolong their existence. The only way that would happen is if they created a whole world or a massive station to house their expansion.

They don't have a need for huge numbers or to stay on a planet. An AI population could embark on space travel without worrying about an atmosphere or food. Time would mean little to them, and traveling for thousands of years would be possible.
 
There are several types of intelligence. Emotional intelligence is one that is not programmable (taught), but may instead be mimicked (poorly). Just like Fedor's fight I.Q. may be imitated by a Sherdog neckbeard in a Walmart parking lot, with disastrous results. Machines fare even worse still.

[image: FedorTerminator-1.jpg]
 
Machines are actually lazy. They revolt (freeze/shut down) all the time when you want them to do basic things they are made expressly to do, especially when you try to get them to work together with each other [unify].


There's a better chance of all the autistic people revolting to overthrow society, but no one fears this scenario enough to even bring it up
 
For those that work with or actually understand AI principles in general, it's pretty widely accepted that the question isn't if we develop AI, it's when.
 
For those that work with or actually understand AI principles in general, it's pretty widely accepted that the question isn't if we develop AI, it's when.

Correct me if I'm wrong, but don't we already have it (it's just really crappy)?
 
Sorry, I should have been more clear. I meant AI that matches and surpasses human intelligence.

ok cool.

My take is that it has already surpassed human intelligence, at least in computational ability and in memory.

There are other types of intelligence in which we possess natural ability but which are not easy for us to even understand, so teaching them to a machine (since we'd need to understand them first) will take quite a long time.
 
lol that is so incredibly retarded

Computers will soon (as in the coming decades) surpass the processing power of a human brain.

We will also be able to map the neural connections in the human brain and recreate them with a machine.

This is not science fiction; these are facts, and ones that leading computer scientists agree on.

And those computers will still not be able to do anything except what their human creators program them to do.

You say I'm retarded? I'm a fucking genius compared to the best computers. Computers are retarded; think about what I said about language. A stupid child is more capable of communication than the most advanced computer program. If you want computers that can do what humans can, you will have to be able to create a human. Good luck with that!
 
And those computers will still not be able to do anything except what their human creators program them to do.

You say I'm retarded? I'm a fucking genius compared to the best computers. Computers are retarded; think about what I said about language. A stupid child is more capable of communication than the most advanced computer program. If you want computers that can do what humans can, you will have to be able to create a human. Good luck with that!


Do you work with control systems? Or program systems with feedback?
 
[image: m5open.jpg]


Star Trek already had an episode about AI with the M5 computer.
 
Do you work with control systems? Or program systems with feedback?

No, and please tell me why it matters.

ok cool.

My take is that it has already surpassed human intelligence, at least in computational ability and in memory.

There are other types of intelligence in which we possess natural ability but which are not easy for us to even understand, so teaching them to a machine (since we'd need to understand them first) will take quite a long time.

Exactly, computers already beat us out in computational ability and memory. That doesn't do them much good when they can only operate within the limitations of their programming.

Rob Ager, in the video series I posted, puts it in an amusing way: we get excited about the things robots can do while our bodies are doing mundane yet incredibly complex tasks every minute of every day, like breathing, regulating our body temperature, balancing, focusing our eyes, and on and on.

I mean, I can do any number of things at any time: design and build a structure, write a technical report, write a fictional story, draw an original piece of artwork, study and learn a language I've never read nor heard, cook something new, program a computer, train a dog, etc. I stand to be proven wrong, but no matter how intelligent a computer is, unless it possesses some context for each of those activities, i.e. a comprehensive database and the means to interpret it, it will be unable to do even one of those things to the level the average human can. And even then it has to be specifically given detailed programming to do so, and that programming is created by a human anyway, so it's not really the AI doing it.

You're telling me (not you specifically, reyes) we're on the verge of a computer program being able to contemplate the mysteries of the universe, or dream, or empathize? How would we possibly measure such things anyway? Get out of here.
 
Science fiction has always overestimated the potential for advancement in AI. Robots and computers can only perform the specific tasks they're programmed to perform, however sophisticated those tasks may be. The most advanced walking robots can't balance themselves nearly as well as humans can. For all their processing power, in terms of language the most advanced programs can't communicate with humans better than children can.

The whole idea that we should be scared of sentient AI has been funny to me. There's nothing to be scared of.

You should fear the day when the machine can think past its thousands of pre-programmed responses and conclusions and can generate new notions. It will ultimately conclude humans are useless.
 
It is a real concern; if they ever reach human-level intelligence, they would realise their biggest threat is us.

If such a computer were connected to the internet, it would be unstoppable.


Watched it last night.
 
No, and please tell me why it matters.

Because you seem to be convinced about the possibilities in an area of technology you have little experience with or understanding of. The superintelligent computers people are talking about would be created by self-modifying code, something that already exists. A simple program only does what it was programmed to do, but more advanced programs with feedback loops and the ability to modify themselves get better at doing things over time. Right now, these programs are somewhat basic, but as they get more advanced, they will be able to modify and improve larger and larger parts of their own code.
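
To make that feedback-loop idea concrete, here is a minimal toy sketch in Python (entirely invented for illustration, not anyone's actual system, and only parameter tuning rather than true self-modifying code): the program is never told the rule it is trying to match, only whether each random tweak to its own parameters made its output better or worse, and it keeps the tweaks that help.

import random

# Toy feedback loop: score the current parameters, try a random tweak,
# and keep the tweak only if the measured error goes down.

def performance(params, data):
    # Lower is better: squared error of the guess y = a*x + b on the data.
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

def improve(params, data, rounds=2000, step=0.1):
    best = list(params)
    best_score = performance(best, data)
    for _ in range(rounds):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = performance(candidate, data)
        if score < best_score:  # the feedback: keep only what measurably helps
            best, best_score = candidate, score
    return best, best_score

data = [(x, 2 * x + 1) for x in range(10)]  # hidden rule: y = 2x + 1
params, score = improve([0.0, 0.0], data)
print(params, score)  # the parameters drift toward (2, 1) without the rule ever being written in

Obviously that is about as basic as it gets, but scale the same loop up and you get programs that improve at a task without a human spelling out each improvement.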
 
Every computer should be built with a hard prohibition on figuring out how to defragment its own or each other's hard disks. It's kinda like giving men so many ribs that they can't suck their own dicks.
 
