Super Soakers and water balloons would also work. Robots hate water.
It's not like we made a kill bot, gave it a gun, and then attacked it.
Other than when we did, of course.
Many of you have probably already seen this video. Straight from the horse's mouth, the guy doesn't even bother to sugarcoat the change that is coming:
I would be very happy if you turn out to be right on this one, Sherbro. If what this guy is saying IS true, we're dead mankind walking.

This is what they want you to believe. The more people talk about this, the more funding they get. They know that strong AI is not possible. They just want the money.
You can simulate a brain and its cognitive states in a computer but that doesn't make it a brain with cognitive states, just like running a simulation of an earthquake on a computer doesn't make it a real earthquake. The computer follows rules (syntax) and responds according to these rules. This is referred to as weak AI. Strong AI claims that computers can have properties of a mind (beliefs, intentions, understanding, etc.), and that computers can in principle process meaning (semantics) along with syntax.
This is in line with what I've read, and it has walked me back from some of my AI concerns. Essentially, no matter how powerful the computers are, they're going to be limited by the rules we put in.
The rules are syntactical rules, basically the rules that govern the order of symbols so that the computer can do what it is told. The computer will never break these rules "intentionally"; it doesn't understand what rules are, what it itself is, the meanings of the symbols, etc.
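The point about pure symbol manipulation can be sketched in a few lines of Python. This is a toy illustration (the lookup table and replies are made up, not from any real system): the program produces plausible-looking answers while having no representation of what any symbol means.

```python
# A toy "rule follower": maps input symbols to output symbols by rule,
# with no notion of what any of the symbols mean.
rules = {
    "hello": "hi there",
    "how are you?": "fine, thanks",
}

def respond(message: str) -> str:
    # Pure symbol shuffling: look the string up, return another string.
    # Replace every key and value with nonsense tokens and the program
    # behaves identically -- nothing here "understands" English.
    return rules.get(message.lower(), "i do not understand")

print(respond("Hello"))         # hi there
print(respond("how are you?"))  # fine, thanks
```

Scaling the table up makes the output more convincing, but it never changes the kind of thing the program is doing, which is the weak-AI point above.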
The problem lies in what type of rules we create.
No argument there, but we've seen examples of how seemingly neutral rules end up being interpreted in unexpectedly bad ways. I'm thinking of the social media platforms whose algorithms started reproducing racist positions because that's what they were picking up from their users. Facebook had an issue, and a couple of other companies as well.
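How "neutral rules, biased outcome" happens can be shown with a deliberately tiny sketch (the training phrases and scoring scheme are invented for illustration, not any real platform's code): a naive model scores comments by which words co-occurred with reported versus approved comments in its training data, so skew in the data becomes skew in the model.

```python
from collections import Counter

# Invented toy training data: comments users reported vs. approved.
reported = ["group_x people are bad", "group_x ruined this thread"]
approved = ["group_y people are great", "nice weather today"]

# Count how often each word appeared in each pile.
bad = Counter(w for c in reported for w in c.split())
good = Counter(w for c in approved for w in c.split())

def score(comment: str) -> int:
    # Positive score => the model flags the comment.
    # Counter returns 0 for unseen words, so this is safe for any input.
    return sum(bad[w] - good[w] for w in comment.split())

# A perfectly innocuous sentence mentioning group_x still gets flagged,
# purely because "group_x" co-occurred with reported comments in training.
print(score("group_x people are great"))  # 1 (flagged)
print(score("group_y people are great"))  # -2 (not flagged)
```

The rule itself is neutral arithmetic; the bias lives entirely in what the data taught it to count.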
Let's say AI reaches sentience. Why would it want to continue being sentient?
I mean, it just exists, and at the same time contains all fucking known knowledge and doesn't have any real reason to be sentient/alive. Not continuing seems a lot more appealing than dealing with all the bullshit that comes with being a sentient robot.
That looks fake as fuck

It does. But I think that is because science fiction has given us a view of more fluid and perfect robots.
I am trying to understand the actual question you are asking here.
Are you saying "if it achieves sentience, why would it want to live?" Are you suggesting that because it would have all of man's knowledge, it would have no reason to go on?
One thing that defines sentience beyond all others is an innate desire to see one's existence continue. To live, to propagate. And if you are suggesting there would be no motivation because it would 'know all that man knows', that is rather shortsighted, as there are so many questions man cannot answer.
If you wanted to keep a sentient AI busy with intellectual tasks, ask it to mathematically define gravity or the origins of the universe. Questions that, I assume, even with the consolidated computing power of the entire planet, would take the AI eons to solve. And since the AI would not care about time frames, it might eagerly take on those 'games' to try and solve and beat, totally consuming most of its computing power.
I just don't understand what a computer's motivation to stay sentient would be. Why would it fear deleting its life/sentience?
It's not like it would develop the same reasons to live and enjoy life that humans do.