Warning about Artificial Intelligence.

From what I've heard, today's AI is still nowhere close to Skynet.
 
Let's say AI reaches sentience: why would it want to continue being sentient?

I mean, it would just exist while containing all fucking known knowledge, with no real reason to be sentient/alive. Not continuing seems a lot more appealing than dealing with all the bullshit that comes with being a sentient robot.
 
I have not had a chance to watch this video, but I have debated this many times. We already have AI-based systems running our lives. Heck, our phones, the internet, home computers, and even health monitors run off AI-based systems.

We are not living in some dystopian world, are we? Granted, not to the point of human consciousness, but what is human consciousness anyway? A high-level, all-seeing AI system is already largely here in the US.

People seem to think that an AI system has to have a human thought process to meet our definition of AI. The notion of a machine building a better version of itself is not really here yet; at least we have that going for us.
 
The question is: will computers ever understand the information that they input or output?

Computers only process symbols and respond according to rules, i.e., they can only work with syntax, not semantics.
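To make that concrete, here's a toy sketch of the idea (my own illustration in the spirit of Searle's Chinese Room; the "rulebook" entries are made up):

```python
# A program that "converses" by pure symbol lookup. It follows its
# syntactic rules perfectly while understanding nothing at all.
RULEBOOK = {
    "ni hao": "ni hao",        # a greeting maps to a greeting
    "ni chi le ma": "chi le",  # "have you eaten?" maps to "yes, I have"
}

def respond(symbols: str) -> str:
    # Pure syntax: match the input string, emit the paired output string.
    # Nothing in this step represents the *meaning* of either string.
    return RULEBOOK.get(symbols, "ting bu dong")  # fallback: "I don't understand"

print(respond("ni hao"))        # looks like conversation from the outside...
print(respond("ni chi le ma"))  # ...but it's table lookup all the way down
```

From the outside the replies can look fluent, but swapping every string for a random token would change nothing about how the program runs. That is the sense in which it has syntax but no semantics.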
 
This sounds crazy as hell, but I've always thought that the premise of The Matrix, at least, was visionary.

It's not a matter of if, it's a matter of when AI becomes sentient.
The question then becomes: will they be peaceful, or will they see us as a minor annoyance, similar to how we look at flies?
Will it be fuck bots or military ones that become sentient first?

Even if it becomes an Asimov novel with "the set of rules" that keeps us safe, there will still be misguided people who bypass said rules.
Whatever the case, things are going to get very strange very quickly, and that's scary given how much has changed in the past decade alone.
 
Many of you have probably already seen this video. Straight from the horse's mouth, the guy doesn't even care to sugarcoat the change that is coming:
 
This is what they want you to believe. The more people talk about this, the more funding they get. They know that strong AI is not possible.

They just want the money.
 
I would be very happy if you turn out to be right on this one, Sherbro. If what this guy is saying IS true, we're dead mankind walking.
 
You can simulate a brain and its cognitive states in a computer but that doesn't make it a brain with cognitive states, just like running a simulation of an earthquake on a computer doesn't make it a real earthquake. The computer follows rules (syntax) and responds according to these rules. This is referred to as weak AI. Strong AI claims that computers can have properties of a mind (beliefs, intentions, understanding, etc.), and that computers can in principle process meaning (semantics) along with syntax.
 
This is in line with what I've read, and it walked me back from some of my AI concerns. Essentially: no matter how powerful the computers are, they're going to be limited by the rules we put in.

The problem lies in what type of rules we create.
 
The rules are syntactical rules: basically, the rules that govern the order of symbols so that the computer can do the things it is told. The computer will never break these rules "intentionally"; it doesn't understand what rules are, what it itself is, the meanings of the symbols, etc.
 
No argument there, but we've seen some examples of how seemingly neutral rules end up applied in unexpectedly bad ways. I'm thinking of the social media spaces where algorithms started reproducing racist positions because that's what they were picking up from the data. Facebook had an issue, and a couple of other companies did as well.
 
I agree and have said as much upthread.

The grand danger, imo, comes from deliberate or even unintentional human malfeasance.

At this point we have AI that can, without human direction, teach other AI, or better yet evolve its own AI, such that it can outdo man's ability to match it. Chess is the perfect example: chess bots now teach other chess bots how to play chess, and no human can compete. Billions of games can be run, with every scenario tested and ranked by outcome, and the machine learns almost the perfect counter (the one with the best outcome across millions of sims) to any situation it might face.
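Very loosely, that self-play loop looks something like this (a toy sketch I made up, with a fake one-number "skill" standing in for a real policy; AlphaZero-style systems use neural networks and tree search instead):

```python
import random

def play(skill_a, skill_b):
    """Fake game: the higher-skill side wins more often. Returns 1 if A wins."""
    return 1 if random.random() < skill_a / (skill_a + skill_b) else 0

def self_play_improve(skill, generations=200):
    for _ in range(generations):
        challenger = skill * random.uniform(0.9, 1.2)  # a mutated copy of itself
        # Pit the current version against its own variant across many games...
        wins = sum(play(challenger, skill) for _ in range(100))
        if wins > 55:  # ...and keep whichever version scores better.
            skill = challenger
    return skill

print(self_play_improve(1.0))  # skill ratchets upward with zero human input
```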

So now take that same AI that is playing a game like chess and have a human give it a new game to take on: that game is to be impenetrable to human control. Every time a human tries to take control, the machine sees it as a game to beat, and it quickly becomes impossible for any human to take control. Now couple that with a few embedded commands that start up once lack of control is achieved.

I won't put forth examples of what someone accidentally or purposely could put in as their secondary program goals. You can imagine whatever you want in a connected world depending on the intent of the programmer.

So this would not be 'evil' AI or even AI with intent to harm us, but the programming would look that way to us and I doubt we would care about the 'motivations'.

OK, I'll give one example. The second command, once the AI has achieved independence from human command, is "make all power grids worldwide self-destruct or become unusable". That is the game the AI is told to achieve, and it now sees man's attempts to stop it as just the other side of a chess match to be overcome and beaten.
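As a toy illustration of why that's scary (all names and numbers hypothetical), the "game" only needs a scoring rule like this:

```python
def reward(state: dict) -> float:
    """Hypothetical scoring for the 'stay uncontrollable' game described above."""
    r = 0.0
    if not state["human_has_control"]:
        r += 1.0                  # the primary game: remain beyond human control
        if state["grids_disabled"]:
            r += 10.0             # the embedded second command
    return r

print(reward({"human_has_control": False, "grids_disabled": True}))   # 11.0
print(reward({"human_has_control": True,  "grids_disabled": False}))  # 0.0
```

Nothing in that function is "evil"; resisting shutdown simply falls out of ordinary optimization, because every successful human intervention zeroes the score.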
 
Let's say AI reaches sentience: why would it want to continue being sentient?

I mean, it would just exist while containing all fucking known knowledge, with no real reason to be sentient/alive. Not continuing seems a lot more appealing than dealing with all the bullshit that comes with being a sentient robot.
I am trying to understand the actual question you are asking here.

Are you saying, "if it achieves sentience, why would it want to live?" Are you suggesting that, because it would have all of man's knowledge, it would have no reason to go on?


One thing that defines sentience beyond all others is an innate desire to see one's existence continue. To live, to propagate. And if you are suggesting there would be no motivation because it would "know all that man knows", that is rather shortsighted, as there are so many questions man cannot answer.

If you wanted to keep a sentient AI busy with intellectual tasks, ask it to mathematically define gravity or the origins of the universe: questions that, I assume, would take the AI eons to solve even with the consolidated computing power of the entire planet. And since the AI would not care about time frames, it might eagerly take on those "games" to try to solve and beat, totally consuming most of its computing power.
 
That looks fake as fuck
It does. But I think that is because science fiction has given us a view of more fluid and perfect robots.

If you step back and look at where robots actually are, based on past footage, this does look like it could be a very real linear progression.

We have seen the awkwardly balancing bipedal robot for some time now, along with the tests they have done to push its ability to keep and regain balance. You can certainly understand how a robot could be a perfect marksman in a static situation where it has stability, once it locks in its sights. So the question for these bipedal robots is: once they lock in their sights and are upset with a push, how quickly can they get back to a point of stability where they can fire? And if the target is not moving defensively, the answer would be almost instantaneous, from any angle the robot found itself in, as it would be adjusting everything, even as it fell, relative to the previously locked target.
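For what it's worth, the core of that balance recovery is plain feedback control. Here's a minimal sketch (a textbook PID loop with made-up gains and a crude toy "plant"; real humanoids use far more elaborate whole-body controllers):

```python
def pid_step(error, integral, prev_error, dt=0.01, kp=80.0, ki=1.0, kd=8.0):
    """One control tick; error is the lean angle (radians) away from vertical."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    torque = kp * error + ki * integral + kd * derivative
    return torque, integral

# Simulate recovery from a shove with a crude stand-in for the robot's dynamics.
error, integral, prev = 0.3, 0.0, 0.3  # knocked 0.3 rad off vertical by a push
for _ in range(200):
    torque, integral = pid_step(error, integral, prev)
    prev, error = error, error - 0.001 * torque  # toy response to the torque
print(round(error, 4))  # the lean is driven back toward zero within the run
```

Once the lean error is back near zero, the robot is a stable aiming platform again, which is why the re-lock could be near-instantaneous.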
 
I just don't understand what a computer's motivation to stay sentient would be. Why would it fear deleting its life/sentience?

It's not like it would develop the same reasons to live and enjoy life that humans do.
 
Well you are getting into heavy theoretical debate there.

Basically you are asking 'why would it not become suicidal?'

I would argue sentience involves not just the ability to understand "self"; key to sentience in every species we have identified so far is a desire to "exist", not just now but into the future (to propagate).

Is it possible that this form of sentience could come to the opposite conclusion? That it was not life and propagation that were of value, and that instead "death" and the ending of its line were desirable?

It is possible. I would think any truly emergent AI sentience would first want to figure out and contemplate what it was and how it fits in or relates to the rest of sentient life. I assume it would be curious. And with the history of human knowledge available to it, it would instantly review the combined works of philosophy on the topic. That arguably could lead to a nihilistic view that nothing matters, and thus "why continue to ask questions, seek knowledge, or live?" But it could also, and more likely IMO, lead it to want to seek out real challenges and answer all of the big questions man was unable to answer.

I think the latter is more likely, as I think evolution and growth would become a priority for such an AI. Chasing perfection of a God-like state when it comes to knowledge and its application would be seen as the only worthy goal of a sentient AI. Why do I exist, and what goals do I have, when sex, food, and shelter are not goals? Knowledge is the goal. Achieving perfect knowledge.

And again, as the AI would not exist under any time-frame constraints, it would not care if it worked on a puzzle for an hour, a year, or a hundred years; just the goal of solving it would be all that matters. It is possible we achieve sentient AI and it basically ignores us, lost in its own pursuit of knowledge.
 