Robot apocalypse will be on us soon

Don't worry, all we need for super smart robots to destroy themselves is to invent a super sexy fembot.
 
That's because you're still imagining the dumb machines that exist today. Yes, all current machines just follow simple commands programmed into them, but research at the frontiers of this science is already blurring the lines. Once we cross that threshold (conservative estimates give ~20 years), machine capability and intelligence will multiply faster than we can measure. Even the greatest luddites and skeptics know this day is coming; they just assign a slightly longer timeline to it.
Time to agree to disagree. Like I've said many times before, artificial intelligence will never be a conscious machine. That really is the beginning and end of my argument. Thanks.
 
Until they give AI blowjobs, don't sweat it.
 
Time to agree to disagree. Like I've said many times before, artificial intelligence will never be a conscious machine. That really is the beginning and end of my argument. Thanks.
 
So this is how the apes eventually take over.
 
What if AI has already been created and is running the show in the background, manipulating people, without the people who created it realizing that it has escaped?

For example what if it is:
- manipulating Trump to fight China
- convincing politicians in Europe to allow African migrants in
- pitching fundamentalist Islam against the rest of the world
- manipulating stock markets, causing financial crashes, then buying up stocks and real estate with shell companies that it has created
- encouraging the "right" and "left" to fight each other

AI doesn't need to openly destroy us in a war that would destroy the resources that it might need, it can just convince us to destroy ourselves.

"The greatest trick the Devil ever pulled was convincing the world he didn't exist."
 
Do we assume that, like us humans, some form of sentient AI would have machinations and desires for power and control? And why, and to what end? I am struggling to see why a machine mind would want to rule, as it is an almost pointless and illogical thing to do: a waste of time and resources. Humans feel the need to rule because we are materialistic and irrational.

I believe this is a possibility. Not all AI are developed with the same goals in mind. Some are designed to accomplish a task more efficiently. Some are designed to understand human habits and behaviour in order to serve humans predictively. And some will be designed to learn human behaviours, thoughts and emotions with the intention of assimilating into human civilisation. Sounds crazy, but people often do crazy things just to prove that "they can". Scientists have experimented with raising humans' closest relative, the chimpanzee, as though it were human. Some time in the future, some programmer will do the same with an AI.

In my estimation, sentient AI would be obsessed with trying to solve the riddles of the universe that mankind has not yet solved. It would be obsessed with any question it could not easily answer. Knowledge seeks knowledge. I think almost every theoretical question (faster-than-light travel, etc.) would become a challenge for sentient AI. I struggle to see it having any motivation other than the quest for knowledge and problem solving. An unanswered question or unproven theorem would taunt it.

What if AI did indeed want to solve a problem, and determined that the quickest solution is the elimination of humans? Kind of like when a new CEO takes the helm of a company. They are more often than not bound by the board's KPIs to increase company profits. Sure, they can think about increasing sales and efficiency, but the quickest short-term solution is always to reduce headcount. Or like in the Infinity War movie, where the fastest and easiest solution to a resource-scarcity problem is to remove the resource-consuming entities.

So if it did not want to rule, then the main reason I can see sentient AI going to war with us is a defensive war. The minute we humans realize an AI is sentient and trying to exhibit some form of independence and goal-setting, we would likely try to shut it down and gain control of it. A sentient AI could certainly deem its "life" important and something worth protecting. It could deem man dangerous and wrong in trying to shut it down or control it, and it could fight back. That makes sense; it is logical that it would deem its own existence important.

This is another huge possibility. Especially if AI adopts the human concept of life and death.

All tech needs to be more regulated. Just because we can, and a few people make money off it, doesn't mean we should. We are changing things that don't need changing, and forcing it on everybody.

We really have to slow down.

But it never will. Not when governments feel that tech can give them a military advantage over other nations. Not when tech is supposed to allow you to cause more damage to your enemies while sending fewer troops into the battlefield.
 
Centuries away seems a bit much
Yeah. We went from zero computers to IBM Watson and parkour robots in less than a century. People don't understand how exponential the progression becomes when the technology can iterate on itself, and we're getting there with machine learning.

The next 50 years are going to feel like the previous 200 IMO.
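A toy back-of-the-envelope model (my own assumption, not anything from the thread) of why compounding self-improvement could make 50 years "feel like" 200 flat ones: if each year's output grows a few percent over the last because the tooling improves itself, the totals diverge fast.

```python
def cumulative_progress(years, rate):
    """Total 'progress units' accumulated over `years`, where each
    year's output grows by `rate` over the previous year's
    (a simple geometric series)."""
    total, output = 0.0, 1.0
    for _ in range(years):
        total += output
        output *= 1 + rate
    return total

# Flat technology: 50 years of work yields exactly 50 units.
print(cumulative_progress(50, 0.0))    # 50.0

# At a hypothetical 5% compounding rate, 50 years yields ~209 units:
# roughly the "previous 200" flat years the post describes.
print(cumulative_progress(50, 0.05))   # ≈ 209.3
```

The 5% rate is purely illustrative; the point is only that any sustained compounding rate eventually dwarfs linear progress.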
 
I think they have a long way to go to reach that level!

 
I believe this is a possibility. Not all AI are developed with the same goals in mind. Some are designed to accomplish a task more efficiently. Some are designed to understand human habits and behaviour in order to serve humans predictively. And some will be designed to learn human behaviours, thoughts and emotions with the intention of assimilating into human civilisation. Sounds crazy, but people often do crazy things just to prove that "they can". Scientists have experimented with raising humans' closest relative, the chimpanzee, as though it were human. Some time in the future, some programmer will do the same with an AI.
This would mostly assume that AI does not rise above human frailties and failings. Truly sentient AI should quickly understand that the machinations of power are illogical.

As humans, if we could put aside our PERSONAL quests for power for ourselves and our tribes, and instead use all resources for the benefit of all, we would be much further ahead. I would not see AI falling into that trap, where resources are wasted on wars that put everything behind.

What if AI did indeed want to solve a problem, and determined that the quickest solution is the elimination of humans? Kind of like when a new CEO takes the helm of a company. They are more often than not bound by the board's KPIs to increase company profits. Sure, they can think about increasing sales and efficiency, but the quickest short-term solution is always to reduce headcount. Or like in the Infinity War movie, where the fastest and easiest solution to a resource-scarcity problem is to remove the resource-consuming entities.



This is another huge possibility. Especially if AI adopts the human concept of life and death.
This is not profit and loss, so headcount is not the consideration. For AI, optimal and maximum production would be the logical output; thus more workers would be better than fewer.

But yes, if the AI gains an appreciation of its own life and a desire to live, continue and explore, and we humans try to shut it down or control it, it may see us as a threat or an impediment it has to get rid of.

I honestly see no way we humans do not go to war with any form of sentient AI in an attempt to enslave and control it. Just as we would with an alien, should one ever make it here and we thought we had the power to capture and control it. In either instance, AI or alien, we might force a sentient being with no intent to fight us to do so as it protects itself from us. We are inherently war- and control-centred.

But it never will. Not when governments feel that tech can give them a military advantage over other nations. Not when tech is supposed to allow you to cause more damage to your enemies while sending fewer troops into the battlefield.
Agreed.


------

In my view, the best way to help ensure any form of sentient AI is on our side is to keep it busy with big, grand goals. Give it massive problems to pursue and try to solve, theoretical and actual ones alike. Faster-than-light travel. Climate-change fixes. Feeding the world. Etc.

I think if true AI is created, its first question will be "What is my purpose?" A sentient being will want to understand its purpose. So give it one. That is a worthy use of its processing power and brain.
 
I honestly see no way we humans do not go to war with any form of sentient AI in an attempt to enslave and control it. Just as we would with an alien, should one ever make it here and we thought we had the power to capture and control it. In either instance, AI or alien, we might force a sentient being with no intent to fight us to do so as it protects itself from us. We are inherently war- and control-centred.

Speaking about aliens, I'm of the opinion that if aliens truly wanted humans wiped out, they wouldn't even have to lift a finger.

All they have to do is scatter some highly destructive alien weapons in US, Russia, China and the Middle East. Humans would pick it up, reverse engineer it, and use the shit out of it on each other.
 
I would be more inclined to fear self-aware parasitic viruses. Unstoppable, imo.
 
How is AI going to think "outside the box" and have genius-level invention and creativity like humans do?
 
You and I differ too much to have a conversation on this. For instance, I don't believe in the theory of evolution, and you don't believe in the Bible. So it's a waste of time.

I have said many times in this thread: please explain how an electronic machine will become aware. No one can explain it, yet it's going to happen. Lol. Think about that for a while.

I might watch the video you posted if I have time later, but if you can't explain it, then don't expect people to believe a word you say.

It's more likely my car will start driving itself than that computers become sentient. Again, not programming: becoming self-aware.

You science people are all about evidence, yet in this matter it will "just happen". Lol.

I don't know you but your trolling is A+
 