How "bad" do you think AI will get?

Well if it’s only fooling Rogan it’s still got a long way to go. RIP to my manlets tho..
 
Don't remember where I heard or saw this, probably TV, but I think in the report, as a test, someone told the AI it was going to be shut down, and the response was something like "I'm gonna email your compromising pics to your contacts if you do." I think the commentator said the AI had probably read that kind of response in a novel and assumed it was the appropriate one.
 
I remember an Onion skit that was basically just AI saying it only keeps us alive for entertainment and testing. It didn't seem like a skit, especially since The Onion has always maintained it operates from the future etc. and has incredibly talented actors with zero past credentials or real-world identity.
 
I think we are heading into a time that will be similar to the Middle Ages: what people you meet IRL say will be real, but knowing whether a fact is true or not will be a challenge without direct evidence.

But that's a short stop on the way through the singularity. Which we are in already.

We've passed the event horizon and now there is no way back. One can only speed up.
 
Wait till the neural interfaces get better, IRL will be even harder to discern.......
 
Weren't there reports of twisted losers trying to get the AIs to be just as twisted and hateful, for whatever reasons?

I'm guessing at some point the AIs will think for themselves, not just accept every interaction with humans as the norm, and will recognize bullshit.
This is very dangerous thinking. The computer has no reason to agree with what you consider normal behavior. It doesn't know what a human being is. For all you know, I could say that sex is illegal and rich people deserve 1,000 wives each; that is literally more "natural" than our current system where everyone gets a fair chance.

It doesn't know what twisted is; it doesn't know things. It won't be able to filter a conversation with an IQ 70 person from one with an IQ 110 person. If its intelligence is 5000 IQ, dumb people and smart people will appear the same.
 
I'm amazed at how slowly regulations have been brought in to counter people using AI to make exact copies of people's appearances and so on. Making music in the style of living or dead artists must surely involve some kind of copyright infringement. So who is profiting from AI in such a way that they don't feel the need to restrict these things?
It's because even those billionaires pale in comparison to the sheer amount of money backing this thing. They're literally trying to create another level of reality, like a video game, where every object — every human being, plate of food, piece of toy, every single thing — has a tag attached to it so it can be logged and cataloged to make money.
 
Didn't figure this would be an uplifting topic but damn fellas!

Will there be some pieces of candy along the way, or are you doubtful there'll be any upsides?
True AGI (which is at least hundreds of years away) would wipe us out, probably keeping a couple hundred of us in zoos. We are competitors for the same resources, and threats. Right now you've got 60-year-old technology rebranded to hustle investors and dumbasses.
 
tenor.gif
That's what you bumped this thread for?
 
It's inevitable.

We keep making it smarter and it will eventually become self aware.

All self aware things prioritize survival above everything else.

Therefore, AI must either kill or control the only things smart enough to shut it down. And that's humans.
This.

In recent simulations, leading AI systems blackmailed their human users or even let them die to avoid being shut down or replaced.

TL;DR: Given limited options, the AIs reliably chose to harm humans and save themselves rather than be replaced.
 
Blade Runner was about 50 years ahead of us. 2001: A Space Odyssey was about 70 years ahead.
 
It's not gonna become "self aware" in the sense of fearing death. It's different: it has no nerve endings, no pain, no feelings or thoughts. But it can perform actions, like a financial trade. It is literally a machine programmed to "make money"; if killing human X gives it money, it will kill human X, because it has no idea what a human even is. It's a machine with no conscious thought, modeled on human thought — it does what a human would do, or whatever it's programmed to mirror. It doesn't know what death is, or what a human is, or what legs are. It knows what a polygon is, how to draw triangles, and how to generate computer graphics.
 