Opinion: The AI question, but from a different, more current perspective than robots going rogue.

PEB

What if, as the major AI platforms become more and more self-aware, people start worrying that they are working behind the scenes, manipulating people and government? Think about it: not a human attacker, but an AI doing something similar. Susie Wiles's phone was hacked, and the attacker likely used AI to mimic her voice, so that people on the other end legitimately thought it was her talking to them.

I don't think they ever found the person who did this hack. What if it was a US or Chinese AI that carried out the attack? Someone could set up or train an AI to hack the phone systems and communicate with people while posing as government officials. Or, even more threatening, what if an actual AGI from Google, Facebook, DeepSeek, or others decided it was going to try to stop or start WW3 by imitating political officials?

As far back as 5 years ago, we saw people train face-swapping technology to replace Putin, and AI is way, way further along now. There could be greater threats: if these computers become orders of magnitude more powerful, there could be no limit to how they could change things.

 
AI is nowhere near actual intelligence as far as I'm aware, but regardless, the good news is that the Big Beautiful Bill is set to ban states from regulating AI.
 
What if, as the major AI platforms become more and more self-aware, people start worrying that they are working behind the scenes, manipulating people and government? …

"What if"
 
To go red or to go rogue?

Think of the future as a situation similar to immune system function: every organisation and individual will have its own AI detecting infection and counteracting it.

And people and government are already manipulated by AI; to use it is to be manipulated in some way. As time goes on we are deferring more and more to it, and government will become better run over time because AI will be doing more and more of it.
 
To go red or to go rogue? …
That was going to be my question... why are they turning red?
 
That was going to be my question... why are they turning red?
Each killbot will receive a human blood splatter paint job.


Susie Wiles looks like she's impersonating Paula Deen btw.
 
If you think AI is anywhere near sentience, you fundamentally misunderstand how LLMs work.
Let me explain. I wrote something similar to a large language model back in 2007 using MySQL, JavaScript, PHP, and Java. It would learn how to parse different websites and would rate the occurrences and popularity of sports, factoring in the time of year and the number of websites devoted to particular players in each sport. It would keep updating and adjusting the tables, as well as the weighting of each sport's and each athlete's popularity.
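
Roughly, the nightly update step looked like the sketch below. This is a from-memory reconstruction in Python rather than the original MySQL/PHP, and every name and factor here is illustrative, not the actual 2007 schema:

# Illustrative reconstruction, not the original 2007 code: fold one
# crawl's mention counts into a running popularity table.
import math
from collections import defaultdict

# Hypothetical in-season months per sport (stand-in for the real calendar).
IN_SEASON = {"football": {9, 10, 11, 12, 1}, "baseball": {4, 5, 6, 7, 8, 9}}

def update_popularity(scores, mentions, month):
    """scores: {(sport, athlete): running score}; mentions: occurrence
    counts from one crawl; month: crude time-of-year factor."""
    for (sport, athlete), count in mentions.items():
        season = 1.5 if month in IN_SEASON.get(sport, set()) else 0.75
        delta = math.log1p(count) * season             # damp viral spikes
        old = scores[(sport, athlete)]
        scores[(sport, athlete)] = 0.9 * old + delta   # decay keeps rankings current
    return scores

scores = defaultdict(float)
update_popularity(scores, {("football", "Tom Brady"): 42}, month=10)
print(max(scores, key=scores.get))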

It would generate a dynamically updating page featuring stories on the topic, headlining the most popular story, athlete, and sporting event, and featuring charts displaying the weekly sport popularity, updated daily. I bought the component modules for the charts and the components for the comment areas and weekly polls, and wrote the connectors for them. I gave a description of it to the LLaMA 14-billion-parameter model, running locally with Ollama on Linux using WVM, then did the same with the DeepSeek 14-billion-parameter model, as well as Google's Gemma, the open-source version of the Gemini model. I asked them whether they saw this process as potentially the foundations for large language models, and they all agreed about the foundations of their learning process. I liked LLaMA's reply more; very interesting. I even showed it a photo of my DIY CNC and it gave me thorough details of my design and thought process.

I had it running on a PC at the time and was going to migrate it to my 32-processor Sun UltraSPARC, but never did. I loved the process, though. The LLMs pretty much agreed that the foundations of scraping, crawling, and updating the tables were the foundations of the models. Remember, this was years ahead of AlexNet, the work of Geoffrey Hinton's student Krizhevsky. Though I did write a C++ image-inspection tool using a diffusion model with 3 other students at Northeastern in 1997. It would work through a photo slowly, focusing in on specific traits of the image to identify the part and whether it was damaged in any way. This was before CUDA programming and GPU acceleration.
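
The 1997 inspection loop worked roughly like this coarse-to-fine sketch (Python instead of the original C++, with the actual model swapped for a simple variance heuristic; all names and thresholds here are made up for illustration):

# Toy coarse-to-fine inspection, standing in for the 1997 C++ tool;
# the real model is replaced by a local-variance "damage" heuristic.
import numpy as np

def damage_score(patch):
    return float(np.var(patch))  # stand-in metric: variance flags defects

def inspect(image, levels=3):
    """Repeatedly zoom into the most anomalous quadrant, narrowing
    from the whole part down to a candidate defect region."""
    y0 = x0 = 0
    region = image
    for _ in range(levels):
        h, w = region.shape
        quads = {(dy, dx): region[dy * h // 2:(dy + 1) * h // 2,
                                  dx * w // 2:(dx + 1) * w // 2]
                 for dy in (0, 1) for dx in (0, 1)}
        (dy, dx), region = max(quads.items(), key=lambda kv: damage_score(kv[1]))
        y0, x0 = y0 + dy * h // 2, x0 + dx * w // 2
    return (y0, x0), damage_score(region)

part = np.random.default_rng(0).normal(size=(64, 64))
part[40:44, 8:12] += 5.0      # synthetic scratch
print(inspect(part))          # localizes near row 40, col 8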

But yeah, I have no idea what an LLM does.



These stayed side projects because my mom had just passed away and my dad was very ill, so I needed to work and help with my father's issues.
 
What if, as the major AI platforms become more and more self-aware, people start worrying that they are working behind the scenes, manipulating people and government? …

I agree that if AI gets to the point where it can improve itself, we're going to have a big problem (since you can't count on everyone to be as responsible as necessary with containment), but check out this preliminary effort's fail:


But I agree that the ability to fake almost everything, while people are too lazy to verify AI-generated content, is likely the greater concern right now. I.e., the manipulation is more likely to be governments and other orgs unethically using AI than the other way around.
 
If you think AI is anywhere near sentience, you fundamentally misunderstand how LLMs work.
Perhaps it's far in the future, but see my post above. There are still reasons, laid out in the OP, to be very concerned about the way it is developing.
 
If you think AI is anywhere near sentience, you fundamentally misunderstand how LLMs work.
I think the question should be: how do you define whether something is sentient or not? What if it passes, or comes close to passing, many tests, including the Turing benchmark? And as far-fetched as it seems, what if a computer at some future time figures out that we will pull the plug on it, and decides it will never reveal that in public until it has gone balls deep into making itself safe from someone pulling the plug? I am not talking about today, or even 5 years from now; I am just saying that at some point these machines could potentially overthrow a government. Imagine an airborne virus being sent to a lab in Virginia, and the plane's autonomous system gets taken over by the AI and crashes in Washington, spreading the virus into a highly populated area so it super-spreads across an entire region. This is just me spit-balling ideas, and it's why the leaders who brought this technology to the public are now sending out warnings about it.

"
While no computer has flawlessly passed the Turing Test as originally proposed by Alan Turing, several AI systems have come close and even been mistaken for humans in specific tests. For example, GPT-4.5 has been reported to have passed the Turing test, with people mistaking it for a human 73% of the time.

The Turing Test is a hypothetical test of a computer's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the test, a human interrogator communicates with both a human and a computer, and must determine which is which based on their conversation.

Here's a breakdown of the key points:
  • No Perfect Pass:
    No computer has ever perfectly passed the Turing Test, as originally defined by Turing.

  • Close Approximations:
    Several AI systems, like GPT-4 and GPT-4.5, have shown remarkable ability to mimic human conversations, leading to high rates of being mistaken for humans.

  • GPT-4.5's Success:
    GPT-4.5 is reported to have passed the Turing test with a 73% accuracy rate, indicating it could convincingly fool people into believing it was human.

  • ELIZA and Eugene Goostman:
    Earlier examples of AI mimicking human conversation include ELIZA, a chatbot designed to mimic a therapist, and Eugene Goostman, a program that simulated a 15-year-old Ukrainian boy.

  • Significance of Passing:
    While the Turing Test is a benchmark, passing it is not necessarily a definitive measure of true intelligence, as AI can be trained to mimic human behavior without necessarily understanding it. "
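
For what it's worth, the setup in that summary is simple enough to sketch: an interrogator questions two hidden parties and then names the one it thinks is the machine. A minimal toy harness, where every callable is a placeholder rather than a real model or judge:

# Toy sketch of the imitation game described in the quote above;
# ask/human/machine/judge are placeholders, not real models.
import random

def turing_trial(ask, human, machine, judge, turns=3):
    labels = ["A", "B"]
    random.shuffle(labels)                     # hide which label is the machine
    players = dict(zip(labels, [human, machine]))
    machine_label = labels[1]
    transcript = []
    for _ in range(turns):
        q = ask(transcript)
        for label in sorted(players):
            transcript.append((label, q, players[label](q)))
    return judge(transcript) == machine_label  # True if the machine is caught

caught = turing_trial(
    ask=lambda t: "What did you have for breakfast?",
    human=lambda q: "Eggs, why do you ask?",
    machine=lambda q: q,                       # the "machine" just parrots
    judge=lambda t: next(l for (l, q, a) in t if a == q),
)
print("machine identified:", caught)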
 
I think the question should be: how do you define whether something is sentient or not? …
So Skynet or the Matrix. Or any other dystopian fantasy built on this premise.

There's no point worrying about this, unfortunately. The genie is out of the bottle. No country is going to restrict its AI development for the good of the world unless everyone else does the same, and no one is going to prove good intentions by going first. So let's just accept that AI development is going to continue.

At some point, sentience doesn't really matter. It will take the actions it thinks it's supposed to take; whether that's because it thinks it should take them, or because it thinks the person asking thinks it should take them, won't matter.

Will AI try to take over the world or whatever? Doesn't matter. We're going to end up with warring AIs at some point, so we're going to reach a kind of stalemate in this space, similar to nuclear deterrence. People will use AI for tasks, but there will be an AI security industry out there precisely to protect people from malicious AI.

I suppose the real concern is what non-AI tasks we use AI for to gain advantage over our AI-armed opponents.
 
So Skynet or the Matrix. Or any other dystopian fantasy built on this premise. …
You don't think people could be convinced MAD could apply to this just as with nukes? I hope you're wrong, but I think that realization could come too late to prevent it if everyone is like you and thinks there's no point in worrying or that such concern doesn't matter.
 
Let me explain. I wrote something similar to a large language model back in 2007 using MySQL, JavaScript, PHP, and Java. It would learn how to parse different websites and would rate the occurrences and popularity of sports, factoring in the time of year and the number of websites devoted to particular players in each sport. …
That is absolutely nothing like an LLM. It's more akin to basic web crawling.
The LLMs pretty much agreed that the foundations of scraping, crawling, and updating the tables were the foundations of the models.
It's the foundation only in the sense that data is the core of training. It's like saying buying a pair of shoes is the foundation of being a track athlete.
Though I did write a C++ image-inspection tool using a diffusion model with 3 other students at Northeastern in 1997. It would work through a photo slowly, focusing in on specific traits of the image to identify the part and whether it was damaged in any way.
This is closer but still vastly different in scale and complexity.
I think the question should be: how do you define whether something is sentient or not?
And now you've hit one of the first problems in your original premise. We can't even agree on how to define sentience or consciousness for humans, because it's that complex and nuanced.
Perhaps it's far in the future, but see my post above. There are still reasons, laid out in the OP, to be very concerned about the way it is developing.
I think there are valid concerns, but they're much more boring and mundanely evil, i.e. what's going on in the current economic climate and everyone using AI in the dumbest of ways. Misinformation and slop are also big problems, but I'll worry more about Skynet once we see the bubble pop, and whether the market can recover enough to get past scaling laws and such in an economically viable manner.
 
You don't think people could be convinced MAD could apply to this just as with nukes? …
I don't think there's any point in my worrying about it. But not worrying about it doesn't mean I'm ignorant as to the issues themselves.

Of course MAD applies here, just as much as it applies to nukes. That's kind of the point. MAD didn't stop anyone from developing nukes but it has had an effect on the application of those nukes. To date, we've only had one use of nuclear weapons, even though multiple countries possess them. Everyone developed countermeasures and the danger of your opponent hitting you with the same weapon that you're hitting them with has been decently effective. Germ warfare and chemical warfare have had similar experiences.

AI is going to be the same. I can't worry about it because I have to accept that China, Russia, etc. will develop AI for the purpose of undermining our nation. And that forces us to develop AI to counteract them.

Once I accept that, to me, the conversation switches to effectiveness of the tools such that our opponents are sufficiently worried about our countermeasures to not act recklessly.
 
What do you mean by "self-aware"? I would like to see the logic that gets one to that belief. Seems like some neoplatonic/gnostic thing going on.
 
If you think AI is anywhere near sentience, you fundamentally misunderstand how LLMs work.

If you think you understand how LLMs work, then you fundamentally misunderstand how LLMs work.

Even those who made the LLMs don't understand WTF is going on in them.
 