NASA shut down their quantum computer?

Now if NASA could just shut down completely. AI will never be able to think for itself and aliens are fake and grey. It's all a materialist worldview attempting to explain metaphysics and failing miserably.
I’d rather the government stop subsidizing Elon Musk and give that money to NASA so we can actually get some work done in space.
 
Correct. You are wrong because you are wrong. Me saying it does not impact that. Follow the articles I linked and you can go back, through the links, to the source scientific papers articulating why you are wrong.
"You are wrong because you are wrong." That's circular buddy and you haven't refuted anything I've said. Everything you've said is completely incoherent. Why be a sophist? Guess you were determined to huh?
 
That isn't how modern AI works. It is not using deliberate if-else programming at all.

It's trained on massive amounts of real-world data (which can be tailored for specific use cases), and it finds patterns within that data using various algorithms. The result of this training process is the 'model', which is what you interact with when you interact with an AI.

The model itself is not code, and it cannot be 'debugged' like a traditional software program. Even the people who work on these models often can't tell you why one answers the way it does.
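Roughly, the difference looks like this. The sketch below is a toy Python example with made-up numbers (not any real AI system): the first function has its answers written in by the programmer, while the second 'learns' a single number from example data and then generalises to inputs it never saw.

```python
# Rule-based ("if-else") approach: the programmer writes the answers in.
def rule_based_double(x):
    if x == 1:
        return 2
    elif x == 2:
        return 4
    else:
        return None  # anything the programmer didn't anticipate is simply not handled

# Learned approach: fit a single weight w so that w * x matches the examples.
# The "model" is just the number w that falls out of training.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (input, desired output) pairs
w = 0.0
for _ in range(1000):                     # crude gradient descent
    for x, y in data:
        error = w * x - y
        w -= 0.01 * error * x             # nudge w to reduce the error

print(rule_based_double(5))  # None: the rules never covered 5
print(round(w * 5, 2))       # ~10.0: the learned weight generalises to new inputs
```

The learned weight w is the entire 'model' here; a real system learns billions of such numbers, which is why nobody can point at any one of them and say what it 'means'.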
How does that refute what I said?
 
Just a completely obvious nonsense story. I think there are all kinds of games being played off Google Willow that have sent quantum computing stocks up like 500% in the past few weeks. Now those running the games will cash out and the stocks will fall back to where they were.
 
How does that refute what I said?

I thought you were under the impression that when someone interacts with AI, the responses were a result of conventional programming (like if-else). I was refuting that.
 
"You are wrong because you are wrong." That's circular buddy and you haven't refuted anything I've said. Everything you've said is completely incoherent. Why be a sophist? Guess you were determined to huh?
No. You are wrong because I provided articles linking to data showing you to be wrong.

That you refuse to accept it does not change that you are wrong.
 
How does that refute what I said?
Because you referenced if/then programming, where a computer matches answers and results to the questions asked, all based on data that has been input into it by the programmer.

So 1+1 = 2 because a programmer explicitly gave it instructions to follow to calculate it.

The type of AI we are talking about now does not work that way. AI figures out the calculation itself based on experiences and data it gathers itself, and then determines on its own how it will apply it.

The human programmer is not telling it or teaching it how to learn, other than giving it an initial instruction to 'observe' and 'figure out the goal and the best way to achieve it'. The AI then makes all the decisions on how to get there, often in ways humans have never conceived and cannot even follow.
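To give a rough feel for "give it a goal, not a method", here is a toy Python sketch. The payout numbers are hypothetical and this is not any real AI system: the program is never told which option is correct, only how much reward each attempt earned, yet it settles on the best option by trial and error.

```python
import random

# Toy sketch: the program is told only how much reward each attempt earned,
# never which choice is "correct", yet it converges on the best choice.
true_payouts = [0.2, 0.5, 0.9]      # hidden from the learner (hypothetical numbers)
estimates = [0.0, 0.0, 0.0]         # the learner's running estimates
counts = [0, 0, 0]

for step in range(5000):
    if random.random() < 0.1:       # occasionally explore a random option
        choice = random.randrange(3)
    else:                           # otherwise exploit the current best estimate
        choice = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payouts[choice] else 0
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # the highest estimate should end up on the last option
```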
 
A layperson example of AI learning versus prior programming.

Individual 1 has never played tennis and knows nothing about the game. I can provide him, in writing, with every aspect of the gameplay, strategies, and objectives, and give him the knowledge to be an informed viewer. This would be me programming him, and he requires no original thought of his own. He just needs to follow my instructions.

Conversely, I could sit that person in front of a TV and tell him to watch X games until he could explain everything about tennis and devise his own strategies and objectives. This person would be required to use his own unique thought and reach his own conclusions. He is not required to, nor does he necessarily, follow the way others would do it.

The latter is a quick and dirty explanation of the type of original 'thought' AI applies to solve problems. The prime example is how AI has created its own unique programming languages that no human knows or can even follow.
 
Yo fam AI (o1) seh yuh on di right track but high key glazin' 🍩

Short Answer:
Some parts of what the person said have a grain of truth, but much of it is either oversimplified, misunderstood, or based on popular myths rather than how modern AI systems actually work. They correctly distinguish between older, rule-based AI and modern machine learning approaches that learn patterns from data. However, claims such as AI learning to play complex games without knowing the rules, AI inventing its own secret languages that scare scientists, and researchers shutting down systems out of fear are generally not accurate.

Detailed Explanation:

  1. "AIs can already think for themselves":
    Whether AI “thinks” for itself depends on how we define “thought.” AI today—particularly deep learning and reinforcement learning models—can learn complex patterns, devise strategies, and solve problems in ways not explicitly programmed by humans. However, this is still a form of pattern recognition and optimization guided by mathematical functions and large amounts of data, not the kind of conscious reasoning or self-awareness humans display. It’s “thinking” in a very limited, mechanical sense.
  2. How AI like AlphaGo and AlphaZero Learned Games Like Go and Chess:
    The claim that these systems are given “no instructions” isn’t entirely accurate. Systems like AlphaGo and AlphaZero are provided with the basic rules of the game—how pieces move, what constitutes a win, what constitutes a loss—so they do start with a framework. What they’re not given are human-devised strategies or heuristics. They learn how to play effectively by simulating millions of games against themselves, gradually improving by reinforcing moves that lead to better outcomes.
    They do not “figure out the allowed moves” from scratch; they are coded to understand legal moves. What they discover independently are the advanced strategies and tactics that humans have not explicitly taught them.
  3. Superiority Over Traditional, Rule-Based AI:
    In the past, chess programs often relied on intricate sets of human-crafted rules and evaluation functions. Modern AI (like AlphaZero) just receives the rules and a reward (win/lose) signal. By optimizing its moves to maximize wins, it can develop strategies humans have never considered. This part is true and highlights a shift from human-programmed strategies to self-discovered strategies.
  4. Claims about Google and Facebook AIs Inventing Languages:
    • Google “inventing its own coding language”:
      This likely refers to the phenomenon in Google’s Neural Machine Translation (GNMT) system, where the AI developed an internal “interlingua” (a set of hidden representations that help it translate between languages). While it’s sometimes described as the AI “making its own language,” it’s really just an internal data representation, not a language like English or Spanish.
    • Facebook AI that “scared” scientists and was shut down:
      This popular story is mostly a myth. Facebook researchers had two chatbots that drifted into a shorthand “language” when negotiating with each other. They weren’t shut down out of fear. The researchers simply ended that particular experiment because it wasn’t producing the desired human-readable results. There was no panic or “fear” situation; it was just a research decision.
  5. AI Learning Without Explicit Human Instruction:
    Modern machine learning and reinforcement learning systems do figure out patterns from large amounts of data—often data they generate themselves by self-play or exploration—and they do so without humans telling them which strategies to use. In this sense, they "teach themselves" solutions (a toy self-play sketch follows this list). This doesn't mean they have agency or consciousness, just that they can discover solution paths that humans hadn't explicitly provided.
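To make the self-play idea in points 2 and 5 concrete, here is a toy Python sketch. It uses a made-up stone-taking game and is not AlphaZero: the program is given only the legal moves and a win/lose signal at the end of each game, and a sensible strategy emerges from playing against itself.

```python
import random

# Toy self-play sketch (a made-up game, NOT AlphaZero): take 1-3 stones from a
# pile; whoever takes the last stone wins. The program knows only the legal
# moves and, at the end, who won. Strategy is never supplied by a human.
PILE = 12
Q = {}                                   # (pile_size, move) -> value for the mover

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def pick(pile, explore):
    moves = legal_moves(pile)
    if explore and random.random() < 0.2:
        return random.choice(moves)      # sometimes try something random
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

for game in range(20000):                # the program plays both sides against itself
    pile, history, player = PILE, [], 0
    while pile > 0:
        move = pick(pile, explore=True)
        history.append((pile, move, player))
        pile -= move
        player = 1 - player
    winner = 1 - player                  # the player who just took the last stone
    for state, move, p in history:       # reward the winner's moves, punish the loser's
        reward = 1.0 if p == winner else -1.0
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + 0.1 * (reward - old)

# Leaving the opponent a multiple of 4 is the winning strategy self-play
# should discover; from a pile of 6 the learned policy should take 2.
print(pick(6, explore=False))
```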
Conclusion:

  • True: Modern AI does learn strategies on its own from data and experiences, often outperforming methods built on human-crafted rules.
  • Misleading or False: Claims that these AIs know literally nothing about the rules, that they invent their own incomprehensible languages that terrify scientists, or that they are genuinely “thinking” in the human sense are exaggerations or misunderstandings of what’s actually happening.


thanks mon mi really appreciate di info yuh drop pon di chat one love fam jah bless


Respect mi bredda, mi glad seh di info connect wid yuh vibes! Stay strong, keep learning, an’ mek di greatness flow. One love always, an’ Jah guide yuh path. Bless up! 🙌🏾🔥
♟️✔️
 
The NPCs from GTA1 could think for themselves. They did it in a limited way, but they displayed an unrecognised sentience.
 
As someone who has a tiny bit of understanding of computer technologies, it baffles me to read such articles. They are written for housewives high on Xanax or whatever the mommy's little helper is nowadays.
Chill, guys. The moment they develop a self-aware AI we will instantly know about it. If (and it is a very big if) such an AI is created, once it's self-aware it will self-develop at such a pace that it will not go unnoticed.
Today's AI is a very sophisticated and sometimes unpredictable tool, but it is not self-aware and cannot teach itself to do new stuff.
Or, on the flip side, a lot of AI evangelists are former Metaverse evangelists, former Web 3.0 guys, former crypto evangelists, etc. The common thread is usually bandwagoning, not actual knowledge.
 