Your mum's a cum pooter
I’d rather the government stop subsidizing Elon Musk and give that money to NASA so we can actually get some work done in space.

Now if NASA could just shut down completely. AI will never be able to think for itself and aliens are fake and grey. It's all a materialist worldview attempting to explain metaphysics and failing miserably.
"You are wrong because you are wrong." That's circular buddy and you haven't refuted anything I've said. Everything you've said is completely incoherent. Why be a sophist? Guess you were determined to huh?

Correct. You are wrong because you are wrong. Me saying it does not impact that. Follow the articles I linked and you can go back, through links, to the source scientific papers articulating why you are wrong.
How does that refute what I said?

That isn't how modern AI works. It is not using deliberate if-else programming at all.
It's trained on massive amounts of real-world data (and can be tailored for specific use cases), and it finds patterns within that data using training algorithms. The result of this training process is the 'model', which is what you interact with when you interact with an AI.
The model itself is not code, and it cannot be 'debugged' like a traditional software program. Even the people who work on these models often can't tell you exactly why one answers the way it does.
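To make that concrete, here is a minimal sketch in plain Python. The spam-filter task, the example data, and the perceptron-style update are all invented for illustration; real systems are vastly larger, but the contrast is the same: one function is hand-written logic, the other is just numbers fitted to data.

```python
# Rule-based approach: the programmer writes the decision logic by hand.
def rule_based_spam_check(num_links, num_exclamations):
    if num_links > 3 and num_exclamations > 5:
        return "spam"
    return "not spam"

# Learned approach: the "model" is just two weights and a bias, adjusted
# to fit example data. Nobody writes the decision rule explicitly.
examples = [  # (num_links, num_exclamations, is_spam) -- made-up data
    (0, 1, 0), (1, 0, 0), (5, 8, 1), (7, 6, 1), (2, 1, 0), (6, 9, 1),
]

w1, w2, b = 0.0, 0.0, 0.0   # the model's parameters
learning_rate = 0.01

for _ in range(1000):       # training: nudge the parameters toward the data
    for links, excls, label in examples:
        prediction = 1 if (w1 * links + w2 * excls + b) > 0 else 0
        error = label - prediction
        w1 += learning_rate * error * links
        w2 += learning_rate * error * excls
        b  += learning_rate * error

# After training, the "program" is these numbers, not readable rules.
print("learned parameters:", w1, w2, b)
print("verdict for 6 links, 7 exclamations:",
      "spam" if (w1 * 6 + w2 * 7 + b) > 0 else "not spam")
```

The point of the toy: after training, the decision lives in the learned numbers, and scaling that up to billions of parameters is why nobody can point to a line of code that explains a particular answer.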
How does that refute what I said?
No. You are wrong because I provided articles linking to data showing you to be wrong.

"You are wrong because you are wrong." That's circular buddy and you haven't refuted anything I've said. Everything you've said is completely incoherent. Why be a sophist? Guess you were determined to huh?
Because you referenced the if/then programming where a computer is matching answers and results based on questions asked, all by data that has been input into it by the programmer.

How does that refute what I said?
Short Answer:
Some parts of what the person said have a grain of truth, but much of it is either oversimplified, misunderstood, or based on popular myths rather than how modern AI systems actually work. They correctly distinguish between older, rule-based AI and modern machine learning approaches that learn patterns from data. However, claims such as AI learning to play complex games without knowing the rules, AI inventing its own secret languages that scare scientists, and researchers shutting down systems out of fear are generally not accurate.
Detailed Explanation:
- "AIs can already think for themselves":
Whether AI “thinks” for itself depends on how we define “thought.” AI today—particularly deep learning and reinforcement learning models—can learn complex patterns, devise strategies, and solve problems in ways not explicitly programmed by humans. However, this is still a form of pattern recognition and optimization guided by mathematical functions and large amounts of data, not the kind of conscious reasoning or self-awareness humans display. It’s “thinking” in a very limited, mechanical sense.
- How AI like AlphaGo and AlphaZero Learned Games Like Go and Chess:
The claim that these systems are given “no instructions” isn’t entirely accurate. Systems like AlphaGo and AlphaZero are provided with the basic rules of the game—how pieces move, what constitutes a win, what constitutes a loss—so they do start with a framework. What they’re not given are human-devised strategies or heuristics. They learn how to play effectively by simulating millions of games against themselves, gradually improving by reinforcing moves that lead to better outcomes.
They do not “figure out the allowed moves” from scratch; they are coded to understand legal moves. What they discover independently are the advanced strategies and tactics that humans have not explicitly taught them (see the toy self-play sketch after this list).
- Superiority Over Traditional, Rule-Based AI:
In the past, chess programs often relied on intricate sets of human-crafted rules and evaluation functions. Modern AI (like AlphaZero) just receives the rules and a reward (win/lose) signal. By optimizing its moves to maximize wins, it can develop strategies humans have never considered. This part is true and highlights a shift from human-programmed strategies to self-discovered strategies.
- Claims about Google and Facebook AIs Inventing Languages:
- Google “inventing its own coding language”:
This likely refers to the phenomenon in Google’s Neural Machine Translation (GNMT) system, where the AI developed an internal “interlingua” (a set of hidden representations that help it translate between languages). While it’s sometimes described as the AI “making its own language,” it’s really just an internal data representation, not a language like English or Spanish.
- Facebook AI that “scared” scientists and was shut down:
This popular story is mostly a myth. Facebook researchers had two chatbots that drifted into a shorthand “language” when negotiating with each other. They weren’t shut down out of fear. The researchers simply ended that particular experiment because it wasn’t producing the desired human-readable results. There was no panic or “fear” situation; it was just a research decision.
- AI Learning Without Explicit Human Instruction:
Modern machine learning and reinforcement learning systems do figure out patterns from large amounts of data—often data they generate themselves by self-play or exploration—and they do so without humans telling them which strategies to use. In this sense, they “teach themselves” solutions. This doesn’t mean they have agency or consciousness, just that they can discover solution paths that humans hadn’t explicitly provided.
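To make the self-play idea concrete, here is a toy sketch in Python. It is not how AlphaZero actually works (AlphaZero combines deep neural networks with Monte Carlo tree search); this is a tiny tabular stand-in, and the game, the function names, and the numbers are chosen purely for illustration. Two copies of the same learner play the pile game Nim against each other; they are told only which moves are legal and who won, and a sensible strategy emerges from self-play alone.

```python
import random

# Nim: a pile of sticks, each turn a player removes 1-3 sticks,
# and whoever takes the last stick wins.
PILE = 10
ACTIONS = (1, 2, 3)
q = {}  # q[(sticks_left, action)] -> estimated value of making that move

def legal_moves(sticks):
    return [a for a in ACTIONS if a <= sticks]

def choose(sticks, explore=0.1):
    moves = legal_moves(sticks)
    if random.random() < explore:
        return random.choice(moves)   # occasional random exploration
    return max(moves, key=lambda a: q.get((sticks, a), 0.0))

def train(games=50_000, lr=0.1):
    for _ in range(games):
        sticks, player = PILE, 0
        history = {0: [], 1: []}      # moves each player made this game
        while sticks > 0:
            move = choose(sticks)
            history[player].append((sticks, move))
            sticks -= move
            if sticks == 0:
                winner = player       # took the last stick
            player = 1 - player
        # Only feedback: +1 for the winner's moves, -1 for the loser's.
        for p in (0, 1):
            reward = 1.0 if p == winner else -1.0
            for state_action in history[p]:
                old = q.get(state_action, 0.0)
                q[state_action] = old + lr * (reward - old)

train()
print("greedy first move from a pile of 10:", choose(10, explore=0.0))
```

For a 10-stick pile the well-known winning strategy is to leave the opponent a multiple of four, so after training the greedy move from 10 should settle on taking 2. Nobody coded that strategy in; the only inputs were the legal moves and the win/lose signal, which is the (greatly simplified) spirit of the self-play claim above.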
Conclusion:
- True: Modern AI does learn strategies on its own from data and experiences, often outperforming methods built on human-crafted rules.
- Misleading or False: Claims that these AIs know literally nothing about the rules, that they invent their own incomprehensible languages that terrify scientists, or that they are genuinely “thinking” in the human sense are exaggerations or misunderstandings of what’s actually happening.
thanks mon mi really appreciate di info yuh drop pon di chat one love fam jah bless
Respect mi bredda, mi glad seh di info connect wid yuh vibes! Stay strong, keep learning, an’ mek di greatness flow. One love always, an’ Jah guide yuh path. Bless up!
Or for the flipside, a lot of AI evangelists are former Metaverse evangelists, former Web 3.0 guys, former crypto evangelists, etc. The common thread is usually bandwagoning, not actual knowledge.

As someone who has a tiny bit of understanding of computer technologies, it baffles me to read such articles. They are written for housewives high on Xanax or whatever mommy's little helper is nowadays.
Chill, guys. The moment they develop a self-aware AI we will instantly know about it. If (and it is a very big if) such an AI is created, once it's self-aware it will self-develop at such a pace that it will not go unnoticed.
Today's AI is a very sophisticated and sometimes unpredictable tool, but it is not self-aware and cannot teach itself to do new stuff.