AI sucks balls

The best analogy: you see a mountain, and you paint the mountain. The painting is not the mountain. That's AI right now.
People who use it screw up constantly. The "tariff list" Trump rolled out at the start of his term was reportedly made with an LLM that treated internet country-code prefixes as actual countries.
As a result, the United States told the world it was putting tariffs on penguins and on an American military island.

You can ask things like "what happens when humans have no light" and get pseudoscience replies. It's basically a giant magic 8-ball with billions of replies and an RNG feigning intelligence, which is why they need to keep adding data. It's simply humanity's presence on the internet reflected back, and of course it's going to say false things, refuse to admit it's wrong, and say really racist shit.
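To make the "RNG" point concrete, here's a toy sketch of the final step every LLM takes: score the possible next words, turn the scores into probabilities, and roll dice. The "logits" below are made-up numbers for illustration, not from any real model.

```python
import math
import random

# Toy next-token sampler. A real LLM computes scores over a huge vocabulary,
# but the last step is the same: softmax the scores, then sample with an RNG.
logits = {"light": 2.0, "dark": 1.2, "cheese": -1.0}  # invented example scores

def sample_next(logits, temperature=1.0):
    # Softmax: convert raw scores into a probability distribution.
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # The "magic 8-ball" step: a weighted dice roll over the vocabulary.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next(logits))  # usually "light", occasionally something else
```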

The biggest issue is that it's being used by government, healthcare, and the military, and it screws up terribly and doubles down. It should not be anywhere near those three.
 
Two of my nephews, one a computer programmer and the other an engineer at Boeing, both believe AI sucks. They don't find it very good. My father, on the other hand, seems to be in love with AI. I don't believe Dad has a secret AI girlfriend Mom needs to worry about, though. Personally, I'm more in line with my father's beliefs: AI is in its early years, and it is going to get better and more helpful.

I suspect AI is not going to be the big job killer some are predicting. Some easy-to-automate jobs will find themselves on the chopping block, but overall I doubt it will be able to completely replace humans.

If we do get to the point, though, where AI is causing widespread economic disruption, then government will need to step in and require that humans work alongside AI machines. Humans are, in general, not that friendly. Having large swaths of the population reliant on government handouts hasn't been shown to be a positive for mankind.
 
For AI learning/training, is there such a thing as content verified or fact-checked by a smart, impartial human being, where the AI will only accept training material from those articles or professional journals? I'm talking about a legitimate filter applied before the AI learns from something and becomes another spreader of fake nonsense.
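A filter like that would be simple to express in code; the hard part is the humans doing the vetting. Here's a hypothetical sketch, where the allowlist, the record format, and the human_verified flag are all invented for illustration; nothing like a universal version of this exists today.

```python
# Hypothetical pre-training filter of the kind described above.
TRUSTED_SOURCES = {"nature.com", "nejm.org", "acm.org"}  # assumed human-curated allowlist

def keep_for_training(doc: dict) -> bool:
    """Keep a document only if it comes from an allowlisted source AND a
    human fact-checker has explicitly signed off on it."""
    return doc.get("source_domain") in TRUSTED_SOURCES and doc.get("human_verified") is True

corpus = [
    {"source_domain": "nature.com", "human_verified": True, "text": "..."},
    {"source_domain": "randomblog.example", "human_verified": False, "text": "..."},
]
training_set = [doc for doc in corpus if keep_for_training(doc)]
print(len(training_set))  # 1 -- only the vetted document survives
```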
 
For AI learning/training, is there such a thing as content verified or fact-checked by a smart, impartial human being, where the AI will only accept training material from those articles or professional journals? I'm talking about a legitimate filter applied before the AI learns from something and becomes another spreader of fake nonsense.
More than that, there need to be legal penalties for faking content in some arenas.
 
For AI learning/training, is there such a thing as content verified or fact-checked by a smart, impartial human being, where the AI will only accept training material from those articles or professional journals? I'm talking about a legitimate filter applied before the AI learns from something and becomes another spreader of fake nonsense.
Smart, impartial humans are very scarce nowadays.
Also, no, I don't think anything like that has been implemented, since the major players gave their models internet access.
 
It's a tool like anything else and it's getting better over time. But yes, it does need human supervision, and it should have it for the rest of time. We don't want an AI that doesn't have human supervision; if it's running on its own, that means something terrible has happened to humans.
 
It's a tool like anything else and it's getting better over time. But yes, it does need human supervision, and it should have it for the rest of time. We don't want an AI that doesn't have human supervision; if it's running on its own, that means something terrible has happened to humans.
Google became less precise as time went on....

Why is it guaranteed that it will just get better and better?
 
Google became less precise as time went on....

Why is it guaranteed that it will just get better and better?

I'm not making any guarantees. But I've been following the development of AI and in general it's been getting better in terms of outputs. I'm not sure what you are specifically referencing, and I know there have been ups and downs, but overall the quality of output has improved. Especially when it comes to visual output.
 
I thought this was a good article on AI:

A Robopocalypse for Jobs? Not Today, and Probably Not Tomorrow, Either

Imagine a world of human-level artificial intelligence. The job market would probably be a lot different. There’s the “robots take all the jobs” scenario, I suppose. A few folks own all the machines with the rest of us on the dole. Not sure that is the most likely outcome, however, even if artificial general intelligence proves possible.

Here’s another scenario: According to economist Pascual Restrepo of Yale University in his paper “We Won’t be Missed: Work and Growth in the AGI World,” if AGI could perform all economically valuable tasks, work would shift from economic necessity to personal meaning: art, teaching, and personal care. Wages might become pegged to the cost of AI compute. Labor’s share of income could shrink even as absolute living standards rise dramatically.

Sounds better! But where are we right now? Early, early days, I think. Economists usually think about automation not as whole jobs disappearing, but as specific tasks within jobs being replaced or assisted by technology. The economics team at Goldman Sachs, a bank, assumed in a 2023 analysis that generative AI can handle moderately difficult tasks across hundreds of occupations. It estimated that about two-thirds of US jobs are partly exposed, with AI able to ultimately automate roughly one-quarter to one-half of the tasks within those roles. Globally, that translates to about 18 percent of total work being automatable. Even so, Goldman expects AI to replace only about seven percent of US jobs while enhancing 63 percent—a reshaping of work, not a mass wipeout.
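A rough back-of-envelope check on those Goldman figures. The inputs are the article's numbers; the midpoint multiplication is my own gloss, not theirs.

```python
# Back-of-envelope check on the Goldman numbers quoted above.
exposed_share = 2 / 3          # share of US jobs partly exposed to generative AI
task_lo, task_hi = 0.25, 0.50  # automatable share of tasks within exposed roles

print(f"{exposed_share * task_lo:.0%} to {exposed_share * task_hi:.0%} of all US work")
# -> "17% to 33% of all US work"; the ~18% global figure sits near the
#    low end, which fits, since the US is more exposed than the world average.
```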

A different approach is taken in the new paper “Remote Labor Index: Measuring AI Automation of Remote Work.” The authors, from the Center for AI Safety and Scale AI, decided to treat AI systems as if they were freelance workers on real jobs. They took 240 genuine Upwork-style projects—everything from data dashboards and 3D product designs to marketing videos—and provided the same briefs, files, and deliverables to both humans and AI models such as GPT-5, Claude Sonnet 4.5, and Gemini 2.5 Pro. Human evaluators then judged whether the AI’s submissions would be acceptable to a paying client.

The result: Almost never, with a tiny 2.5 percent success rate “revealing a stark gap between progress on computer use evaluations and the ability to perform real and economically valuable work,” the paper concludes. Even the top-performing model, Chinese AI agent Manus, “earned” only about $1,700 out of $144,000 worth of human labor. Here’s why most of the AI outputs were rejected:

Rejections predominantly cluster around the following primary categories of failure:

  1. Technical and File Integrity Issues: Many failures were due to basic technical problems, such as producing corrupt or empty files, or delivering work in incorrect or unusable formats.
  2. Incomplete or Malformed Deliverables: Agents frequently submitted incomplete work, characterized by missing components, truncated videos, or absent source assets.
  3. Quality Issues: Even when agents produce a complete deliverable, the quality of the work is frequently poor and does not meet professional standards.
  4. Inconsistencies: Especially when using AI generation tools, the AI work often shows inconsistencies between deliverable files.
The failure rate cuts sharply against the dystopian headlines predicting a white-collar wipeout that’s just around the corner. Far from replacing designers, coders, or analysts, today’s AIs are still fumbling at doing the basics correctly. Models can edit text or generate images in seconds, but they crumble when asked to manage complex, multi-step work. That gives policymakers and firms time to adapt through training, not panic through bans or basic-income schemes. And as AI gets better, we’ll be ready.

 
I mean, I've done this same thing with YouTube and Google before.

You also have to know that not everyone is mechanically inclined or has the tools to do what you did. Seventy-year-old Daisy can diagnose the issue with her Lexus all day long with an AI tool, but at the end of the day she's still taking it to the dealer to have them fix it.

The difference I find with AI is that if I'm struggling with something, I can take a screenshot or a picture and it can tell me where I'm fucking up. Can't do that with a YouTube video.
 
I just saw an AI ad of Chuck Norris. It disgusted me and creeped me out. They will definitely exploit his likeness when he dies.
 
It's still going to require people, or it'll only be good for specific tasks. And hallucinations are still everywhere.
 
It's a tool like anything else and it's getting better over time.

I don't use it that much, but it seems that at least the DALL-E image generation I sometimes use for work has gotten worse.

Which makes sense since it's starting to use AI output as training input.
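There's a known effect along those lines, sometimes called model collapse: train a model on the previous model's output and diversity shrinks generation by generation. Here's a toy simulation where a fitted mean and standard deviation stand in for a "model"; the parameters are invented, and this says nothing about DALL-E's actual pipeline.

```python
import random
import statistics

# Toy model-collapse loop: fit a "model" (a mean and standard deviation) to
# samples drawn from the previous generation's model, then repeat. With small
# samples, the fitted spread trends toward zero over many generations.
mu, sigma = 0.0, 1.0
for generation in range(1, 101):
    samples = [random.gauss(mu, sigma) for _ in range(10)]  # tiny training set
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    if generation % 20 == 0:
        print(f"gen {generation}: sigma = {sigma:.4f}")
# sigma drifts toward zero: each generation's model captures a slightly
# narrower slice of the previous one, and the losses compound.
```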
 
AI needs a mommy and daddy to tell it, "Don't listen to that guy; he's a drug addict who flunked out of elementary school."
 
As an aside: the term "hallucinations" is a clever techno-marketing term to anthropomorphize errors.
Actually, it's rather the opposite: in humans we'd call that imagination, but in LLMs we term it hallucination. Really, the model just produces a response that isn't in line with the user's aims, expressed or otherwise.
 
I don't use it that much, but it seems that at least the DALL-E image generation I sometimes use for work has gotten worse.

Which makes sense since it's starting to use AI output as training input.

I'm surprised you're still using DALL-E. I thought whatever was left of it got incorporated into ChatGPT, and there are better models for image generation now, like Nano Banana (Google) or Midjourney. Does someone else decide what tool you use, or is it some kind of cost issue?
 
I'm surprised you're still using DALL-E. I thought whatever was left of it got incorporated into ChatGPT, and there are better models for image generation now, like Nano Banana (Google) or Midjourney. Does someone else decide what tool you use, or is it some kind of cost issue?
Nano Banana? Is that some kind of slang for having a tiny penis? Because that's what all you AI nerds have got.
 
My nephew had a good quote about AI. "It was funny a few years ago, now it's scary."

He was talking about AI videos. I believe what he said sums it up.
 