There are real problems, but there's also a lot of very different technology being lumped together under "AI."
An LLM with huge processing and high-bandwidth memory demands is not the same technology as Google's object and audio detection running on tensor processors with minimal processing and memory demands. Both employ machine learning, but they're doing very different things and will lead to different breakthroughs.
I think LLMs are mostly hype. They are nearing the point where they've consumed all the human-generated content they could possibly train on, and they still can't write code or fiction worth publishing without a human babysitter to steer them. They're better at writing reference material, but they hallucinate "facts" all the time. I don't see a future where you ask an AI to create a game engine and it just bangs one out in 5 minutes the way people imagine.
I think image, video, and 3D model generation have a lot of potential over time.
So far the highest-stakes application I've seen is the X-62A AI test aircraft. It's an AI "pilot" that has flown simulated dogfights against a USAF pilot, and the Air Force trusted it enough to fly Secretary Kendall. That's creeping up on some Skynet territory.