Tech: Huawei claims its new AI chip is comparable to NVIDIA's H100 for massive learning models and AI processing



China's Huawei says its new Ascend 910C chip is a chiplet design. There's no word yet on the process node, likely 7nm or potentially 5nm, but they claim it's just as fast as the getting-long-in-the-tooth H100 from Nvidia.


 
I suspect it's not as good as the H100. In fact, I don't know if that would be possible, since they were generations behind.

This kind of thing is why the CHIPS Act is such an important and underappreciated piece of legislation.

I'm not sold on the idea that LLMs will lead to true AI or that AI is a silver bullet for every problem in the world. But already we're seeing potentially game-changing benefits to leading the way on AI tech. As a nation we cannot surrender the early lead. We should be looking to widen the gap.
 
AI is such a weird thing to follow. On one hand you have people telling you that AI is hot air and that it will never cross the efficient-compute frontier. And on the other you have people like Dario Amodei saying that if the scaling hypothesis turns out to be true, then a $100 billion AI model will have the intelligence of a Nobel Prize winner.

I tend to lean bullish on AI. There's quite a bit of consensus that even the developers don't fully understand the mechanisms behind machine learning, which leads me to believe there's a long way to go. I know there are quite a few people in the forum who don't like AI as it relates to gaming, but I'm not one of them. I have a much more positive view of the overall advancements it'll bring to gaming.
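
For anyone who hasn't dug into it, the "scaling hypothesis" is basically the claim that model quality keeps improving as a smooth power law as you scale parameters and compute. Here's a toy sketch of that shape; the constants are made-up placeholders for illustration, not fitted values from any real model:

```python
# Toy illustration of the scaling-hypothesis shape: loss falls as a power law
# in parameter count, approaching some irreducible floor. The constants below
# are invented placeholders, not measurements from any actual model.

def predicted_loss(num_params: float, a: float = 10.0, alpha: float = 0.3,
                   floor: float = 1.5) -> float:
    """Hypothetical power law: loss = floor + a / num_params**alpha."""
    return floor + a / (num_params ** alpha)

for n in [1e9, 1e10, 1e11, 1e12]:  # 1B up to 1T parameters
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.4f}")
```

The bullish bet is that the curve keeps holding as you pour in more compute; the bearish bet is that it flattens out, or the floor sits higher than hoped, well before "Nobel Prize winner" territory.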
 

There are real problems, and there is also a lot of very different stuff being lumped together under "AI".

An LLM with huge processing and high-bandwidth-memory demands is not the same technology as Google's object and audio detection running on tensor processors with minimal processing and memory demands. They both employ machine learning, but they're doing very different things and will lead to different breakthroughs.
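
To put rough numbers on that gap (these parameter counts are ballpark assumptions for "a big LLM" versus "a small on-device detector", not specs for any particular product):

```python
# Back-of-the-envelope comparison of weight memory alone. The parameter counts
# are assumed ballpark figures, not published specs for any specific model.

def weights_gb(num_params: float, bytes_per_param: float) -> float:
    """Gigabytes needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

llm_params = 70e9       # assume a 70B-parameter LLM stored in fp16 (2 bytes/param)
detector_params = 5e6   # assume a ~5M-parameter mobile vision model in int8 (1 byte/param)

print(f"LLM weights:      {weights_gb(llm_params, 2):>8.1f} GB")      # ~140 GB
print(f"Detector weights: {weights_gb(detector_params, 1):>8.3f} GB")  # ~0.005 GB
```

One of those needs racks of accelerators with high-bandwidth memory just to hold the weights; the other fits comfortably on a phone.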

I think LLMs are mostly hype. They are nearing the point where they've consumed all the human-generated content they could possibly train on, and they're still unable to write code or fiction worth publishing without a human babysitter to steer them. They're better at writing reference material, but they hallucinate "facts" all the time. I don't see a future where you ask an AI to create a game engine and it just bangs one out in 5 minutes like people imagine.

I think image, video, and 3D model generation have a lot of potential over time.

So far the most high-stakes application I've seen is the X-62A AI test aircraft. It's an AI "pilot" that has flown simulated dogfights against a USAF pilot, and they trust it enough to fly Air Force Secretary Kendall. That's creeping up on some Skynet territory.
 
I'm not sure I would call LLMs hype; they offer substantial value in several practical applications. LLMs like Claude have proven beneficial in coding tasks. They're certainly not flawless, but they're helping many companies by providing assistance and generating code snippets. I think with new training techniques and refined architectures, we'll continue to see advancements.

You make a good point that the data is running out and LLMs will need to turn to synthetic data, which is a bit frightening considering that LLMs like to hallucinate their facts. Not to mention, the power consumption of large-scale AI models is an important hurdle that needs to be addressed.

As far as gaming goes, I think there's huge potential in helping developers create stronger narratives through more dynamic and varied dialogue. This would show up as more adaptive storytelling based on the player's choices. I'm also hoping AI helps with enemy behavior by making enemies smarter and having them learn from the player's habits. There's a lot of potential there.
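
Just to make the dialogue idea concrete, here's the kind of loop I'm imagining; `generate_text` is a hypothetical stand-in for whatever LLM a studio would actually call, and the game-state fields are invented for the example:

```python
# Sketch of adaptive NPC dialogue conditioned on the player's choices.
# `generate_text` is a hypothetical placeholder for a real LLM call; the
# game-state fields are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class GameState:
    player_reputation: str   # e.g. "feared", "beloved", "unknown"
    last_major_choice: str   # e.g. "spared the bandit leader"
    location: str

def generate_text(prompt: str) -> str:
    """Placeholder for an LLM call (local model or hosted API in a real game)."""
    return "(model-generated line)"

def npc_line(npc_role: str, state: GameState) -> str:
    # Fold the current game state into the prompt so the reply adapts to it.
    prompt = (
        f"You are a {npc_role} in {state.location}. "
        f"The player is {state.player_reputation} here and recently "
        f"{state.last_major_choice}. Reply with one in-character line of dialogue."
    )
    return generate_text(prompt)

state = GameState("feared", "spared the bandit leader", "the river town")
print(npc_line("innkeeper", state))
```

Whether that beats well-written static dialogue is a separate question, but it's the sort of thing studios could actually experiment with.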
 
A lot of potential that will most likely be squandered if the modern video game industry's track record holds, i.e. no amount of LLMs and improved dialogue would make Ubisoft stop being lazy with game design, etc. Not to mention there's next to no data on whether there's organic interest in these kinds of LLM use cases, since end users pay below-market rates while providers pay above-market rates. The pinch is going to be nasty.
 
Absolutely, I agree. One potential drawback of integrating AI into games is the risk of developers becoming complacent and over-relying on the technology. However, I'm optimistic that leading studios like SSM, Remedy, Rockstar, etc. will use AI to genuinely enhance their games, with talented people making thoughtful and creative decisions.

Not sure how they fix the cost structure, tbh. I pay for ChatGPT but have considered moving to Perplexity.
 
They're certainly not flawless, but they're helping many companies by providing assistance and generating code snippets. I think with new training techniques and refined architectures, we'll continue to see advancements.

Sure, it'll be a helpful tool to assist an actual programmer.

For fiction, it could be a helpful tool for an author.

I can see applications for auto-responding to emails and for summarization. I think Google Search's AI summary function works pretty damned well.

What I doubt we'll see from an LLM like Claude or GPT-5 in the short term is logic, long-term memory, and foresight.

It knows what words are statistically likely to come up in a mystery novel, but it cannot conceive of a plot twist and foreshadow a conclusion in the opening chapter, because when it starts writing the first word, it has no idea what it's going to say next.

And that's likely not a problem you can compute your way out of or throw memory at. It's probably inherent in the technology.
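
To make that concrete, here's what autoregressive generation boils down to, stripped to a toy: pick a likely next word given only what's already written, one step at a time, with no plan for where the text ends up. The tiny hand-written bigram table is purely illustrative; real LLMs learn far richer distributions, but the loop has the same shape:

```python
# Toy autoregressive generation. The "model" is a hand-written table of
# next-word probabilities; real LLMs learn these from data, but the
# generation loop is the same: one token at a time, no lookahead.

import random

bigram_probs = {
    "the":       {"detective": 0.5, "butler": 0.5},
    "detective": {"suspected": 1.0},
    "butler":    {"confessed": 1.0},
    "suspected": {"the": 1.0},
    "confessed": {"quietly": 1.0},
}

def next_word(prev: str) -> str:
    dist = bigram_probs.get(prev)
    if not dist:
        return "."   # nothing learned for this context: just stop
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

text = ["the"]
for _ in range(6):   # each step sees only what's been written so far
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Nothing in that loop ever looks ahead to plant a twist in chapter one that pays off in chapter twenty; anything that feels like foresight has to be smuggled in through the probabilities themselves.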
 
It's the Democrats behind the CHIPS Act, and it's a smoking pile of money to plow into Democrats' pockets, lol, just joking. It's just that the right gets triggered hard over tech.
 
