Official AMD "Ryzen" CPU Discussion

AMD-Threadripper-Whitehaven-wccftech-watermarked-image-840x473.jpg


AMD is already getting ready to show its next-generation high-end chip. Get this: it has a 4094-pin socket and 16 cores. But the real winner is that it has twice as much L3 cache as the Ryzen 1800/1700 series.

They are shooting for 12, 16, and 32 cores across the high-end enthusiast and server markets. But here is the part that kind of bothers me: the enthusiast chip is not pin-compatible with the 4094-pin server chip.

AMD, of course, stated that the server chip has features that are not needed on the high-end enthusiast chip.

AMD-X399-Chipset.jpg


It's going to have quad-channel DDR4.

WCCFTECH comparison:

                 Whitehaven (Threadripper)   Summit Ridge (Ryzen)
Cores            Up to 16                    Up to 8
Threads          Up to 32                    Up to 16
Base Clock       3.1 GHz                     3.6 GHz
Boost Clock      3.6 GHz                     4.0 GHz
L3 Cache         32 MB                       16 MB
TDP              Up to 180 W                 Up to 95 W
DDR4 Channels    Quad                        Dual
Socket           S3 (LGA)                    AM4 (PGA)
Launch           Mid 2017                    Q1 2017

http://wccftech.com/amd-ryzen-16-core-threadripper-whitehaven-4094-socket/

In other news, Intel is reworking its lineup to stay ahead of AMD, or at least to deliver improvements across its CPU lines.

"
Intel Skylake X Core i9-7920X, Core i9-7900X, Core i9-7820X, Core i9-7800X Mega-Tasking CPUs Leaked – Kaby Lake X Core i7-7740K and Core i7-7640K To Be Entry Level LGA 2066 HEDT Chips"

http://wccftech.com/intel-skylake-x-core-i9-7920x-7900x-7820x-7800x-x299-leaked/

jefferz, one of those god-awful Acer monitors LOL.

Intel-Skylake-X-Core-i9-7920X-Core-i9-7900X-Core-i9-7820X-Core-i9-7800X-Processors.jpg


"Starting with the flagship, we have the Core i9-7920X processor. This chip is a juggernaut featuring a total of 12 cores and 24 threads. The total cache on this behemoth is 16.5 MB (L3). Although lower than whole cache featured on previous HEDT processors, the new cache runs more efficiently and reduces chip size and cost while delivering better performance. The chip will ship with a TDP of 160W."
Diminishing returns on parallel processing. 16 threads is stupid.

If AMD wants to double down on throwing cores at the wall, then they should probably adopt the big.LITTLE strategy that ARM smartphone chipset makers like Qualcomm have been employing.
 


They are mimicking the same design language as Intel: more cores and a wider bus. No doubt current applications really do not benefit from having so many cores and threads. Still, a larger cache can improve performance, and it keeps the pressure on Intel to improve its designs.
 
With everything still locked in at four cores, I just don't see the point of a 16-core processor for the gaming or editing markets. That's server hardware.

I think it would make so much more sense to use four "big" cores set to the highest frequency possible (on that Wraith Max cooler) with two smaller cores attached to each one set to a lower frequency that can kick in when parallel processes are available to offload. Don't worry about hyperthreading; worry about achieving higher frequencies with your architecture, and then let the little cores handle the rest. Those additional cores are NEVER exhausted. Ever. When have you ever seen a 5th core or a 7th core operating above 60% outside a benchmark?

The big-little configuration just makes so much damn sense.
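For anyone curious what that split looks like in practice, here is a minimal Python sketch of pinning work to a big/little layout on Linux. The core numbering (0-3 as the "big" cores, 4-11 as the "little" ones) is purely hypothetical, and os.sched_setaffinity only exists on Linux, so take it as an illustration rather than a recipe:

# Minimal sketch of a big.LITTLE-style split, assuming a hypothetical layout
# where logical cores 0-3 are the fast "big" cores and 4-11 are the slower
# "little" cores. os.sched_setaffinity is Linux-only, hence the hasattr guard.
import os
from multiprocessing import Process

BIG_CORES = {0, 1, 2, 3}                    # latency-sensitive work stays here
LITTLE_CORES = {4, 5, 6, 7, 8, 9, 10, 11}   # parallel offload goes here

def pin_current_process(cores):
    """Restrict the calling process to the given set of logical CPUs."""
    available = set(range(os.cpu_count() or 1))
    cores = cores & available               # only pin to cores that exist
    if cores and hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cores)      # pid 0 = this process

def background_work(n):
    """A parallel-friendly job that can live on the little cores."""
    pin_current_process(LITTLE_CORES)
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pin_current_process(BIG_CORES)          # keep the main thread on big cores
    workers = [Process(target=background_work, args=(10**6,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

The OS scheduler would normally handle this on its own, but the idea is the same: the hot path owns the fast cores and the embarrassingly parallel leftovers spill onto the slow ones.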
 

Nvidia plans on proving that more cores, at least on the GPU side, equal improved gaming and cinematic-quality rendering immersion.

Experiences that drive extremely high-quality images and emotions that could not easily be achieved with hardware in its current state.

Just because current games are not using more than 4 cores does not mean future games will not be optimized to use more than 4. Thanks to HBM2 memory, companies like AMD and Nvidia will soon be able to build 20-to-30-plus-teraflop desktop supercomputers.

With improvements in VR and AR on the horizon, people will feel more a part of an electronically created experience than ever.
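Just to put those teraflop numbers in perspective, peak FP32 throughput is roughly shader cores × 2 FLOPs per cycle (fused multiply-add) × clock speed. A quick back-of-the-envelope sketch in Python, with made-up round numbers rather than the specs of any announced HBM2 card:

# Back-of-the-envelope peak FP32 throughput: shaders * 2 FLOPs/cycle (FMA) * clock.
# The shader counts and clocks below are hypothetical round numbers, not the
# specs of any announced HBM2 card.
def peak_tflops(shader_cores, clock_ghz, flops_per_cycle=2):
    return shader_cores * flops_per_cycle * clock_ghz * 1e9 / 1e12

single_gpu = peak_tflops(4096, 1.5)   # ~12.3 TFLOPS for one hypothetical GPU
dual_gpu = 2 * single_gpu             # ~24.6 TFLOPS for a pair of them
print(f"{single_gpu:.1f} TFLOPS per GPU, {dual_gpu:.1f} TFLOPS for two")

Two hypothetical cards like that land right in that 20-to-30-teraflop range.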
 
Current games already ARE using more than 4 cores, but not well, and that's because of Amdahl's Law.

Don't sell me the same future you've been selling me for 10 years while getting your ass kicked in the present.
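For reference, Amdahl's Law puts a hard ceiling on this: speedup = 1 / ((1 - p) + p/n), where p is the fraction of the work that can actually run in parallel and n is the core count. A quick Python sketch shows how fast the returns fall off:

# Amdahl's Law: speedup = 1 / ((1 - p) + p / n),
# p = parallel fraction of the workload, n = number of cores.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.50, 0.75, 0.90):
    for n in (4, 8, 16):
        print(f"p={p:.2f}, cores={n:2d} -> speedup {amdahl_speedup(p, n):.2f}x")

With a 75% parallel workload, going from 8 to 16 cores only moves you from about 2.9x to 3.4x, which is exactly why piling on cores looks so underwhelming in games.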
AMD could be closer to a Zen + Vega APU than anyone expected. It could use a combination of HBM2 and DDR4 memory.

ASUS is promoting a new laptop that hints at Zen.

https://www.google.com/amp/www.pcwo...es-the-worlds-first-amd-ryzen-laptop.amp.html


How could they be "closer than anyone expected"? Everyone has been expecting Summer or Fall this year.
 

There is a ton of work being done to make physically based rendering a reality using a distributed workflow model.

In this video, from three years ago, they are using 360 cores to produce this model. Companies like Nvidia and Intel would like to develop systems and software that achieve the same effect with fewer resources, though still well beyond what is available today. There is a lot of cheating in game programming to achieve nice effects, using maps that have already been designed around the limited computing resources of current machines. Application code is being developed to reduce the workload required to optimize a game and improve the environmental shading of your characters and asset stores. Companies like Unity are building in extensions to let programmers access these systems from within their games.
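As a rough illustration of that distributed workflow idea, here is a minimal Python sketch that splits a frame into tiles and farms them out to a pool of worker processes, one per available core. The shade() function is just a cheap stand-in for a real physically based shader, not anything taken from the video:

# Minimal sketch of tile-based distributed rendering: split the frame into
# tiles and hand each tile to a worker process. shade() is a placeholder,
# not a real physically based shader.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 256, 256, 64

def shade(x, y):
    # Cheap deterministic per-pixel value standing in for real shading work.
    return ((x * 31 + y * 17) % 256) / 255.0

def render_tile(origin):
    ox, oy = origin
    return [(ox + dx, oy + dy, shade(ox + dx, oy + dy))
            for dy in range(TILE) for dx in range(TILE)]

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    with Pool() as pool:                    # one worker per available core
        results = pool.map(render_tile, tiles)
    print(f"rendered {sum(len(t) for t in results)} pixels across {len(tiles)} tiles")

Scale the same pattern across machines instead of processes and you are at the 360-core setup from the video.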



Another example using light fields.

 
It looks like AMD has made a good improvement for Ryzen AM4 motherboards without requiring a new motherboard: they are finally allowing memory overclocking and support for faster memory.

This comes as just a software patch and should reduce the performance gap with the Intel i7-7700 that is constantly used to beat up Ryzen's overall gaming performance, even though the Intel i7-6950X's overall performance is only slightly better than an overclocked Ryzen 1700. It looks like AMD has finally put an end to this big problem.

http://www.pcworld.com/article/3198...d-software-cures-ryzens-memory-headaches.html
 

If you think this is going to close the gap, I've got some ocean front property in Arizona for cheap that I think you'd be interested in.
 
Rise of the Tomb Raider released a Ryzen performance patch.
Someone run the benchmark at 1080p maxed and report back please @Slobodan @TSO @PEB
 