Tech Gaming Hardware discussion (& Hardware Sales) thread

Needed a new controller for my PC because I'm tired of having to reconnect my DS4 to my PS4, so I picked up the Xbox One Elite. I'm not sure it's really worth the price, but it is a pretty nice controller. I need to find a game that actually might make use of the paddles. For now I just took them off (I am pretty impressed with how easy it is to swap out the sticks, paddles, and dpad).
It's honestly not bad; I've only ever needed to use 2 paddles. The price is hefty though. Love the build quality and interchangeable parts.

But compared to a SCUF it's not up to par, imo.
 
AMD killin' it. I'm set for a while with my R7 2700 and Vega 64 though.

Anyone order the Glorious Odin? It's been two weeks and not even a tracking number yet.
 
NVIDIA certifies another 16 gaming monitors as 'G-Sync Compatible'

This means there are now 28 total "G-Sync Compatible" FreeSync monitors:
https://www.nvidia.com/en-us/geforce/products/g-sync-monitors/specs/
 
it's times like this that really make me wish i didn't just build my pc in december.
 
it's times like this that really make me wish i didn't just build my pc in december.
That was a great time to build. Great prices. No need to dwell on X570 and PCIe 4.0; the compatibility caveats around that announcement show that even AMD can't deliver these gains while also keeping one socket compatible across 3+ generations. If you built in December 2016, or were forced to upgrade your GPU in mid-2017... I'd feel for you.

Yep. Did you see this from three days ago?
AMD Ryzen 3000 CPUs Can Overclock To 5 GHz Single & 4.5 GHz All Core OC at 1.35V, 5 GHz Special Variant In The Works For 2020
WCCFTech said:
The overclocking details come straight from Chiphell where forum member ‘One Month’, the same guy responsible for showing us the first look at the Ryzen 3000 APUs has given us the overclocking details for Ryzen 3000 CPUs. The new details are rounded up over at Reddit so let’s take a look at them before heading into details.
  • 4.8 GHz is achievable on all cores
  • ~4.4 GHz performs similar to a 5 GHz 9900K – in Cinebench
  • 5.0 GHz is doable, but it's a challenge
  • Overclock for overclock, Ryzen 3000 is still faster
  • 5 GHz boost isn't infeasible
  • 5 GHz all core is pretty much a no-go
  • 1.35V for all-core 4.5 GHz
  • Memory is being run very loose and slow to ensure stability for testing
This guy delids, but I don't think he is cooling with LN2.
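
Just to put that Cinebench bullet in perspective, here's a rough back-of-the-envelope check (a sketch, not a measurement) of what "~4.4 GHz performs similar to a 5 GHz 9900K" would imply about per-clock performance, assuming single-thread Cinebench scales roughly linearly with clock:

# Rough check of the leaked "~4.4 GHz Ryzen 3000 ~= 5.0 GHz 9900K" Cinebench claim.
# Assumes single-thread score scales roughly linearly with clock speed (a simplification).
ryzen_clock_ghz = 4.4    # leaked Ryzen 3000 clock from the Chiphell post
intel_clock_ghz = 5.0    # overclocked 9900K it was compared against
implied_per_clock_edge = intel_clock_ghz / ryzen_clock_ghz - 1
print(f"Implied per-clock advantage for Zen 2: ~{implied_per_clock_edge:.0%}")  # ~14%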

This reinforces the suspicion that 5 GHz almost certainly won't be the peak single-core turbo boost even with the 16-core flagship once it's unveiled. Nevertheless, hitting 4.5 GHz across all cores at 1.35V is thrilling, and a factory-stock 4.7 GHz peak turbo does seem realistic given these figures. It's really tough to tell.

I love the video Greg Salazar (a.k.a. Science Studios) made where he questioned whether we are yet another generation away from AMD cannibalizing Intel at the top. After all, even if this thing utterly destroys the 9900K in Cinebench, as it obviously will, if it still comes in second to the 9900K and 9700K in gaming benchmarks, because games are still heavily quad-core oriented, even if only by a few percent, will those buyers really want it? Can that decision be justified?

After all, the 9700K and 9900K are cheaper right now than the MSRP of the 12-core chip, which has a factory 4.6 GHz peak turbo. It's still all about those single-thread scores.
 

I have a G-Sync compatible monitor, Acer XG-270HU, and I have issues with G-Sync.
When my side monitors (non-G-Sync panels) are hooked up, the XG-270HU randomly flickers.
One driver update makes it so I don't get as many flickers, then the next makes it horrible.
 
That was a great time to build. Great prices. No need to dwell on X570 and PCIe 4.0; the compatibility caveats around that announcement show that even AMD can't deliver these gains while also keeping one socket compatible across 3+ generations. If you built in December 2016, or were forced to upgrade your GPU in mid-2017... I'd feel for you.

oh, i know. plus, i had waited about as long as possible and built a pretty great system (esp for the cost).

but all these new CPU/GPU/mobo/SSDs look fun to play with. not that i'd want to pay the premium for doing so.
 
All AMD Ryzen 3000 CPUs Will Feature Soldered IHS, Expect Impressive Thermals & Great Overclocking Capability
AMD has confirmed that all of their upcoming 3rd Gen Ryzen 3000 series processors featuring the Zen 2 core architecture will feature a soldered design. Since the launch of their 1st generation, the Ryzen series has been utilizing a soldered design which helps deliver better thermal results when compared to traditional TIM application.

AMD Ryzen 3000 Series CPUs To Utilize Soldered IHS With Gold Plating To Deliver Better Thermals & Overclocking Results
AMD has so far used solder on all of their Ryzen CPUs and Ryzen Threadripper CPUs, and the upcoming Ryzen ‘Picasso’ APUs are also confirmed to feature a soldered IHS. Continuing the tradition, AMD will also feature a soldered IHS on their upcoming Ryzen 3000 series processors, which are based on the Zen 2 core and supported by the X570 platform. This was confirmed by AMD’s Senior Technical Marketing Manager, Robert Hallock.
Wonderful confirmation: no shortcuts to deliver those prices.

I took the time to look over the Single Core scores in more depth for the stock-frequency R5-3600 from that leak @PEB posted above, as covered by WCCFTech. They also covered a few other leaked benchmarks in addition to the Chiphell forum one (Geekbench and UserBenchmark). The WCCFTech headline covering that leak, insisting it is crushing Coffee Lake in IPC, is garbage clickbait. I don't know where they are pulling the "average" 8700K single-core score of 5,400. Being more objective:
AMD Ryzen 5 3600 6 Core CPU Benchmarks Leak Out, Knocks Out Intel’s i7-8700K At Price/Perf – Higher IPC Than Intel Coffee Lake, 6% Faster in PUBG


Here is the leaderboard:
https://browser.geekbench.com/processor-benchmarks


R5-3600 is 3.6 GHz base clock with 4.2 GHz turbo.
i7-6700K is 4.0 GHz base clock with 4.2 GHz turbo.

The latter would seem to be the near-perfect analogue, and the R5-3600 is beating it, but only by about 1%, and that is the Skylake i7, which is 2 1/2 generations back (i7-7700K = Kaby Lake, i7-8700K = Coffee Lake, i7-9700K = Coffee Lake refresh). It's still a massive gain, considering the R7-2700X scores a 4,802, the R7-1800X scores a 4,275, and those are the flagships from the respective 2nd/1st gens, not the baseline R5. Comparing against its predecessors in the same line, the R5-2600 scores a 4,365 and the R5-1600 scores a 3,928. Granted, they have lower turbo frequencies, but ignoring IPC and looking at practical real-world improvement, that is an astonishing leap forward. That's a gain of 23% and 37%, respectively, in terms of factory-clocked single-core performance.
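
For anyone who wants to check my math, here's the arithmetic behind those gain figures. The R5-3600's own Geekbench score was only shown in the screenshot, so the ~5,370 used below is an assumption on my part (it's roughly what the stated 23%/37% gains work out to):

# Geekbench 4 single-core scores from the leaderboard (previous-gen Ryzen, stock)
scores = {"R5-1600": 3928, "R5-2600": 4365, "R7-1800X": 4275, "R7-2700X": 4802}

# Assumed leaked R5-3600 score (~5370); the exact figure was only in the screenshot.
r5_3600 = 5370

for name in ("R5-2600", "R5-1600"):
    gain = r5_3600 / scores[name] - 1
    print(f"R5-3600 vs {name}: +{gain:.0%}")  # ~+23% and ~+37%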

They really went full shill with the analysis of the IPC from UserBenchmark; a user determined the R5-3600 is 7% superior to the i7-7700K in IPC, and from this they conclude it will also beat Coffee Lake and Coffee Lake R in IPC because those were built "on the same architecture".

https://cpu.userbenchmark.com/Compare/AMD-Ryzen-5-3600-vs-Intel-Core-i7-7700K/m810675vs3647
I don't know why they're working so hard to complicate that analysis, but from those two benchmarks, it looks like we can expect the R5-3600 to average over 130 in UserBenchmark's single-core test once it's running at its proper 4.2 GHz turbo.

Cliffs: Don't expect the R9-3800X to practically outperform the i7-9700K or i9-9900K in games. IPC aside, it will be just slightly inferior in terms of single-core performance. I expect it to trail by 3%-5%.
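
To show where that 3%-5% estimate comes from, here's a rough sketch using the ~4.7 GHz peak turbo speculated earlier in the thread, the 9900K's stock 5.0 GHz single-core boost, and a small hypothetical per-clock edge for Zen 2; all of these are assumptions, not measurements:

# Rough projection of the single-core gap (assumed clocks and IPC edge, not measurements)
zen2_boost_ghz = 4.7      # speculative Ryzen 3000 peak turbo discussed earlier in the thread
i9_9900k_boost_ghz = 5.0  # Intel's stock single-core boost
zen2_ipc_edge = 0.02      # hypothetical small per-clock advantage for Zen 2

relative_perf = (zen2_boost_ghz / i9_9900k_boost_ghz) * (1 + zen2_ipc_edge)
print(f"Projected single-core deficit vs the 9900K: ~{1 - relative_perf:.0%}")  # ~4%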
 
Just bought an LG 34” IPS LED UltraWide FHD FreeSync Monitor with HDR for $299. I love it.
 
i'm more interested in the actual overall performance/ranges of performance from the CPUs, i.e. how they handle typical/realistic hard usage/OC vs being set up with ideal conditions for benchmarks. high benchmarks might not mean a whole lot when you have to disable hyperthreading/etc.
 
i'm more interested in the actual overall performance/ranges of performance from the CPUs, i.e. how they handle typical/realistic hard usage/OC vs being set up with ideal conditions for benchmarks. high benchmarks might not mean a whole lot when you have to disable hyperthreading/etc.
The entire point of post #710 above is to predict this as accurately as possible. Single Core scores are the most useful indicator for real-world performance.
 


There are new rumors that the 20X0 series is going to be getting a huge price cut. The new updated prices are $249 for the 2060, $399 for the 2070, and $599 for the 2080, and the Super editions are updated 2070 and 2080 cards to be called the RTX 2070 Super and RTX 2080 Super. This is supposedly in response to AMD's new GPU offerings and the amount of negative press the RTX series is getting. It may not hurt that Taiwan Semi has made a new discovery that makes the chip production process cheaper.

https://www.guru3d.com/news-story/r...force-rtx-2060-(249)2070-(399)2080-(599).html
 


There are new rumors that the 20X0 series is going to be getting a huge price cut. The new updated prices are $249 for the 2060, $399 for the 2070, and $599 for the 2080, and the Super editions are updated 2070 and 2080 cards to be called the RTX 2070 Super and RTX 2080 Super. This is supposedly in response to AMD's new GPU offerings and the amount of negative press the RTX series is getting. It may not hurt that Taiwan Semi has made a new discovery that makes the chip production process cheaper.

https://www.guru3d.com/news-story/r...force-rtx-2060-(249)2070-(399)2080-(599).html


They need to lower the price of the GTX cards as well.
 
They need to lower the price of the GTX cards as well.

They may, but they could end-of-life them instead and focus on the RTX line, from the lower end up to the Super cards for high-end gaming. Some people online are saying the Super is just to give them an excuse to lower the prices of the RTX line while not admitting they are concerned about AMD, and I would add Intel.
 
The entire point of post #710 above is to predict this as accurately as possible. Single Core scores are the most useful indicator for real-world performance.
That varies GREATLY by application. Sure, lots of programs today still mostly run on 1-2 cores, as do most games.

But as you know, pure performance is hard to judge in the abstract without knowing the intended usage.

What if the gamer has 40 Chrome tabs open at all times on their other screen? What if they are streaming from the same PC?

What if they are actually using work applications that will use as many cores as possible?

So many "what ifs" that it's hard to just make a blanket statement like that TODAY.
 
That varies GREATLY by application. Sure, lots of programs today still mostly run on 1-2 cores, as do most games.

But as you know, pure performance is hard to judge in the abstract without knowing the intended usage.

What if the gamer has 40 Chrome tabs open at all times on their other screen? What if they are streaming from the same PC?

What if they are actually using work applications that will use as many cores as possible?

So many "what ifs" that it's hard to just make a blanket statement like that TODAY.
Everything varies by application. The general truths I have outlined do not.

40 Chrome tabs will have almost no impact on CPU load. That is primarily memory intensive. Beyond that, my commentary quite specifically restricted itself to game performance. Not streaming, not intensive multitasking while gaming.
 
Everything varies by application. The general truths I have outlined do not.

40 Chrome tabs will have almost no impact on CPU load. That is primarily memory intensive. Beyond that, my commentary quite specifically restricted itself to game performance. Not streaming, not intensive multitasking while gaming.
Sorry, you specifically said “real world performance” in the message I quoted.

Real world performance to many means just that: performance across multiple applications, not just gaming.

But while we are talking gaming, streaming is becoming a larger and larger part of gaming these days.

Which will have a larger impact on performance with fewer cores.
 
Sorry, you specifically said “real world performance” in the message I quoted.
You've been around long enough that you should understand that refers to performance in applications outside of benchmarks.
Real world performance to many means just that: performance across multiple applications, not just gaming.

But while we are talking gaming, streaming is becoming a larger and larger part of gaming these days.

Which will have a larger impact on performance with fewer cores.
And this performance still favors Intel despite fewer cores. One has to bend over backwards to devise environments that favor AMD. No, streaming is not that big. The number of gamers who stream is a tiny fraction of the overall gaming base.
 