Comments by "Winnetou17" (@Winnetou17) on "" video.
-
For GN: if you have the time and curiosity... since you can disable HT per individual core, and, if I'm not mistaken, each individual core can also be disabled completely, could you test a handful of games to see if using these options noticeably increases performance (theoretically each game will differ from the others by A LOT) or decreases power & temps? For example, in GTA 5 the CPU could be set to only have 8 cores, all without HT. I realize this game is a pretty bad example, since it has that stupid 187 FPS limit, but you get the idea. Disabling HT should improve performance because of less resource contention, while disabling a core completely should help with power draw & temps and maybe latency?
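(For anyone who wants to poke at this without rebooting into the BIOS: below is a minimal sketch, assuming a Linux box, that approximates "HT off" by offlining the second thread of each physical core through sysfs. This is just an illustration of the idea, not how GN would test it, and it needs root.)

```python
# Sketch only: emulate "HT/SMT off" on Linux by offlining every sibling thread
# except the first one on each physical core. Run as root. cpu0 typically has
# no 'online' file and cannot be offlined.
from pathlib import Path

CPU_DIR = Path("/sys/devices/system/cpu")

def smt_siblings(cpu: int) -> list[int]:
    """Parse thread_siblings_list (e.g. '0,8' or '0-1') into a list of CPU ids."""
    path = CPU_DIR / f"cpu{cpu}/topology/thread_siblings_list"
    if not path.exists():          # CPU already offline, topology not exposed
        return [cpu]
    ids: list[int] = []
    for part in path.read_text().strip().split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            ids.extend(range(lo, hi + 1))
        else:
            ids.append(int(part))
    return ids

def set_online(cpu: int, online: bool) -> None:
    path = CPU_DIR / f"cpu{cpu}/online"
    if path.exists():
        path.write_text("1" if online else "0")

def disable_smt_siblings() -> None:
    """Keep the lowest-numbered thread of each core online, offline the rest."""
    seen: set[int] = set()
    for entry in sorted(CPU_DIR.glob("cpu[0-9]*")):
        cpu = int(entry.name[3:])
        if cpu in seen:
            continue
        siblings = smt_siblings(cpu)
        seen.update(siblings)
        for extra in siblings[1:]:
            set_online(extra, False)

if __name__ == "__main__":
    disable_smt_siblings()
```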
Related to this, I'm personally very curious how the Skylake architecture/platform/manufacturing improved over all these years. Could you take a 10900K and set it to have the same a) cores & threads, b) power targets, c) clock frequencies, and d) all combinations of the above, against a 6600K and 6700K?
So, for example, a 10900K configured to match the 4/4 and 4/8 cores/threads of the 6600K and 6700K, also with the same frequencies as those CPUs: how much less power does it consume? Does it perform the same? In another test, with the same cores/threads and the same power limits, what frequencies and performance does it achieve? Are those also the same?
As a reminder, the 10900K should have some of the security fixes directly in hardware. I'm not certain whether, right now, the security fixes for a 6700K would be in software or not present at all.
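(To make the test matrix above concrete, here is a tiny sketch of the combinations I mean. The frequency and power numbers are just the stock 6600K/6700K specs I'd start from, not a claim about what GN should actually use.)

```python
# Hypothetical test matrix: a 10900K dialled down to 6600K-like and 6700K-like
# configurations, crossed with matched clocks and matched power limits.
from itertools import product

core_thread_configs = [("4c/4t (6600K-like)", 4, 4),
                       ("4c/8t (6700K-like)", 4, 8)]
frequency_targets_ghz = [3.9, 4.2]   # roughly 6600K / 6700K boost clocks
power_limits_w = [91, None]          # 91 W = Skylake TDP; None = uncapped

for (label, cores, threads), freq, pl in product(core_thread_configs,
                                                 frequency_targets_ghz,
                                                 power_limits_w):
    cap = f"{pl} W cap" if pl is not None else "no power cap"
    print(f"10900K @ {label}, {freq} GHz, {cap}")
```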
-
Regarding the GPU (or anything else) bottleneck... it usually isn't so simple.
Let's say that for a particular game, drawing a frame needs x amount of CPU work and y amount of GPU work. But the GPU work can only start after, say, 20% of the CPU work is done.
So the frame time would then be computed as 0.2 * x + max(0.8 * x, y).
This is, of course, oversimplified. But even so, you can see that if the CPU work is done very quickly (let's say x is 2 ms) compared to the GPU work (let's say y is 20 ms), then this will appear as a GPU bottleneck, because it will take 20.4 ms to finish the frame. And if you double the CPU speed, it will take... 20.2 ms to finish the frame.
That's why, even though in some scenarios the GPU bottleneck is clearly seen, there's still a smaaall difference between the CPUs.
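(Just to check the arithmetic above, here's the toy model written out; x and y are the CPU and GPU work per frame in ms.)

```python
# Toy frame-time model from above: frame_time = 0.2*x + max(0.8*x, y),
# i.e. 20% of the CPU work runs alone, the rest overlaps with the GPU.
def frame_time(cpu_ms: float, gpu_ms: float, serial_fraction: float = 0.2) -> float:
    serial = serial_fraction * cpu_ms
    parallel = max((1 - serial_fraction) * cpu_ms, gpu_ms)
    return serial + parallel

print(frame_time(2.0, 20.0))   # 20.4 ms
print(frame_time(1.0, 20.0))   # 20.2 ms -> doubling CPU speed barely helps
```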
I think we can say that in scenarios where doubling the [whatever] performance nets a gain of less than 1% in total performance, we have a hard [whatever-else] bottleneck.
If the increase is still small but actually visible, like 10%, then we can say there's a soft cap.
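(A tiny sketch of that rule of thumb, using the thresholds from this comment rather than anything official: double one component's speed, look at the frame-time gain, and classify.)

```python
# Classify a bottleneck by how much frame time improves when one component is doubled.
def bottleneck_kind(frame_ms_before: float, frame_ms_after_doubling: float) -> str:
    gain = (frame_ms_before - frame_ms_after_doubling) / frame_ms_before
    if gain < 0.01:
        return "hard bottleneck elsewhere (<1% gain)"
    if gain < 0.10:
        return "soft cap elsewhere (<10% gain)"
    return "this component is a real limiter"

# Doubling the CPU in the example above: 20.4 ms -> 20.2 ms, roughly a 1% gain.
print(bottleneck_kind(20.4, 20.2))
```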