Comments by "Winnetou17" (@Winnetou17) on "" video.
-
There's something that doesn't sit well with me:
- the law assumes that all cores have the same performance characteristics. The Macs have a mix of performance and efficiency cores, so the estimate cannot be correct. It also isn't mentioned whether the single-core result was measured on a performance core (which I assume) or an efficiency core
- why is the estimated improvement for 12 cores 418%, but later the estimated improvement for 10 cores also 418%?
- why is process creation 1900% better? Theoretically it shouldn't be possible to surpass 1100% (11 extra cores). Is it just because there's less context switching? (see the sanity check sketched after this list)
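For reference, here's a minimal way to sanity-check numbers like these against Amdahl's Law; the parallel fractions (p) below are my own guesses for illustration, not figures from the video:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the parallelizable
# fraction of the work and n is the number of cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 12):
    for p in (0.88, 0.90, 1.00):  # assumed values, not taken from the video
        s = amdahl_speedup(p, n)
        print(f"n={n:2d} cores, p={p:.2f}: speedup {s:.2f}x, improvement {100 * (s - 1):.0f}%")

# Even at p = 1.00 (perfectly parallel), 12 cores cap out at a 12x speedup,
# i.e. an 1100% improvement - so 1900% cannot come from Amdahl's Law alone.
```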
Lastly, I have to bring up something that I see many people not mention. Amdahl's Law applies to a single program, more specifically a single algorithm. If you actually have multiple programs, multiple things that have to be computed, those should be basically 99% parallelizable between themselves. Say, playing a game and recording (encoding) a video of it, while also compiling something in the background. Those are 3 main tasks, and going from one CPU core doing all of them to, say, 3 cores (one per program), I'd expect close to the full 3x speedup (assuming there are no bottlenecks at, say, the HDD/SSD level). None of the programs needs to know what the others are doing, so in theory the parallelization is 100% (of course, in practice it varies: a bit more if extra cores alleviate bottlenecks, and less with scheduling overhead and the limits of other hardware like memory and disk bandwidth).
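A rough way to put numbers on that, reusing Amdahl's Law but treating the three programs as one combined workload (the 0.99 parallel fraction is an assumption for illustration, not a measurement):

```python
# Treat game + video encode + compile as one combined workload; if they almost
# never wait on each other, the combined parallel fraction p is close to 1.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.99  # assumed: the three programs share almost no serial work
for n in (1, 2, 3):
    s = amdahl_speedup(p, n)
    print(f"{n} core(s): speedup {s:.2f}x over the single-core baseline")

# 3 cores -> ~2.94x, close to the ideal 3x; the remaining gap is roughly what
# scheduling overhead and memory/disk bandwidth limits would eat in practice.
```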
In this day and age we're not running a single program at a time like in the DOS days. Usually there is one main program, like a browser or a game, but there are plenty of occasions where you run multiple things, like I said above. A browser with many tabs can benefit from more cores on its own, even if each tab only has serial tasks in it (i.e. 0% parallelism achievable). If you also do some coding alongside, there go more cores. And, of course, on something like MS Windows today you can assume a core is permanently dedicated to the OS's background tasks: indexing disk data, checking for updates, collecting and sending telemetry, all kinds of checks and syncs like NTP/Windows Time, scheduled tasks and so on.
In practice, 8 cores is basically plenty for casual workflows (browsing, gaming and office); there is indeed little gain from more cores. In that sense I agree with the final thoughts.
But I fully disagree with the final thoughts on the server comparison. Virtualization is not for performance, quite the opposite. If you need top performance, especially the lowest latency, you have to go bare metal. Virtualization has other great benefits. First, sandboxing: you don't have conflicts with anything else running on that server, so you can have 10 versions of something with no problem, it's easy to control how many resources each one can use, and so on. Second, it gives you an (almost) identical development environment immediately, reducing devops time and especially stupid bugs because some dev runs PHP on Windows and it behaves differently than the same PHP version on Ubuntu. And thinking in this paradigm of small virtual computers makes your application easy to scale (just run more containers). But an application running in a virtual machine or a container will NEVER be faster than the same app, configured the same, on bare metal. The nice thing is that nowadays, in most cases, virtualizing has a negligible impact on performance, while the other benefits are massive. That's why everybody uses it now.