Comments by "Winnetou17" (@Winnetou17) on "DJ Ware"
channel.
-
I would argue that having processes that are tightly coupled is a direct violation of the UNIX philosophy. One of the main takeaways of "do ONE thing and do it WELL" is that the program can be used with other programs. That is, a "business" need can be met by simply running a specifically crafted chain of commands (programs). The "one thing" part helps so that, no matter the context, the program has no side/unwanted effects and doesn't waste resources on something you don't need. If you need something extra, it can be provided by another program or with an option.
However, if your program is tightly coupled with another, well, then you don't have one program that does one thing and does it well; you have two programs masked as one, so essentially one program that does two things.
Not being able to run a program standalone, with all its features, compromises its interoperability and its use for composing a bigger program. Which is exactly what the UNIX philosophy stands for: whatever bigger/more complex program you need, you can build it from small dedicated programs (there's a tiny sketch of this below). With systemd's dedicated programs you can reasonably build a single program: systemd itself.
In conclusion, systemd is mostly a monolith. It gets away with it because it does its job pretty well and covers most of the needs that people have. But that doesn't address the concerns about its future.
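To make the "chain of commands" point concrete, here's a minimal sketch in Python of composing two standalone programs the way a shell pipeline would. The file name words.txt is a made-up example, and it assumes sort and uniq are on the PATH; only the composition pattern matters.

    import subprocess

    # Roughly the same idea as the shell pipeline:  sort words.txt | uniq -c
    # sort and uniq each do one thing; the composition does the "business" need.
    sort_proc = subprocess.Popen(["sort", "words.txt"], stdout=subprocess.PIPE)
    uniq_proc = subprocess.Popen(["uniq", "-c"], stdin=sort_proc.stdout,
                                 stdout=subprocess.PIPE, text=True)
    sort_proc.stdout.close()  # so sort sees a closed pipe if uniq exits early
    output, _ = uniq_proc.communicate()
    print(output)

Neither sort nor uniq knows or cares about the other; that's exactly the interoperability that tight coupling throws away.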
-
I don't understand that part at the start about the cost of delete vs write. In CPU registers, RAM and disk alike (be it solid state or HDD), a delete IS a write. Setting aside the special case of SSDs, which have multiple bits per cell so that writing means rewriting all 4 bits of the cell, the delete vs write distinction doesn't make sense to me. At least, not in the current compute landscape.
And the "store the input so later you can simply switch, instead of delete" sounds like basic caching to me.
For consumer hardware, we are both getting more efficient and not. If you look at smartphones and laptops, it is inarguable that we're getting much more efficient, and in general staying in the same power envelope, though the high-end desktop replacement laptops are indeed more power hungry than what we had 10 years ago.
On the desktop side... if we ignore a couple of generations from Intel (13th and 14th), I'd say the CPUs are getting more efficient while staying at a reasonable power draw, so the same power envelope. Same for RAM and disks. It's the GPUs that are also more efficient but have expanded the power envelope, by quite a lot, at the mid and high end. But I would say that the raw hardware power is impressive.
On the datacenter side, 30,000 tons of coal seems quite little; I expected something like 1 billion tons of coal. Funnily enough, a lot of electricity nowadays is consumed by AI. Feels like creating the problem in order to create the solution to me. The sheer desperation to get the AI upper hand looks like quite a clown show to me. I am expecting more and more regulations on AI, as the data used is still highway robbery in most cases and the energy used is just ludicrous, at least for the current and short-term results, especially in the context of having to use less energy so we can stop putting carbon into the air.
Lastly, on the prebuilt power limits or something similar: I don't know of such a law, neither in the EU nor in Romania where I live. However, I do know that there is one for TVs (and other household electronic appliances, if I'm not mistaken) which actually limits the high-end TVs quite a lot. Which, frankly, is quite stupid to me. If I get an 85" TV, you expect it to consume the same as a 40" one? Not to mention that maybe I'm fully powered by my own solar panels. Who are you to decide that I can't use 200 more watts for my TV? In that theoretical setup, it would generate literally 0 extra carbon. And what's worse, because of this stupid law, people are now incentivised to buy from abroad, which is worse for the energy used (shipping from the other side of the world instead of buying local) and worse for the economy (EU manufacturers cannot compete as well as those in other countries). Anyway, rant off.
-
There's something that doesn't sit well with me:
- the law assumes that all the cores have the same performance characteristics. The Macs have different cores, so the estimate cannot be correct. It also isn't mentioned whether the single-core figure is for a performance core (which I assume) or an efficiency core
- why is the 12-core estimate of improvement 418%, but later the 10-core estimate of improvement also 418%?
- why is process creation 1900% better? Theoretically it shouldn't be possible to surpass 1100% (11 extra cores). Is it just because there's less context switching?
Lastly, I just have to talk about a thing that I see many do not mention. Amdahl's Law applies to a single program, more specifically a single algorithm. If you actually have multiple programs, multiple things that have to be computed, those should be basically 99% parallelizable between themselves. Say, playing a game while recording (encoding) a video of it and also compiling something in the background: these are 3 main tasks, and going from one CPU core doing them all to, say, 3 cores (one for each program), I expect at least a 99% improvement (assuming there are no bottlenecks at, say, the HDD/SSD level). None of the programs needs to know what the others are doing, so in theory it's 100% parallelizable (of course, in practice it can vary: a bit more if extra cores alleviate bottlenecks, and less with the overhead of scheduling and the limitations of other hardware like memory and disk bandwidth).
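For reference, here's a quick sketch of Amdahl's Law as I understand it, S(n) = 1 / ((1 - p) + p / n), where p is the fraction of a single program that can run in parallel and n is the core count. The p values below are just assumptions to show how the formula behaves, not numbers from the video.

    def amdahl_speedup(p: float, n: int) -> float:
        # speedup of ONE program with parallel fraction p on n cores
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.88, 0.95, 1.00):
        s = amdahl_speedup(p, 12)
        print(f"p={p:.2f}: 12 cores -> {s:.2f}x, i.e. {(s - 1) * 100:.0f}% improvement")

Even with p = 1.00, the 12-core improvement tops out at 1100% (12x), which is why a 1900% figure can't come from this formula alone.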
In the current day and age, we're not running things like in DOS times, one program at a time. Usually there's a single main program, like a browser or a game, but there are plenty of occasions where you run multiple things, like I said above. A browser with many tabs can alone benefit from more cores, even if each tab has only serial tasks in it (aka 0% parallelism achievable). If you also do some coding alongside, there go more cores. And, of course, today in something like MS Windows you can assume a core is permanently dedicated to Windows' background tasks: indexing disk data, checking for updates, collecting and sending telemetry, all kinds of checks and syncs like NTP/Windows Time, scheduled tasks and so on.
In practice, 8 cores for casual workflows (browsing, gaming and office) is basically plenty; there is indeed little gain from more cores. In that sense I agree with the final thoughts.
But I fully disagree with the final thoughts on the server comparison. Virtualisation is not for performance, quite the opposite. If you need top performance, especially the lowest latency, you have to go bare metal. Virtualization has several great benefits. Sandboxing: you don't have conflicts with anything else running on that server, so you can have 10 versions of something with no problem, it's easy to control how many resources it can use, and many more. Also, it gives you an (almost) identical development environment immediately, reducing devops time and especially stupid bugs where some dev runs PHP on Windows and it behaves differently than the same PHP version on Ubuntu. Also, thinking in this paradigm of small virtual computers makes your application easy to scale (just run more containers). But an application running in a virtual machine or in a container will NEVER be faster than the same app, configured the same, on bare metal. The nice thing is that nowadays, in most cases, virtualizing has a negligible impact on performance, while the other benefits are massive. That's why everybody is using it now.
-
This was quite interesting. I feel like this can be great for single purpose computers when there's no "other applications". And if you think that you need things like apt, ls etcetc, those can simply be available by rebooting into a traditional kernel. I kind of want to build something like this with Gentoo. Have multiple installs for single purpose, like some game. And the init will simply launch the game, no bash, no login, no desktop env, not even a window manager. Updating the game and other things (like opening up a browser) would have to be done by rebooting. Might seem like overkill, but if you know you don't need the other stuff, and you need the performance ... ain't that neat ? Also, I'm thinking it might be a good way to setup the computer so the kids can play game A and B, but have it locked to do other stuff.
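To be clear about what I mean by "the init simply launches the game", something like the rough sketch below, assuming the kernel is booted with init= pointing at it. The game path is a hypothetical placeholder, and a real setup would also need /proc, /dev and friends mounted if the game wants them; this is an idea, not a tested configuration.

    #!/usr/bin/env python3
    import os
    import subprocess

    GAME = "/usr/local/bin/the-game"  # hypothetical path to the one program we care about

    def main():
        game = subprocess.Popen([GAME])
        # PID 1 also has to reap orphaned children, otherwise zombies pile up.
        while True:
            pid, _status = os.wait()
            if pid == game.pid:
                break
        # The game exited; on a single-purpose box there's nothing left to do.
        subprocess.call(["poweroff", "-f"])

    if __name__ == "__main__":
        main()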
However, the dynamic downloading confused the hell out of me. What does that have to do with exokernels? Can't that dynamic downloading happen very happily on traditional OSes/kernels too?
-
I'm on Gentoo. It's a distro that I can't fully recommend to somebody without knowing that person. In general you'd know best if Gentoo is for you or not.
That being said, I've been on it for almost a year and a half (since Dec 2023-Jan 2024) and have had no significant problems of any sort. No system instability at all.
Gentoo is very good for learning and very good for control and customization. Because of the USE flags, you can customize what an individual program/app/package has or doesn't have, allowing you to enable experimental or esoteric features or remove things you don't want or need. It also allows you to have the binaries optimized for your specific CPU, which can help performance. If you happen to want patches for some programs, you can streamline that with Gentoo, so those programs are updated along with the rest while still having your patches applied.
One thing I have to add... the compile times are really exaggerated as a problem, IMO. The laptop I'm using is almost 9 years old, from 2016. It has a 4-core Intel i7-6700HQ CPU. While it was high-end in 2016, it's now equivalent to a dual-core CPU. It does help that I have 64 GB of RAM. Still, knowing that I don't have a fast system, the only program that's annoying to upgrade/compile is Chromium. The last compiles took about 14 hours when not doing anything else (I just left it running while I went out). Everything else, no exceptions, takes up to 2 hours. Firefox is between 60 and 80 minutes. I'd say that, on average, I have about 1 (one) round of updates per week that takes more than 30 minutes (for everything that's new, not just a single program), and that's while I'm doing something else, like watching YT and commenting (which is pretty lightweight, true).
I'm sure that if I had more Chromium-based browsers, each would take those 14 hours to compile. It's true that I've also been too lazy to dig deeper into ways to speed it up. And I don't have KDE or GNOME, which I know are quite big, so those might add a bit of compile time too.
Still, if you have something low-end or simply don't want to deal with the bigger compiles, there are binary packages. Not for everything, but the browsers and bigger packages in general have a precompiled binary from Gentoo.