YouTube hearted comments of Winnetou17 (@Winnetou17).
-
559
-
130
-
106
-
I think that Apple gets less hate because they're a more "opt-in" ecosystem / playground. That is, the default is Windows, when you have no choice or don't know what to pick. So you'll use it and, in many cases, find things that irk you and some that you'll absolutely hate. But before going to Apple... you usually research it a bit before you choose to buy one. That is, you already have some idea of whether you'd like it or not, and there's a good chance you'll simply not switch to it if there's a possibility of incompatibility, so to speak. Getting back to Windows being the default option - you're rarely forced to use Apple for, say, work.
So, bottom line of the above: when going Apple you usually know what you're getting into, which significantly reduces the number of people frustrated with using it. Those who simply choose not to go Apple might've realized beforehand that what they're doing is simply incompatible (like most gaming). And the rest might've done some research and learned how to do the basic things.
Me personally, I hate Apple more than Microsoft. I don't deny that their engineers and designers usually do a very good job. Most people I know using Apple's products are happy; things work - well, for the things they're using them for. But Apple is so focused on control and on walling the garden as much as possible, so anti-consumer, that I don't care how good their products are.
Microsoft, to be fair, is not that far off. But, I guess, because of their current position, they have a much bigger garden, so closing it is much, much harder. Still, their push towards requiring an online Microsoft account, what they're doing with secure login, and whatever the next thing after secure login is (I forgot), that's also a no-no. I've used Windows from Windows 95 (I used 3.11 a bit too, but that was on old computers in some places) to Windows 10, and I've been a happy Windows 10 user. I know I won't run Windows 11 by personal choice. I might have to for work, but unless I REALLY have to for something specific, I won't install it on any of my personal systems. Even if their bullshit is bypassable.
79
-
10:41: "We've got the input list just here... same elements as before, I think..." "Well, you wrote it, mate!"
=)) What's a Creel, playing both the teacher and the student for our amusement.
Question: for integers, if you know you're big-endian, isn't it easier to simply build the counter array with power-of-two buckets (like 8 or 16)? And then, instead of doing mod 10, simply read the relevant 3-4 bits, put the element in the relevant bucket, and so on. You also have to make sure the sign bit gets its own step. Also, would it help to construct all the count arrays at once, so you only traverse the array forward once? It uses a bit more memory, but arrays of 16 integers... doesn't sound that bad. The worst case for normal integers would be comparing 64-bit signed integers, and that would amount to 64/4 + 1 = 17 count arrays (the extra one is for the sign count array). There's a rough sketch of what I mean below.
Cheers!
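Not from the video - just a rough Python sketch of the idea in the question above, assuming LSD radix sort over signed 64-bit integers with 4-bit digits (16 buckets per count array), all count arrays built in a single forward traversal, and the sign handled as its own final step (the 64/4 + 1 = 17th "pass"). Names and structure are illustrative only.

```python
def radix_sort_i64(values):
    """LSD radix sort for signed 64-bit integers, 4 bits per pass."""
    BITS = 64
    DIGIT = 4
    PASSES = BITS // DIGIT           # 16 digit passes
    MASK = (1 << DIGIT) - 1          # 0b1111

    # Work on the two's-complement bit patterns so shifts behave uniformly.
    keys = [v & ((1 << BITS) - 1) for v in values]

    # One forward traversal builds *all* count arrays at once
    # (the "construct all the count arrays at once" idea).
    counts = [[0] * (1 << DIGIT) for _ in range(PASSES)]
    for k in keys:
        for p in range(PASSES):
            counts[p][(k >> (p * DIGIT)) & MASK] += 1

    # Standard LSD passes: prefix-sum each histogram, then scatter stably.
    for p in range(PASSES):
        offsets, total = [0] * (1 << DIGIT), 0
        for b in range(1 << DIGIT):
            offsets[b], total = total, total + counts[p][b]
        out = [0] * len(keys)
        for k in keys:
            b = (k >> (p * DIGIT)) & MASK
            out[offsets[b]] = k
            offsets[b] += 1
        keys = out

    # The extra "sign" step: negative patterns (top bit set) end up after the
    # non-negatives in unsigned order, so move them (stably) to the front.
    keys = [k for k in keys if k >> (BITS - 1)] + \
           [k for k in keys if not (k >> (BITS - 1))]

    # Convert the bit patterns back to signed Python ints.
    return [k - (1 << BITS) if k >> (BITS - 1) else k for k in keys]


print(radix_sort_i64([5, -3, 0, 42, -7, 3]))   # [-7, -3, 0, 3, 5, 42]
```

An alternative to the separate sign step is to flip the top bit of every key before sorting, which makes plain unsigned order match signed order with no extra pass.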
30
-
22
-
15
-
I would argue that having processes that are tightly coupled is a direct violation of the UNIX philosophy. One of the main takeaways of "do ONE thing and do it WELL" is that the program can be used with other programs. That is, a "business" need can be met by simply running a specifically crafted chain of commands (programs) - there's a concrete example of such a chain below. The "one thing" part mentioned before helps so that, no matter the context, the program has no side/unwanted effects and doesn't waste resources on something you don't need. If you need something extra, it can be provided by another program or with an option.
However, if your program is tightly coupled with another, well, then you don't have one program that does one thing and does it well, you have 2 programs masked as one, so essentially one program that does two things.
Not being able to run a program with all its features standalone compromises its interoperability and use for composing a bigger program. Which is exactly what the UNIX philosophy stands for, whatever bigger/complex program you need, you can build it by using small dedicated programs. With systemd's dedicated programs you can reasonably build a single program: systemd itself.
In conclusion, systemd is mostly a monolith. It gets away with it because it does its job pretty well and covers most of the needs that people have. But not the concerns about its future.
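As an illustration of the chain-of-commands point above (not anything systemd-specific): the pipeline cut -d: -f7 /etc/passwd | sort | uniq -c tallies which login shells are in use, purely by composing three single-purpose programs. Here is the same chain driven from Python's subprocess module, assuming a POSIX system with the usual coreutils; the example is just a sketch.

```python
import subprocess

# Equivalent of: cut -d: -f7 /etc/passwd | sort | uniq -c
cut = subprocess.Popen(["cut", "-d:", "-f7", "/etc/passwd"],
                       stdout=subprocess.PIPE)
srt = subprocess.Popen(["sort"], stdin=cut.stdout, stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq", "-c"], stdin=srt.stdout, stdout=subprocess.PIPE)
cut.stdout.close()   # let SIGPIPE reach `cut` if a downstream stage exits early
srt.stdout.close()   # same for `sort`

print(uniq.communicate()[0].decode())
```

Each stage neither knows nor cares what produced its input or what consumes its output, which is exactly the loose coupling being argued for.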
13
-
10
-
7
-
6
-
6
-
4
-
There is a video here on YouTube of a madman who installed the then-latest Gentoo on a 486, in 2018.
And another video from this year, if I remember correctly, installing ... Gentoo (of course it's Gentoo) on a Pentium 133 MHz.
And yeah, they boot really slowly: 5-10 minutes to boot, another 5 minutes just to shut down. Though you can actually connect to the internet and view or download stuff safely. But overall, there's little reason to install a modern kernel. Though, regarding the speed, it doesn't seem like the one on the Pentium went for full optimisation, so it might actually run much better.
I'm with Linus here too. Part of me hates to see support like this being dropped, but in reality it keeps the code more maintainable, without an everlasting list of things to check or keep compatible. And the devices whose support gets dropped truly are obsolete and also don't really benefit from having the latest kernel.
Now please excuse me while I stash a nice whiskey bottle for 2032 when 586 support will be dropped.
4
-
3
-
3
-
3
-
3
-
2
-
2
-
2
-
This certainly looks nice for those looking to jump ship, or for you to install for your (grand)parents, without much worry that they'll struggle :)
I have to ask (every OS video has somebody like this, right?): have you checked out SerenityOS? It's not ready for daily driving, but it's very interesting and, dare I say it, appealing.
Also, I know it's much more "hardcore", which might be out of scope for this channel, but maybe a Gentoo video some day? Maybe just to showcase a minimalistic setup and very customized apps, with a more streamlined installation (as compared to manually compiling everything). Even if few would ever use it, knowing that it exists and knowing what's possible should be good education.
2
-
2
-
@NoBoilerplate But Java has had trillions of dollars of optimisations too! That's why I was shocked. If I'm not mistaken, Java has some of the most advanced (and complex) garbage collection algorithms. And I know the JDK is quite a beast. And I say that as someone who doesn't like Java (and I do like JavaScript, though I kind of hate just about all the frameworks for it).
Of course, there's no ceiling on optimisations, unless you don't have enough data. And JavaScript (unlike TypeScript) does lack strong typing everywhere, for example. That by itself adds some runtime overhead. I guess it's a matter of things like Python having those ML libraries that are basically implemented in C, where calling them from Python gives you basically the same speed (for those specific functions) - there's a small illustration of this below. It's also like the Phalcon framework/module in PHP, which is basically a collection of C functions exposed in PHP; if you only use those, you're close to C speed. But in both cases you're restricted to a set of functions, and the language itself, being dynamic, has an overhead of its own.
I think that the benchmarks in which JS runs well are just that - cases where the engine already has an optimised solution implemented. Though if there are enough of these cases, especially for the domain where JS is used, that's good, and I think they can be used as representative of the language. But in general I'd go by the worst case.
I guess I'll just have to up my game on current language speeds. I did a quick search now and I can't say I'm happy with the first page of Google's results. To be fair, I did find some instances in there where JS is faster than, or on the same level (so to speak) as, Java. But I'm still not convinced. To be frank, I really don't see something like Elasticsearch being implemented and running as well in JavaScript.
While on this topic, do you have any good benchmark sites?
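A tiny illustration of the "calling into C gives you near-C speed" point above - a rough sketch, assuming NumPy is installed, with timings that will vary by machine:

```python
import time
import numpy as np

data = [0.5] * 10_000_000          # ten million Python floats
arr = np.array(data)               # the same values in a NumPy array

t0 = time.perf_counter()
total_py = sum(data)               # pure-Python iteration, one object at a time
t1 = time.perf_counter()
total_np = arr.sum()               # one call that runs a C loop internally
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f} s   NumPy: {t2 - t1:.3f} s")
print(total_py == total_np)        # same result (exact, since 0.5 sums exactly)
```

The second call is typically an order of magnitude or more faster, but only because that specific operation happens to exist as a C routine - which is the restriction mentioned above.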
2
-
1