Comments by "Winnetou17" (@Winnetou17) on "Brodie Robertson"
channel.
-
130
-
108
-
I think that Apple gets less hate because they're a more "opt-in" ecosystem / playground. That is, Windows is the default when you have no choice or don't know what to pick. So you'll use it and, in many cases, find things that irk you and some that you'll absolutely hate. But going to Apple... you usually research it a bit before you choose to buy one. That is, you already have some idea of whether you'd like it or not, and there's a good chance you'll simply not switch to it if there's a possibility of incompatibility, so to speak. Getting back to Windows being the default option - you are rarely forced to use Apple for, say, work.
So, bottom line of the above: when going Apple you usually know what you're getting into, which significantly reduces the number of people frustrated with using it. Some simply choose not to go Apple, having realized beforehand that what they're doing is simply incompatible (like most gaming). And the rest might've done some research and learned how to do the basic things.
Me personally, I do hate Apple more than Microsoft. I do not deny that their engineers and designers usually do a very good job. Most people I know using Apple's products are happy; things work - well, for the things they're using them for. But Apple is so focused on control and on walling the garden as much as possible, so anti-consumer, that I do not care how good their products are.
Microsoft, to be fair, is not that far off. But, I guess, because of their current position, they have a much bigger garden, so closing it is much, much harder. But their push to require an online Microsoft account, and what they're doing with secure login (and I forget what comes after secure login), that's also a no-no. I've used Windows since Windows 95 (used 3.11 a bit too, but only on old computers in some places) up to Windows 10, and I've been a happy Windows 10 user. I know I won't run Windows 11 by personal choice. I might have to, for work, but unless I REALLY have to for something specific, I won't install it on any of my personal systems. Even if their bullshit is bypassable.
79
-
I stand with Drew here. He might've been a bit nagging and snarky, but he was in the right. And the later responses from Go's side were quite lacking - I'd say much more impolite than Drew ever was in those threads.
First, like Brodie said, they initially said they could do a temporary solution and that they were working on a real solution, or at least some improvements. Later, with no further notice, they acted like the temporary solution was the well-established-and-acknowledged one, with no more information on the improvement/solution.
Second, the responses like "because of boring details, we don't want to do this" (paraphrased). But that's exactly the place and time to actually post those admittedly boring details: at least those who are impacted can understand why it takes so long or why it's not practical, and the people in those threads can follow a lot of the subject. Not to mention it would've also shed some light on the prioritisation - if it's something that's hard to do, it's understandable that there are more important things to do first. But no, they treated those impacted like kids who want too much, and whose pains are not important.
Sigh
Edit: I forgot to add. When you have 3 requests from the same IP in the same second... I find it hard to believe that something can't be done about it, with reasonable effort. Example: 5:18
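To put some weight behind that: a per-IP limit over a one-second window really is a small amount of code. Below is a hedged Rust sketch (an illustration only, not Go's or anyone else's actual implementation) of the kind of throttling that would catch three requests from the same IP in the same second.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch: allow at most `limit` requests per source IP
// within a one-second sliding window.
struct RateLimiter {
    limit: usize,
    window: Duration,
    hits: HashMap<String, Vec<Instant>>,
}

impl RateLimiter {
    fn new(limit: usize) -> Self {
        Self { limit, window: Duration::from_secs(1), hits: HashMap::new() }
    }

    /// Returns false when this IP has already used its budget for the window,
    /// in which case the server could answer 429 or serve a cached response.
    fn allow(&mut self, ip: &str) -> bool {
        let now = Instant::now();
        let window = self.window;
        let hits = self.hits.entry(ip.to_string()).or_default();
        hits.retain(|t| now.duration_since(*t) < window); // drop old timestamps
        if hits.len() >= self.limit {
            return false;
        }
        hits.push(now);
        true
    }
}

fn main() {
    let mut limiter = RateLimiter::new(2);
    for i in 1..=3 {
        // The third request inside the same second gets rejected.
        println!("request {i}: allowed = {}", limiter.allow("203.0.113.7"));
    }
}
```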
67
-
40
-
Ok, here's a hot take: I fully disagree with Drew. Well, most of his points are actually ok, and I agree with some (like decoupling GNU and the FSF, and the need for new licences). But I said "fully disagree" because I totally do not agree with the method of achieving said reforms.
There is this argument that the FSF is kind of tone deaf, that it's extreme in its philosophy. I do think that is good, and that it should stay that way (and, off topic, that Richard Stallman should stay in the FSF, including leading it). Why (to answer Brodie's question at the end)? Because it is objectively pure. It is a gold standard. When the FSF endorses something, so far you can be sure that it actually, absolutely is free software - no "practical considerations", no "in a manner of speaking", no "for all intents and purposes" and so on. That is very valuable.
If someone like Drew wants to improve the situation and cannot do so with/within the FSF for reasons like the FSF being very rigid, I don't understand this need to change the FSF, when it has a clearly stated goal and philosophy. He should start another foundation and achieve those things there. A milder FSF, more in tune with the masses, would I'm sure attract a lot of people who share the FSF's sentiment but are not willing to go to the lengths that Richard Stallman goes to (which is why I have huge respect for him). This doesn't have to be at the expense of the current FSF; it should exist alongside it.
Also, I cannot agree with that 5-year-old mentality that if red people are known to be good at something, then to have blue people good at it, we should put blue people in charge. That's downright insulting to anybody with a 3-digit IQ. If the blue people want to weave, then they should start learning. The only thing that should be dealt with is the cases where they are not allowed to learn. Equality of opportunity, not equality of outcome. Leadership should be based on merit. Assuming that blue people need to be put in charge automatically assumes that both red and blue people are tribalist, caveman-level people who cannot be impartial and cannot see value in people of the other color. How can I take this man seriously when he's so gigantically wrong about such a simple issue?
Also, the "we're told that unfree Javascript..." messaging being stupid and cringe, I have to agree with. That should be improved. By the FSF.
24
-
22
-
In regards to "not forking Chromium" vs "use OS-based rendering". My take on this, is that they simply use whatever rendering engine is installed.
The rendering engine (webkit, for example) will be used by the browser like "hey, here's the HTML + CSS, give me pixels to show on the screen". Inherently, it cannot mess with privacy, since it cannot know where that HTML+CSS comes from, it has no access to things like cookies, it doesn't know and cannot connect to anything external and so on. I'm not so sure on the Javascript side, but I'm pretty sure that JS will be run separately, or containerized somehow.
So, the browser, whatever DuckDuckGo are working on, will manage things like connecting to the server, fetching the files (what you can do with cURL), managing cookies and local storage, and so on, and on this level you have privacy concerns.
So that's why I think they're going for this approach. No need to fork and maintain the rendering engine, they'll use whatever is available, since it cannot interfere with the privacy. And using it as an external program is different than forking it, even though it's, I guess, more on the nuance level.
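To make that boundary concrete, here's a hedged Rust sketch (my own illustration, not DuckDuckGo's actual design): the engine only ever receives markup and styles and hands back pixels, while everything privacy-relevant - networking, cookies, storage - stays on the browser's side of the interface.

```rust
// What a system-provided engine is allowed to know.
trait RenderEngine {
    // The engine never learns the URL, the cookies, or anything network-related.
    fn render(&self, html: &str, css: &str) -> Vec<u8>; // stand-in for a pixel buffer
}

// Everything privacy-relevant lives on this side of the boundary.
struct Browser<E: RenderEngine> {
    engine: E,                      // e.g. the system WebKit, used as-is
    cookies: Vec<(String, String)>, // handled here, never handed to `engine`
}

impl<E: RenderEngine> Browser<E> {
    fn load(&mut self, url: &str) -> Vec<u8> {
        // Networking, cookie storage, tracker blocking etc. all happen here.
        let (html, css) = self.fetch(url);
        self.engine.render(&html, &css)
    }

    fn fetch(&mut self, _url: &str) -> (String, String) {
        // Placeholder network layer, just enough to make the sketch run.
        self.cookies.push(("example.com".into(), "session=abc".into()));
        ("<p>hello</p>".to_string(), "p { color: red }".to_string())
    }
}

struct StubEngine;
impl RenderEngine for StubEngine {
    fn render(&self, html: &str, css: &str) -> Vec<u8> {
        format!("{} bytes of HTML, {} bytes of CSS", html.len(), css.len()).into_bytes()
    }
}

fn main() {
    let mut browser = Browser { engine: StubEngine, cookies: Vec::new() };
    let pixels = browser.load("https://example.com");
    println!("rendered {} bytes, {} cookie(s) stored", pixels.len(), browser.cookies.len());
}
```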
20
-
20
-
18
-
18
-
17
-
I think the FSF's attitude is EXACTLY what is needed and what they should do. No cracks in the armor, as you say. It protects us from getting complacent and "slowly boiled". It protects against slippery slopes. It defines the gold standard, and it's very nice to see people catering to that, despite the immense hurdles in doing so.
It really saddens me to see the stance of many people who think the FSF is irrelevant or extremist just because they actually stand by their position and don't compromise on their ethics. They are important for showing what the gold standard is. It's up to you how much of it you want. In practice, for now, going 100% is very limiting. But that's the good thing - we know, we are aware of that! If you want to go full privacy and full freedom, you know what to do, you know how to get there, you know what you have to ditch. And I haven't heard of any FSF-endorsed software actually being non-free in any regard, so they are doing a good job there, as far as I know.
It also REALLY saddens me that some people think endorsing the FSF somehow requires that you yourself, on all your computers, run 100% free software; then they see the impracticality of it (like Richard Stallman going without a cellphone and running 15-year-old laptops) and promptly reject the idea in its entirety. When it should actually be taken as a sign that more work has to be done to make free software a decent alternative. You can run and use whatever you want, just try to help the idea (mind-share, testing, documentation and, of course, programming and more) move into a better place.
It's akin to seeing a poor, weak person, who is poor and weak through no fault of their own, and being disgusted and running away. That's not the right attitude; that person should be helped. Same with free software: it should be helped so it grows into a decent alternative.
17
-
16
-
15
-
15
-
15
-
15
-
15
-
14
-
14
-
13
-
12
-
12
-
@BrodieRobertson I get the "historic"(?) part of "tiling" = tiling window manager, but I don't think it should be a be-all-end-all argument. Like Nicco said, while it's barebones, so to speak, or lacking, it's still tiling. Calling it that is the most descriptive way. I don't think a new term should be invented for this; I think the "in Linux, tiling means X" people should adjust. That's what makes the most sense to me, and what makes sense for the future (where, to a newcomer, all these terms should make sense). Eventually we can maybe call this "manual tiling" versus "automatic tiling" and make sure to simply not use just "tiling".
12
-
11
-
10
-
10
-
10
-
10
-
10
-
9
-
"Free to play games are complete exploitative garbage" I don't know, I wouldn't call NetHack that...
Jokes aside, yeah, all free things need a way to pay, a way to support them. While young, I wasn't able to contribute - same for those in poor countries. Now that I'm middle-aged, have some disposable income and understand that things need funds in order to stay nice, I can donate. Not to all of them, sadly, but still, it can become sustainable in most situations.
8
-
While I get the frustration - and the communication certainly could be better - I also have to point out that it's good that Wayland's developers demand very specific examples and refuse to simply implement something on a simple request. It's like a high barrier to entry (frustratingly so at times) that does, in its way, allow for proper thought on how things should be standardised and implemented. And with the requirements made clear, it also kind of documents why X or Y is the way it is. And why it exists.
So I kind of support their way of asking about the problem until Every Single Detail is laid out. It does help with building the proper future, so to speak. The "trust the users" approach, even when those users are hand-picked to be trustworthy, does not fly with me. They can be very thoughtful and well-motivated, but going into ALL the details upfront allows for improvements. After all, Wayland is not about simply recreating X, but about something better - good, efficient and maintainable. For that you need to know exactly what's needed, exclude what's not needed, and have good documentation.
In the end, they're the ones who give their stamp of approval, so it only makes sense that they ask for all the details upfront. And the devil is in the details; never underestimate the ability of a small detail to ruin everything.
8
-
8
-
7
-
7
-
7
-
Call me pedantic, but I think it would be best to have the current approach - with the small, required proprietary blobs included - as a fork called something else, like Chadboot. And have Libreboot be... you know... libre? Without any asterisks. Fully, 100% free, like it was before November 2022. And have it easy to recognize what each one does, like "oh, Libreboot is just Chadboot without any proprietary code". It's basically like it is now already, but under two different names, for different purposes and with less confusion.
Otherwise, I'm happy both that a) they added the proprietary stuff to make it usable on many more systems, and b) they still give you the option of having 100% free software.
7
-
7
-
7
-
6
-
6
-
6
-
6
-
6
-
6
-
5
-
5
-
@sammiller6631 Wrong. The direction we're talking about is something objective. When they say that something is free software, it means that the code for the software is available, not just binaries. That's not faith, it's not zealotry, it's the objective truth.
Now, you can say that they are inquisition-level zealous, because they don't compromise - say, for practicality. If the only fully free software laptop is one from 17 years ago, then so be it. I like and respect that.
Because (and here's the big thing) I can always choose how much poison I take. That is, whether I go full free software or only partially, and how much. And this differs from person to person; everybody has different tolerances and needs, so what's practically free enough for one person might be too compromised and unsafe for another, and vice versa.
That's why having the FSF tied to objectively full free software is very valuable, even if it's impractical (for now). I also value that it shows us how far we are from that ideal standard. Even when that distance is sooo big and they're actually getting blamed for it.
5
-
5
-
5
-
5
-
I think that Linus is ok with the English way of pronouncing Linus because he's usually very pragmatic. It's close enough to be clear it's his name, and the way people pronounce things certainly can't be changed overnight - people will still say it the English way. It would be a pointless debate.
Out of respect, unless the person says otherwise, I always think that pronouncing a name the way its native language does is the polite way. Of course, in practice I don't always get to do that - sometimes I'm not even aware that it's from a different language - but I do think that's the most polite way, the most respectful. I would go out of my way to learn and use the "original" pronunciation for people like Linus (well, it's easy here anyway, no real effort required) or, to give another easy example, Michael Schumacher. While we're here, it's also a sign of respect (some would even call it normal decency) to use diacritics where required.
5
-
5
-
4
-
I hate it when we have specific words with specific meanings and those get ignored.
Case in point: having EXPERIMENTAL code. What more do you need to be told not to use this in production, and that if you do, you do it at your own risk? Experimental code exists so you know what's coming, so you can work with it and test it - not ship it to production.
To the folks saying "well, they shipped it, so it's their responsibility", I disagree, at least in this particular case. Nginx is open source and you can compile nginx without HTTP/3 (omit --with-http_v3_module), so you can have it without the experimental code at all. You can have it removed (not just disabled) at the binary level. So you can VERY EASILY be shielded from "oh, someone just flipped a flag, by mistake or intent". Also, F5's point was that they know of people/projects/companies that USE it, not that they simply might have it in the binary.
I understand F5's position here, and I can't really blame them that much, but I do think it's just bad practice, it incentivises bad practice in general, and it should be avoided. So I'd be happier if they hadn't gone the CVE route and had used other means of communication instead.
4
-
@SisypheanRoller Damn it, if my net hadn't dropped at exactly the wrong time, I would've posted this hours ago, and the many replies I see now would've been... better.
So, regarding the monolithic part - the number of binaries is indeed not relevant (though it's often easy to tell at a glance). The idea is the coupling. If you have one giant binary, or one main binary and another 10 binaries with a hard dependency on them (or on just one of them), then you have a monolithic program.
In our case (unless it has changed recently - I haven't checked), journald is a prime example. It is a component of systemd that cannot be (safely) removed or swapped out. It is a separate binary from systemd, but because of the hard coupling, it effectively is part of systemd.
To systemd's credit, the number of binaries that have stable APIs and can safely be replaced with 3rd-party binaries has increased over the years. One can hope that eventually that will be the case for all of them, and that everyone will then be able to use as much of systemd as they need and replace anything they don't like.
Getting back to the UNIX philosophy of "do one thing and do it well": unfortunately many, many people don't understand it and spew bullsh!t about it being outdated, or other such nonsense.
The idea is that such programs (tools, or in a broader sense, infrastructure) should do one thing and do it well in order to have effective interoperability - so that the program can be easily and effectively used from scripts or by other programs.
Since you mentioned it, the "one thing" is not the important part. It can be anything, as long as a) it's complete enough that in most cases it can be used on its own, and b) it's small enough that you don't have to disable a lot of it in normal cases, and simply running it doesn't have CPU/memory requirements significantly higher than what's actually needed for the typical use case.
This can be as simple as listing the contents of a directory (ls) or as big as transcoding video and audio streams with many editing and exporting options (ffmpeg). Is ffmpeg massively complex? Yes! Do people complain that it violates the UNIX philosophy? Not to my knowledge. Why? You can use it effectively with the rest of the system, you can script around it. And it works well. OBS using it under the hood is a testament to that too.
Lastly, here's a practical example of why not following the UNIX philosophy is bad, which hopefully also answers Great Cait's question of why the hate:
Search for CVE-2018-16865. It's a vulnerability that was found in journald several years ago and was later fixed. The problem is that it's pretty high severity. And... you cannot simply disable or remove journald (or couldn't at the time). You can use rsyslog alongside journald, but because they made it so coupled, you literally cannot remove it and still have a working system. Imagine the stress levels of system administrators who found out they had a big security risk that they couldn't disable/remove/replace - they just had to wait for an update.
That's the hate. Yeah, it works pretty well. But it's not perfect. And it's being shoved down our throats in a "take it all or leave it" manner that is a slippery slope to potentially big problems down the line, when everyone is using it and suddenly some massive vulnerability hits it, or Red Hat pushes something onto it that everybody hates, or things like that. And people will suddenly realize "oh, sheet, what do we do now, we have no alternative, we cannot change 70 programs overnight". And it's annoying, because we know how to do better. Hopefully it can change to be fully modular and non-monolithic, so something like what I wrote above cannot happen.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
There is a video here on YouTube of a madman who installed the then-latest Gentoo on a 486, in 2018.
And another video from this year, if I remember correctly, installing... Gentoo (of course it's Gentoo) on a 133 MHz Pentium.
And yeah, they boot really slowly: 5-10 minutes to boot, another 5 minutes just to shut down. Though you can actually connect to the internet and browse or download stuff safely. But overall, there's little reason to install a modern kernel on these. Regarding the speed, though, the Pentium one doesn't seem to have been fully optimised, so it might actually run much better.
I'm with Linus too here. Part of me hates to see support like this being dropped, but in reality it keeps the code more maintainable, without an everlasting list of things to check or keep compatible. And the devices whose support gets dropped truly are obsolete and don't really benefit from having the latest kernel anyway.
Now please excuse me while I stash a nice whiskey bottle for 2032 when 586 support will be dropped.
4
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
What does Servo add? Or, to ask it differently, what is Firefox missing? Genuine question - I do use Firefox and I don't feel like it's slower or anything, at least not noticeably. And feature-wise I'd say it's about on par with Chrome and Chromium, except for Google-specific things.
Though, come to think of it, I do remember WebRTC working worse in Firefox, while it was basically without issues in Chrome. On the other hand, Firefox has better developer tools. Well, maybe not better overall, but it has 2 things that I do use that Chrome doesn't have: edit-and-resend for network requests (Chrome has only resend) and "skip this file" when debugging JavaScript.
3
-
Genuine question: is that "they can do that in a few lines in the server" actually a widespread, often-used thing that makes sense to have in the server, FOR THE WHOLE ecosystem? Because, hypothetically, if it was a stupid architecture before - where the server did way too many things and got to be unmaintainably huge and impossible to improve without breaking a lot of other things - then it makes sense for the replacement to have better separation and more thought put into what should go where.
I have no idea about your exact use case, but in general, the fact that it used to work very easily and now it doesn't (from a development-effort point of view) is not enough of an argument in itself, if the new way is more maintainable and allows the ecosystem to evolve and become much better than the old way.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think this is perfect with a TWM. I mean, a normal keyboard has ONLY, what, 105 keys? Getting 17 more from the mouse has to be a godsend. With the 4 modifier keys (Ctrl, Alt, Shift and Super), that's basically 68 more key combinations (quick math below). Sweet! Oh, and you can use more than one modifier key too!
On a more serious note, those keys can actually let you keep one hand on the keyboard and one on the mouse permanently, as long as you don't have much text to type. With the modifier keys, you really could have all the TWM administration keybindings controlled with a modifier key (or none) + a mouse key.
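The quick math, using the comment's own numbers (17 extra mouse buttons, the 4 modifiers):

```latex
% 17 buttons combined with one modifier at a time, then with any subset of modifiers.
\[
  17\ \text{mouse buttons} \times 4\ \text{single modifiers} = 68\ \text{extra chords}
\]
\[
  17 \times 2^{4} = 272\ \text{combinations if any subset of \{Ctrl, Alt, Shift, Super\} (including none) may be held}
\]
```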
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I'm on the side of the people freaking out.
The thing is, Ubuntu/Canonical have done bad things on this "topic" before, so them "teasing" r/linux is in truly poor taste. That is, I could accept this sort of joke from someone with a spotless background/history on the matter. If you don't have a spotless history on the matter, then joking about it is totally inappropriate; you don't know how to read the room, and you deserve all the backlash so you learn to behave. When you've done something stupid, you do not remind people about it!!!
So, even if it were an acceptable joke, there's still the problem that there's no place for a joke there. I'm human and I do have a sense of humor. I can accept a joke here on VERY VERY rare occasions, for EXCEPTIONALLY good jokes. Which is totally not the case here.
The thing is, the people putting this joke in think they're funny, but they don't think of the impact. Several weeks or months down the line, when I upgrade my sister's computer, seeing the joke for the 34th time is not only not funny - it wastes space in my terminal, wastes my eyes' energy to go past it, wastes brain cycles for me to register that it's there and that I have to skip it. It's pollution.
I think the problem is the goldfish-attention-span syndrome that seems to be more and more pervasive in current society. We are not able to focus on one thing anymore. Like getting into the mindset that you have something to do and, for the next 5 minutes, 1 hour, 8 hours or whatever, you think about, interact with and do exclusively that, and nothing else, so you're as efficient and productive as you can be. Sure, some people or areas (especially creative/art) can handle or want all sorts of extras. But that shouldn't become the universal, only way to do things. It should be the individual adding the extras, not the provider shipping them.
It's like how an action movie can't simply be an action movie anymore. No, it has to have a comic-relief character, and the main character must also have a love interest. It's not a bad thing if a movie has all 3, but it should be the exception, not the norm. There are places for jokes and comedy; I'll go there when I want jokes and comedy. Stop polluting all the other areas with unneeded (and rarely good) funny - that's not the reason I'm here.
In conclusion, this particular act is certainly very minor and by itself shouldn't cause much rage. But it shows a fundamental lack of understanding from those at Canonical, and as such, everybody expects them to continue down this stupid path unless someone tells them not to. So that's why the rage is justified and actually needed right now, so they learn that it's not ok and they stop, BEFORE doing something truly stupid and disruptive.
2
-
2
-
It's about things making sense, in general. If you see people doing the wrong thing, you should be bothered, up to a point, especially if it impacts you (more) directly.
In this case, the main point is that installing into a VM and spending mere hours on a distro is not a review. And I'm totally down with that; it should be called out, so people doing these "first impressions" don't label them as reviews. Having proper terms - that is, terms that are not ambiguous and/or that everyone generally agrees upon - makes for better, more efficient communication.
For example, I might've heard that Fedora is a really good Linux distro.
Now, the nuance is that if I'm perfectly happy with what I have right now, I might only want a quick look at it, to know what it's about, to see why people call it great. Unless it blows my mind, I won't switch to it, so I don't need many details - including whether it works just as well on real hardware or how it holds up after a month - since I'm not into distro hopping right now.
However, if I'm unhappy with what I have now and I'm thinking "hmm, this is not good enough, I should try something better, what would that be?" - well, in this case I would like a review. Something that gives me the extra details that make me aware of things I should know in order to make an informed, educated decision. I don't want to watch a first look, install it, and after a month realize that this isn't working, as nice as it looks, and that I need to hop again. Here a review (long-term or "proper" or "full" review, however you want to call it) would probably give me that information in 20-40 minutes, so I can skip that month and go install directly what I actually need.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@BrodieRobertson There's this... let's say feeling, since I'm not so sure exactly how factual it is, but the idea is that Lunduke is apparently about the only one digging into and reporting on all sorts of these issues. These foundations have seemingly become more and more corrupt and woke, trying to censor what they don't like. And he is apparently banned from a lot of them, and even banned from being mentioned.
The upshot is that if you do find his investigations to be good, you could mention that he also covered the topic. In these clown-world times, this is needed. And it would also show that you're not under someone's control. Then again, people and foundations that have a problem with Lunduke might start having a problem with you if you give him even a modicum of publicity.
Speaking of which, if you're feeling bold and crazy, I would really enjoy a clip / take on this whole Lunduke situation. Its history and current status, what you think of the whole situation, and how split the wider Linux and FOSS community is about him. I personally started watching him recently and he seems genuine, but it's still too early to be sure about that. And the things he's reporting on... not gonna lie, they kinda scare me. The Linux Foundation having a total of 2% of its budget reserved for Linux and programming, and 98% for totally unrelated stuff - that can't be good long term. It seems like all of these foundations, being legally based in the USA, have a systemic problem of being infiltrated by people who do not care about the product(s) the foundation was originally built around. If these aren't course-corrected, or others that are free from all this drama don't arise, I truly fear for the future of Linux and FOSS in general.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
1
-
1
-
1
-
I just want to add that the attitude of "Valve doesn't care about snaps, or your package manager. They don't want to support it. Not their job, yadda yadda yadda" is... not that good.
Yeah, they might not like it, but if it isn't fundamentally flawed - as in making it impossible or incredibly difficult - then Valve should consider supporting and working with distributions and package managers, so it can be nicely integrated. That is, the way I see it, the desired outcome for an app that wants to have a large reach, to be used by a large mass of people.
You could say it's the same as making FOSS for a proprietary OS like Windows or iOS. You can hate the inferior OS (in the case of Windows) and the hurdles you have to go through to bring compatibility, but if you do want a high reach, then it is something you have to do. While on this: thank you, GIMP and LibreOffice.
So, getting back to the topic, I think everybody would have something to gain, including fewer headaches and issues on Valve's side, if Valve worked with the distros and package managers to make Steam work directly from the package manager, so you don't need to go and download it from Steam's website. That's what cavemen using Windows Neanderthal Technology (NT for short) do. Ok, snaps might still be a headache, though I guess that would be more because of Canonical than the snap system itself. If that's the only system not supported, it would still be better than now. And I suspect a lot of this work would be front-loaded - that is, you work hard to integrate it once, then it's easy to maintain afterwards.
1
-
1
-
1
-
1
-
1
-
@terrydaktyllus1320 Everybody reading what you write, and you yourself (because you wrote it), would have a much more productive use of their time if you'd stop spewing bullshit about things you have only surface-level knowledge of.
In your fantasy cuckoo land there are these "good programmers" who somehow never make any mistakes, whose software never has any bugs.
In the real world, everybody makes mistakes. I invite you to name one, just one, "good programmer" who never writes software with bugs. If you are that person, let me know what non-trivial software you wrote that has no bugs.
And if you're going to bring up the "I didn't say that good programmers don't make mistakes or don't write bugs" argument, then I'm sorry to inform you that Rust, and more evolved languages in general, were created exactly for that. Programmers, good AND bad, especially on a deadline, need to get all the help they can. That's why IDEs exist. That's why ALL compilers check for errors. A language that does more checks, like Rust, but still gives you the freedom to do everything you want, like C, is very helpful. Unlike your stupid elitist posts that "languages don't matter".
The bug presented in this video is a very classic example of something that would not happen in Rust (a quick illustrative sketch follows below).
With people like you, we wouldn't even have C; we'd still be on assembly. Whenever there's something about programming languages, don't say anything - just leave the room and don't come back until the topic changes. Hopefully to one you actually know something about.
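For readers who want a concrete picture, here's a hedged sketch (my guess at the general class of bug, not necessarily the exact one from the video) of the classic "return a pointer to something that no longer exists" mistake. The C equivalent compiles and dangles at runtime; rustc refuses to compile it.

```rust
// What Rust makes you write instead: hand ownership of the String back,
// so there is nothing left to dangle.
fn make_greeting(name: &str) -> String {
    format!("hello, {name}")
}

// The buggy variant below does NOT compile ("cannot return reference to
// local variable"), which is exactly the point - the mistake never reaches
// runtime:
//
// fn make_greeting_dangling(name: &str) -> &String {
//     let s = format!("hello, {name}");
//     &s // `s` is dropped at the end of the function; the reference would dangle
// }

fn main() {
    let greeting = make_greeting("world");
    println!("{greeting}");
}
```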
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
Hey, nice pointers! I have a question regarding points 1 & 2, as I was already thinking of using ramfs for compiling, so I don't wear out the SSD. And I was wondering if it's possible to do the compiling on one or a series of Raspberry Pis. That is, instead of using my beefy laptop for that, delegate it to one or a few Pis, to 1) still be able to fully use the laptop with no interruptions or slowdowns and 2) use (much) less power.
But I worry that the Pis might be too weak or have too little RAM (max 8 GB per Pi) for that, especially if I set them to compile in RAM instead of on internal storage. From your experience, is this feasible? Or are the Pis too weak / too short on RAM to make it work? I'm ok if one Pi only does one package at a time, even on a single core, and if it takes 20 hours for one of the bigger packages, as overall I think I'll still be able to stay up to date.
1
-
1
-
The claim that these kinds of pushes are necessary is so blatantly false and wrong that it makes me doubt people's ability to reason. I'm not referring only to the OP comment, but to many who defend it too.
First, there is a GIGANTIC difference between
a) forcing users to try something new while giving them the option to use the old thing, which is known to work,
and
b) forcing users to try something new, and if they're missing something... well, tough luck? How is that not OBVIOUSLY irresponsible? What are they supposed to do, stay on the old version? Go to a different distro or a different spin (which might be a bigger change than another distro that still ships KDE)? Well then, don't be surprised if they don't come back.
Second, the reasoning that "if they don't do that, people won't try it or switch to it and it won't evolve" is also blatantly false. Wayland is now progressing very nicely and fast, yet NOBODY forces Wayland as the only option. Proof that removing options and functionality from users is not needed (DUUH). Doing that will only alienate users and feed the Wayland (or whatever is being pushed) haters. It's a lose-lose situation created by infatuated people who care more about being/feeling bleeding edge than about providing for and caring about their users. It adds, I would argue, nothing, while raising all kinds of concern and stress and conflict, like this very thread. Whereas if they waited until Wayland is truly ready and then did the switch, nobody would bat an eye.
You can see they're searching for excuses rather than actually caring from the statement that they'd rather do the switch on a major version change. That does make sense, it's something to be expected. But they didn't think it through far enough to see that removing it now causes 10 times the distress of removing it in, say, KDE Plasma 6.4.
1
-
1
-
@elmariachi5133 "But I don't care what it brings for developers, when I am talking from a user's perspective"
Well, Rust will add absolutely nothing for users, directly. But what you're saying is that you don't want the tree cutters to get chainsaws as an upgrade from simple axes, because the chainsaws won't bring you better wood for your stove. Yes, it's THAT silly.
You could say that lumberjacks with chainsaws will bring more wood, or deliver it faster and probably, on average, cheaper. Well, it's the same with the Linux kernel. Rust instead of C lets developers write code and get it to a secure and stable state faster. But how much faster can't be quantified, so no promises can be made. For some things it might really take the same time. Maybe even longer, who knows. But on average it should be faster, as more complex things will be able to have statically determined memory safety, letting the developer skip many hours of checking and testing for that (well, where possible - some things in the kernel have to be done in unsafe blocks). Or it might allow the developer to release something that is secure, instead of releasing something in the same timeframe that has bugs.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I guess I do have a bit of an uptime fetish. Several things to mention:
- in Windows 10, with quite some hassle and at least the Pro version, you can actually control the updates. As I write this I have 99 days of uptime. I didn't actually want to get this high, since I'm effectively behind on updates - I can't get (most of) the important ones without a restart. But Windows, as I configured it, doesn't pester me either to do the updates or to restart. And usually every 1 to 3 months I do the updates and restart.
- having a high uptime is a sign of a properly configured system. It shows that you don't have weird leaks or simply bad software. Same for having a system that you don't have to reinstall (or even reformat the drive for). I've never understood the people who came to the conclusion that you have to format + reinstall Windows once a year. I've always reinstalled because the old one was too old (like from Windows 98 to XP, from XP to 7, from W7 to W10), not because it wasn't running ok. Windows is bad, but not THAT bad. Anyway, I'm going off topic.
- for me at least, the point of not restarting isn't the time to reboot. That is below a minute. It's also reopening ALL the apps, in the same exact state I left them in. Still something that should take under 5 minutes, but I simply like not having to do it. Some apps don't have a "continue exactly where you left off" feature when you restart them. For this reason, I usually hibernate the laptop instead of shutting it down most of the times I carry it around (which is less than once a week since the pandemic started). I do acknowledge that it's mostly convenience on my part, not actual need.
- having the computer on 24/7, if at low power (and low heat), will not damage the components much, if at all. One power cycle might actually do more damage than 50 hours of uptime (as I said, if the uptime is non-stressful, with no overclocking and no large amounts of heat). As to why you would do that: some have things running, like torrents, Folding@home, or mining. In my case, when I leave it running while I'm sleeping or away, and I'm only keeping it on for torrents, I put it in a custom power mode - low power, but with everything still turned on except the display. This way it consumes quite little, despite still being "on".
1
-
1
-
1
-
1
-
Interesting to see what others said. I have 2 things to add:
1) Regarding a language being high-level: if it has the ability to abstract away implementation (in any way), then it can be considered a high-level language. It's not the only defining factor of a high-level language, but it's a very important one.
Having abstractions allows you to make the code easier to write, by being more human-readable and more compact, which are clearly (some of) the things that high-level languages are wanted for (the reason they were created in the first place). C does have that, in functions. So, for me, it's clearly a high-level language. In the grand scheme of languages, yeah, it probably is the lowest high-level language. But because you can abstract things away, it's still a high-level language. Hell, if you wanted to, you could write a runtime for it and make it behave like an even higher-level language (of course, by limiting yourself to only what was implemented for the runtime).
2) SQL is not a programming language. The high vs low distinction only applies to programming languages. Ok, scripting languages too, though by convention they're always high-level (there's no need to create a low-level scripting language). SQL, HTML, YAML etc. do not qualify.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@BrodieRobertson Of course the companies are interested in not being regulated. But it's not for them to decide that. That's why regulations exist in the first place - otherwise they wouldn't be respected.
And the idea that's proposed here is exactly the difference between "regulated" and "regulated into the ground". At least perception-wise.
I'm sure that any company that does a good job can survive without (very) targeted ads.
Still, the proposed solution (which I think is also the only actual solution proposed) allows them to have decently targeted ads, making their ad spending somewhat effective. Otherwise it will either be something illegal (bypassing privacy rights or regulations, if those appear) or it will be random targeting, making ads much less effective. Which might make them panic.
It's normal to first create the means to behave decently, and then to enforce behaving decently, because now the means exist. Without something like this proposed solution, "behave decently" basically means not having targeted ads at all. Which is basically not an option for these companies. Maybe that wouldn't be a bad thing, but realistically it's not going to happen anytime soon; nobody will come down on them that hard.
1
-
I'm kind of sad that targeting Proton makes the most sense for games right now, and that it's definitely what the majority of developers have to do. Hopefully, with SteamOS gaining popularity in the gaming community, it might make developing native Linux games more worthwhile (both in the market-share department and in the ease/usefulness of development), you know, to be able to unlock all the resources the hardware has.
And if SteamOS does get big, that means hardware manufacturers will finally (have to) think about drivers/support for Linux, which will make switching to Linux easier, with fewer/no concerns about hardware compatibility, which will further increase market share... ok, I'll stop babbling, but I do feel like this will be a big positive feedback loop and I can't wait to see it unfold.
1
-
1
-
@nlight8769 Oh wow, things got very complicated very fast.
The problem is actually much simpler. It's the word "performance". For some people it's not immediately obvious that it's about "something (a task) done in an amount of time" - well, where time is involved. That's the thing: it doesn't explicitly say the metric used. And if the metric is not explicitly stated or obvious from context, people make assumptions, and that's how we got into this topic :D
But performance is very analogous to speed.
In our case, the compile time is similar to a lap time.
And speed is measured in km/h (some use miles per hour, but we've grown out of the bronze age). In our case it would be the not-so-intuitive compiles per hour. One could say that instructions run per second could also be a metric, but it has 2 problems: a) nobody knows how many instructions are run/needed for a particular compile, though I guess it could be found out, and b) not all instructions are equal, and they NEED to be equal in order to give predictable estimations. For speed, all the seconds are equal and all the meters are equal too.
Here's another tip - degradation implies that things got worse and that the thing being degraded is *reduced*. If someone tells you something degraded by 80%, you KNOW that it's now at 20% of what it was (not 180%). And something degraded by 100% would mean it's reduced by 100%, aka there's nothing left.
Lastly, tying into the above - when the performance of anything has degraded "fully", so to speak, we say its performance is 0. Not that it takes infinite time.
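To pin down the throughput reading with a worked example (the numbers are hypothetical, purely for illustration - say a build that used to take 30 minutes now takes 150):

```latex
% "Performance" read as throughput: compiles per hour, like speed is km per hour.
\[
  P = \frac{\text{compiles}}{\text{hour}}, \qquad
  P_{\text{before}} = \frac{60\ \text{min/h}}{30\ \text{min/compile}} = 2\ \text{compiles/h}, \qquad
  P_{\text{after}} = \frac{60}{150} = 0.4\ \text{compiles/h}
\]
\[
  \text{degradation} = 1 - \frac{P_{\text{after}}}{P_{\text{before}}} = 1 - \frac{0.4}{2} = 80\%
  \quad\Longrightarrow\quad P_{\text{after}} = 20\%\ \text{of}\ P_{\text{before}}
\]
```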
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The way I see it, the main advantages of "do one thing and do it well" are easy composition and little/no code duplication. That is, it's not important to follow it to the letter; it's a nice, short way of conveying several goals and benefits that I'll try to list below.
The program or library has to be small enough that it can be used in a chain of commands or inside a bigger app, with a minimal footprint.
If everything follows this philosophy, then everything is also easy to replace, without dependency hell, coupling issues or performance struggles. This "small enough" pushes the developer to stay narrowly focused on the thing the program does, and, when it does need to do more, to first see whether it can use another program or library.
This also allows projects to have few developers, since they can focus only on their specific program and domain. To give an example (I don't know if reality is anywhere close to what I'll present, but it seems like a nice one - sketched after this comment), the Lynx browser. Its devs can simply use curl internally to fetch the resources and only deal with building the DOM and rendering it. Internally, curl might in turn use an SSL library and a TCP library to handle the lower-level networking and focus only on HTTP & related standards. In this example, if HTTP/3 gets released (woohoo) it might get implemented into Lynx with minimal effort, by just updating the curl library (well, usually minimal - there might be breaking changes or new stuff to take care of). Do Lynx developers have to care about HTTP/3? Nope. Do they have to care about the available encryption ciphers and hashes used for HTTPS connections? Nope. Do they have to care about opening sockets and their timeouts and buffer sizes? Nope. They can focus on their specific thing. And that means they can also know very little about the underlying layers, meaning less experienced developers can start to contribute - the project has a lower barrier to entry.
Having a smaller project/library also allows for manageable configuration. I mean, it can be made very configurable (including modular) without getting overwhelming, because it's all in the context of a rather small program/library.
Another interesting example is ffmpeg. As a program and CLI command, it's actually pretty big. But it's still made so it's easy to use with other tools and programs.
Of course, in the real world, the separation cannot be made perfectly. For one developer, the big thing A would be split into b, c and d. Another developer would see A split into b, c, d, e and f, each also split into 2-3 smaller programs, with one of them used in 2 places (say, program t is used by both b and e). As you can see, technically the second split is better by the "do one thing and do it well" measure, but it's also much more complex. This cannot go on ad infinitum. Theoretically, it would be nice if we had only system functions and calls and we only ran compositions of them, but in real life that's never going to happen. Also in real life, in the example above, a third developer might see program A split into B, C, D and E, with B being, say, 80% of what b does in the first developer's vision plus 50% of what c does in the first developer's vision. And so on. And there would be sensible arguments for all approaches.
Lastly, doing one thing and doing it well allows for easier optimisation. Especially for a program or library meant to be used in bigger projects or commands, being well optimised is important. And because the program/library is rather small and focused on one thing - usually a single domain - it's easier for the developer to go deep into optimisation. Of course, in extreme cases, having one big monolithic program can allow for better overall optimisation, but then you'd also have to code everything yourself.
Regarding the Linux kernel, I'd say that it achieves the goals of "do one thing and do it well" perfectly, because it's modular (and each module does one thing) and all of them play nice with each other and with userspace.
The problem I see with systemd is that its binaries, while neatly split, basically talk their own language. They cannot be augmented or replaced by the normal tools we already have (well, sometimes they can be augmented). Somebody would have to create a program from scratch just to replace, say, journald - and this replacement would be good for only that. It's this "we're special and we need special tools" thing that is annoying. Ten years from now, if one of the binaries is found to have a massive flaw, well... good luck replacing it. Oh, and it's critical and you cannot run systemd without it, so you have to replace ALL the system management tools? Oh well, warnings were fired, those who cared listened...
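To make the Lynx/curl composition idea concrete, here's a hedged Rust sketch (purely illustrative; it assumes a `curl` binary on the PATH): a tiny tool whose one job is counting links, with all of HTTP, TLS and redirects delegated to curl rather than reimplemented.

```rust
use std::process::Command;

// "Do one thing": this program only cares about anchors in the markup.
// Fetching is somebody else's one thing.
fn fetch(url: &str) -> std::io::Result<String> {
    let output = Command::new("curl")
        .args(["--silent", "--location", url]) // follow redirects, no progress bar
        .output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    let html = fetch("https://example.com")?;
    // Crude link count, just to show the division of labour.
    let links = html.matches("<a ").count();
    println!("{links} link(s) found");
    Ok(())
}
```

The point of the sketch is the boundary: if HTTP/3 or a new cipher suite shows up, this tool doesn't change at all, only the fetching tool underneath it does.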
1
-
1
-
1
-
1
-
@temp50 Well, not everything will be available from the get-go. If I'm not mistaken, there are still some peripherals that are basically unusable on Linux because they don't have drivers and nobody has the time or resources to reverse engineer one.
So initially it will be just the rather basic stuff - CPU, GPU, mouse, keyboard, wired and wireless network, I guess Bluetooth too. The rest... well, if what I wrote above happens, the rest will come too, later, as the need for it increases. But that's probably more than 5 years into the future, I'm afraid.
1
-
1
-
Brodie, I agree that this being opt-out is bad. However, I disagree with some other things.
Especially the points that the CTO discussed. I fully disagree with your take at 12:05, "This system does not do anything about stopping those economic incentives", and at 14:44, "The way you get this fixed is by talking with the regulators clamping on the advertisers [...] and THEN you can implement the system that gives them very minimal data."
With the above, you are suggesting that, for an unspecified amount of time, businesses spend money on completely random ads instead of targeted ones - basically throw money in the air and take a flamethrower to it - and then in some mythical future they get some data back so they can return to targeted advertising. And that somehow they won't be strongly incentivised to find and use ways around these regulations (which often get more and more terrible). You're also saying that providing the service beforehand, so businesses can switch to it within a specified window of time before the regulators come raining down on them, is somehow bad or useless - that somehow they'll have the exact same incentive to spend money finding or making ways around this. WTactualF. Please try running a business first; maybe it will be more apparent that what the CTO did and said about this approach makes the most sense.
To put it more simply, you're asking people who don't have a garage to park their car in to first sell their cars, be carless for some time, and then buy them back once the authorities have built some parking lots. Nobody will do that. And there will be a massive backlash. Learn how things work in a society. Learn to think about what it's like for the other side.
And they ARE doing something about the dystopian state of the web today. So far I haven't heard any other actual solution - something that is actually feasible, useful, works, and can be implemented.
Another thing, at 10:16: "If you're unable to explain to the user in a short form why a system like this is beneficial to them, why they would want a system like this running on their computer, you shouldn't be doing it". I agree that they should explain it to the user. But I disagree with the "shouldn't be doing it" part. Many things are somewhat complicated, and many people wouldn't understand because they're not that interested. Frankly, whether something counts as "explained" is often very subjective. It can certainly be summarized quite shortly, but some would argue that's not explained enough, and a more proper explanation would then be too long for some people. From "hard to explain" to "don't implement it" is a LOOONG road, and "hard to explain" shouldn't on its own be the reason for not implementing something. People receive drugs and medication, or even things like surgeries, with very little explanation too. You can argue that maybe it shouldn't be like that, but compared to our case, this is orders of magnitude less damaging in any sense of the word, so in the grand scheme of things it can be explained very shortly, and whoever truly wants to understand it can find that in the code or somewhere on the web in a blog post or a video or something.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
The % that is preallocated - well, I don't think that matters in this discussion.
Because if you need 5% of 1000 GB, that's 50 GB. If you have 2 partitions, say, 100 GB for root and 900 GB for the rest, well, then you'll have to sacrifice 5 + 45 GB = the same 50 GB for the filesystem's overhead.
The thing that might matter is that you also need to leave some free space available. If it's a percentage, then it doesn't matter. But if the partitions get small enough, then the actual absolute value might matter. Like, you need, say, 10 GB free on any HDD-based partition for doing defragmentation. If you have one partition, you can easily make sure you always have 10-15 GB free. If you go to 3 partitions... well, now you'll need 30+ GB free.
Also, like other people noted, if using multiple partitions, the owner should know pretty well in advance how much space is needed for each of them, so it doesn't become a headache when one of them gets full and you need more space there.
Of course, it can also be a blessing: maybe something went haywire and some app is logging like crazy. If that folder is on its own separate partition, it won't fill up your root partition, which would bring a lot more problems.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@raughboy188 This "I don't care until it's relevant for me" is a pretty bad take. Thing is, one thing leads to another, and by the time it gets to be relevant to you, it might be too late to correct/repair/deal with it.
To give a very simpified example, somebody warns you that there's termites in the area and you should be wary. But you don't do anything until it's clear that the termites are actually causing problems. So one year later you see termites existing from one of your wall so you decide to act then. You inspect and see that they indeed crunched at the wall and you promply kill of all them. But the wall is already compromised beyond repair, it will crumble any day now and you have to replace it and you might not have the time and/or resources to do it.
It's the same thing here - by the time something directly affects you, the big contributors to GNOME might've been outed already. Other contributors soured and so on. Even if you ditch the Microsoft guy and abolish the CoC comitee, the damage has been done, and it's really difficult to recover, as there's no guarantee that the former contributors will or can return, and finding new ones takes a lot of time. And recovering trust in general (though there can be exceptions, if you have trust-worthy people in lead)
1
-
1
-
1
-
I have to say, every now and then I lose a bit more respect for the USA.
Take this case: given that there have been some absolutely insane lawsuits - people going to court over really stupid things, and people in general having a "court-happy" attitude - seeing now that big corporations get more and more of a pass on absolutely evil, obviously wrong, criminal stuff, while people do nothing about it... that's just sad, man.
A person can sue some company because it said the noodles are done in 3 minutes when they're actually not - it's 3 minutes in the microwave, and you still need about a minute to unwrap them, put them in a bowl and so on. So someone can do that, but when there's an actually big problem, with big consequences - people having their very expensive tools offline for DAYS because they have to wait for the manufacturer to come over with their laptop and fix the problem in 10 minutes, of course with no compensation of any kind for the days the tractor or whatever was offline - and when the manufacturer clearly violated a license... like, what more do you need?
What a clown world... sigh
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I remember, though I might be wrong, that Intel wanted to have a yearly release cycle. For now, Battlemage seems to be arriving exactly 2 years after Alchemist, but like you said, I think they had to wait to sort out the driver issues. The driver is still not perfect, but it's actually usable for a good bunch of people now.
What I fear most with Battlemage is that it's again a bit too late. If its top SKU fights with the RTX 4060 or RTX 4070 or RX 7700 XT, at 250W... and then both NVIDIA and AMD launch a new xx70- or xx60-class GPU 5 months later... then Battlemage would again have to be priced extremely low in order to be competitive... which might very well mean it's sold at cost by Intel. If I'm not mistaken, that was kind of the situation with Alchemist. And if it happens again with Battlemage, well, Intel isn't exactly doing that well financially, so I'm not sure they can sustain it if it doesn't turn some profit.
The less gloomy part is that the same architecture and drivers will be used in Lunar Lake and the next CPU generation (rumors say that Arrow Lake has Alchemist+, not Battlemage). And those might sell quite well.
Right now the MSI Claw is basically the worst handheld, buuut, with some updates and tuning, it can... get there, so to speak. I don't expect it to win against the ROG Ally or the Steam Deck, buut it can get to be kind of on the same level, and with no issues. I'm so curious to see SteamOS (or Holo or whatever it was called) on the MSI Claw - I'm really curious how it would work. Anyway, an MSI Claw 2 might actually be competitive this time. And launch on time. Still speculation, but there is hope.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
LoL at the "it may expose CPU vulnerabilities" ... uhm... it's not a "maybe" it's a definitive "it WILL expose CPU vulnerabilities". I guess the caveat is it depends on your CPU. Skylake (6th gen) are probably the most exposed, and by the 10th gen I think both meltdown and spectre, at least the initial forms, were mitigated in the hardware directly.
I think this is a very nice option. Assuming you have some software you trust, you can simply have another kernel with maximum performance and no network support in which you boot to simply run that software you trust, as fast as possible. Of, course, also assuming you don't need internet in that time. And whenever you need the "normal" computing, you just reboot into the normal kernel. What's neat is that with Gentoo, you might even compile the software you use to not have networking support (very niche or rare, but still a bonus).
Of course, like others said, you might simply have a dedicated computer for some software which is not connected to the internet at all, in that case, yeah, even better.
1
-
1
-
1
-
1
-
1
-
@jasonthirded Sorry, I just remembered your comment that I forgot to answer. Apparently there are many problems, mostly on NVIDIA. Screen capturing of anything not using Vulkan might be problematic. Global hotkeys for apps, I understand, are largely missing. Some people (I think only on NVIDIA) get quite a few stutters and lag. VSync being enabled and non-optional is also a problem for some. Apparently scaling on monitors with different resolutions might also not work correctly. And there's more.
Michael Horn, a channel here on YouTube, just released a video called "Wayland is NOT ready...". His experience was quite bad, below average, but still, if you go through the comments there are many other complaints. Apparently most of those who had no issues have AMD cards like the RX 570 and RX 580. And use KDE.
Also, Linus from LTT recently had a video, I think on ShortCircuit, about a laptop that shipped with Ubuntu and NVIDIA and, strangely, had all sorts of problems, as if the manufacturer never tested it. I think the video title is something like "I bricked it in less than an hour". And I think most of the problems were NVIDIA + Wayland = bad. (Also, while we're here, f**k NVIDIA!)
1
-
1
-
1
-
1
-
1
-
1
-
1