Comments by "Winnetou17" (@Winnetou17) on "Brodie Robertson" channel.

  1. 130
  2. 108
  3. I think that Apple gets less hate because they're a more "opt-in" ecosystem / playground. That is, the default is Windows, when you have no choice or don't know what to pick. So you'll use it and, in many cases, find things that irk you and some that you'll absolutely hate. But going to Apple... you usually research it a bit before you choose to buy one. That is, you already have some idea of whether you'd like it or not, and there's a good chance you'll simply not switch to it if there's a possibility of incompatibility, so to speak. Getting back to Windows being the default option - you are rarely forced to use Apple for, say, work. So, bottom line of the above: when going Apple you usually know what you're getting into, which significantly reduces the number of people frustrated with using it. Some simply choose not to go Apple, having realized beforehand that what they're doing is simply incompatible (like most gaming). And the rest might've done some research and learned how to do the basic things. Me personally, I do hate Apple more than Microsoft. I do not deny that their engineers and designers usually do a very good job. Most people I know using Apple's products are happy; things work - well, the things they're using them for. But Apple is so focused on control and on walling the garden as much as possible, so anti-consumer, that I do not care how good their products are. Microsoft, to be fair, is not that far off. But, I guess, because of their current position, they have a much bigger garden, so closing it is much, much harder. But their push for requiring an online Microsoft account, and what they're doing with secure login, and I forgot the next thing after secure login - that's also a no-no. I've used Windows since Windows 95 (used 3.11 a bit too, but that was on old computers in some places) up to Windows 10, and I've been a happy Windows 10 user. I know I won't run Windows 11 by personal choice.
I might have to, for work, but unless I REALLY have to for something specific, I won't install it on any of my personal systems. Even if their bullshit is bypassable.
    79
  4. 67
  5. 40
  6. Ok, here's a hot take: I fully disagree with Drew. Well, most of his points are actually ok, and I agree with some (like decoupling GNU and FSF and the need for new licences). But I said fully disagree because I totally do not agree with the method of achieving said reforms. There is this idea that the FSF is kind of tone deaf, that it is extreme in its philosophy. I do think that is good. That it should stay that way (off topic: and that Richard Stallman should stay in the FSF, including leading it). Why (to answer Brodie's question at the end)? Because it is objectively pure. It is a golden standard. When the FSF endorses something, so far you can be sure that it actually, absolutely is free software - no "practical considerations", no "in a manner of speaking", no "for all intents and purposes" and so on. That is very valuable. If someone like Drew wants to improve the situation and cannot do so with/within the FSF, for reasons like the FSF being very rigid, I don't understand this need to change the FSF, when it has a clearly stated goal and philosophy. He should start another foundation and achieve those things that way. A milder FSF, more in tune with the masses, would I'm sure attract a lot of people who share the FSF's sentiment but are not willing to go to the lengths that Richard Stallman goes (which is why I have huge respect for him). This doesn't have to be at the expense of the current FSF; it should exist alongside it. Also, I cannot agree with that 5-year-old mentality that if red people are known to be good at something, then to have blue people good at that, we should put blue people in charge. That's downright insulting for anybody with a 3-digit IQ. If the blue people want to weave, then they should start learning. The only cases worth dealing with are those where they are not allowed to learn - that's the only thing that should be addressed. Equality of opportunity, not equality of outcome. Leadership should be on merit.
Assuming that blue people need to be put in charge automatically assumes that both red and blue people are tribalist, caveman-level people who cannot be impartial and cannot see value in people of the other color. How can I take this man seriously when he's so gigantically wrong about such a simple issue? Also, that "we're told that unfree Javascript" bit is stupid and cringe, I have to agree. That should be improved. By the FSF.
    24
  7. 22
  8. 20
  9. 20
  10. 18
  11. 18
  12. 17
  13. I think the FSF attitude is EXACTLY what is needed and what they should do. No cracks in the armor, as you say. It protects us from getting complacent and "slowly boiled". It protects against slippery slopes. It defines the golden standard, and it's very nice to see people catering to that, despite the immense hurdles in doing so. It really saddens me to see the stance of many people who think the FSF is irrelevant or extremist just because they actually stand by their position and don't compromise on their ethics. They are important for showing what the golden standard is. It's up to you how much of it you want. In practice, for now, going 100% is very limiting. But that's the good thing: we know, we are aware of that! If you want to go full privacy and full freedom, you know what to do, you know how to get there, you know what you have to ditch. And I haven't heard of FSF-endorsed software actually being non-free in any regard, so they are doing a good job there, as far as I know. It also REALLY saddens me that some people think that endorsing the FSF somehow requires that you yourself, on all your computers, run 100% free software; they then see the impracticality of it (like Richard Stallman going without a cellphone and running 15-year-old laptops) and promptly reject the idea in its entirety. When it should actually be taken as a sign that more work has to be done to make free software a decent alternative. You can run and use whatever you want; just try to help the idea (mind-share, testing, documentation and, of course, programming and more) move to a better place. It's akin to seeing a person who is poor and weak through no fault of their own, being disgusted by them, and running away. That's not the right attitude; that person should be helped. Same with free software: it should be helped so it grows into a decent alternative.
    17
  14. 16
  15. 15
  16. 15
  17. 15
  18. 15
  19. 15
  20. 14
  21. 14
  22. 13
  23. 12
  24. 12
  25. 12
  26. 11
  27. 10
  28. 10
  29. 10
  30. 10
  31. 10
  32. 9
  33. 8
  34. 8
  35. 8
  36. 7
  37. 7
  38. 7
  39. 7
  40. 7
  41. 7
  42. 6
  43. 6
  44. 6
  45. 6
  46. 6
  47. 6
  48. 5
  49. 5
  50. 5
  51. 5
  52. 5
  53. 5
  54. 5
  55. 5
  56. 4
  57. 4
  58.  @SisypheanRoller  Damn it, if my net hadn't dropped at exactly the wrong time, I would've posted this hours ago, and the many replies that I see now would've been ... better. So, regarding the monolithic part - the number of binaries is indeed not relevant (though often easy to tell at a glance). The idea is the coupling. If you have one giant binary, or one main binary and another 10 binaries but with a hard dependency on them (or on just one of them), then you have a monolithic program. In our case (unless it has changed recently, I haven't checked) journald is a prime example. It is a component of systemd that cannot be (safely) removed or changed. It is a separate binary from systemd, but because of the hard coupling, it effectively is part of systemd. To systemd's credit, the number of its binaries that have stable APIs and that can safely be swapped for 3rd-party binaries has increased over the years. One can hope that eventually that will be the case for all of them, and that everyone will then be able to use as much of systemd as they need and replace anything they don't like. Getting back to the UNIX philosophy of "do one thing and do it well" - unfortunately many, many people don't understand it and spew bullsh!t about it being outdated, or other such nonsense. The idea is that such programs (tools, or in a broader sense, infrastructure) should do one thing and do it well in order to have effective interoperability - in order for that program to be easily and effectively usable from scripts or other programs. Since you mentioned it, the "one thing" is not important. It can be any thing, as long as a) it is complete enough that in most cases it can be used alone, and b) it is small enough that you don't have to disable a lot of it for normal cases, and that simply running it doesn't require significantly more CPU/memory than the typical use case actually needs.
This can be as simple as listing the contents of a directory (ls) or transcoding video and audio streams with many options for editing and exporting (ffmpeg). Is ffmpeg massively complex? Yes! Do people complain that it violates the UNIX philosophy? Not to my knowledge. Why? Because you can use it effectively with the rest of the system, you can script around it. And it works well. OBS using it under the hood is a testament to that too. Lastly, here's a practical example of why not following the UNIX philosophy is bad, which hopefully also responds to Great Cait's question of why the hate: search for CVE-2018-16865. It's a vulnerability that was found in journald several years ago and was later fixed. The problem is that it's pretty high severity. And... you cannot simply disable or remove journald (or couldn't at that time). You can use rsyslog alongside journald, but because they made it so coupled, you literally cannot remove it and still have a working system. Imagine the stress levels of system administrators who found out they had a big security risk that they could not disable/remove/replace - they just had to wait for an update. That's the hate. Yeah, it works pretty well. But it's not perfect. And it's being shoved down our throats in a "take it all or leave it" manner that is a slippery slope for potentially big problems down the line: when everyone is using it and suddenly some massive vulnerability hits it, or Red Hat pushes something onto it that everybody hates, or things like that. And people will suddenly realize "oh, sheet, what do we do now, we have no alternative, we cannot change 70 programs overnight". And it's annoying, because we know how to do better. Hopefully it can change to be fully modular and non-monolithic, so something like what I wrote above cannot happen.
    4
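The composability point in comment 58 can be sketched in miniature. This is a toy illustration in pure Python, not real systemd/journald code; the grep/count functions and the sample log lines are invented for the example. The idea: when each stage does one thing behind a narrow "lines in, lines out" interface, stages compose like a shell pipeline and any one of them can be swapped without touching the rest.

```python
# Toy illustration of "do one thing and do it well": small single-purpose
# stages compose, and any stage can be replaced independently.

def grep(lines, needle):
    """Keep only lines containing needle -- one job, like grep(1)."""
    return [line for line in lines if needle in line]

def count(lines):
    """Count lines -- one job, like wc -l."""
    return len(lines)

log = ["systemd started", "journald started", "sshd started"]

# Compose the small pieces, shell-pipeline style:
# roughly `cat log | grep started | wc -l`
assert count(grep(log, "started")) == 3

# Swapping a stage is trivial because the interface is narrow; a
# hard-coupled monolith would force replacing everything at once.
def grep_exact(lines, word):
    """Alternative filter: match whole words only."""
    return [line for line in lines if word in line.split()]

assert count(grep_exact(log, "journald")) == 1
```

The contrast with the journald situation above is exactly that the second `assert` stays valid after swapping `grep` for `grep_exact`: nothing else in the pipeline has to know.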
  59. 4
  60. 4
  61. 4
  62. 4
  63. 4
  64. 4
  65. 4
  66. 4
  67. 4
  68. 4
  69. 4
  70. 4
  71. 4
  72. 3
  73. 3
  74. 3
  75. 3
  76. 3
  77. 3
  78. 3
  79. 3
  80. 3
  81. 3
  82. 3
  83. 3
  84. 3
  85. 3
  86. 3
  87. 3
  88. 3
  89. 3
  90. 3
  91. 3
  92. 3
  93. 3
  94. 3
  95. 3
  96. 3
  97. 3
  98. 3
  99. 3
  100. 3
  101. 3
  102. 2
  103. 2
  104. 2
  105. 2
  106. 2
  107. 2
  108. 2
  109. 2
  110. 2
  111. 2
  112. 2
  113. 2
  114. 2
  115. 2
  116. 2
  117. 2
  118. 2
  119. 2
  120. 2
  121. 2
  122. 2
  123. 2
  124. 2
  125. 2
  126. 2
  127. 2
  128. 2
  129. 2
  130. 2
  131. I'm on the side of the people freaking out. The thing is, Ubuntu/Canonical have done bad things on this "topic" before, so them "teasing" r/linux is in truly poor taste. That is, I could accept this sort of joke from someone with a spotless background/history on the matter. If you don't have a spotless history on the matter, then joking about it is totally inappropriate, you don't know how to read the room, and you deserve all the backlash so you learn to behave. When you've done something stupid, you do not remind people of it!!! So, even if it were an acceptable joke, there's still the problem that there's no place for a joke there. I'm human and I do have a sense of humor. I can accept a joke here on VERY VERY rare occasions, for EXCEPTIONALLY good jokes. Which is totally not the case here. The thing is, the people putting this joke in think they're funny, but they don't think of the impact. Several weeks or months down the line, when I upgrade my sister's computer, seeing the joke for the 34th time is not only not funny, it wastes space in my terminal, wastes my eyes' energy going past it, wastes brain cycles for me to understand that it's there and that I have to skip it. It's pollution. I think the problem is the goldfish-attention-span syndrome that seems to be more and more pervasive in current society. We are not able to focus on one thing anymore. Like getting into the mindset that you have something to do, and for the next 5 minutes, 1 hour, 8 hours or whatever, you think about, interact with, and do exclusively that, and nothing else, so you're as efficient and productive as you can be. Sure, some people or areas (especially creative/art) can handle or want all sorts of extras. But that shouldn't become the universal only-way to do/have things. It should be the individual adding the extras, not the provider coming with them. It's like how an action movie now can't simply be an action movie.
No, it has to have a comedic-relief character, and the main character must also have a love interest. It's not something bad if a movie has all 3, but it should be the exception, not the norm. There are places for jokes and comedy; I'll go there when I want jokes and comedy. Stop polluting all other areas with unneeded (and rarely good) funny, that's not the reason I'm here. In conclusion, this particular act is certainly of very small degree and by itself shouldn't cause much rage. But it shows a fundamental lack of understanding from those at Canonical, and as such, everybody expects them to continue down this stupid path unless someone tells them not to. So, that's why the rage is justified and actually needed right now, so they learn that it's not ok and they stop, BEFORE doing something truly stupid and disruptive.
    2
  132. 2
  133. It's the idea of having sense, in general. If you see people doing wrong stuff, you should be bothered, to a point, especially if it impacts you (more) directly. In this case, the main point is that installing into a VM and spending mere hours on a distro is not a review. And I'm totally down with that; it should be called out, so people doing these "first impressions" don't label them as reviews. Having proper terms - that is, terms that are not ambiguous and/or that people generally agree upon - makes for better, more efficient communication. For example, I might've heard that Fedora is a really good Linux distro. Now, the nuance is that if I'm perfectly happy with what I have right now, I might only want a quick look at it, to know what it's about, to see why people call it great. Unless it blows my mind, I won't switch to it, so I don't need many details - including whether it works just as well on real hardware or how it is after a month, since I'm not into distro hopping right now. However, if I'm unhappy with what I have now and I'm thinking "hmm, this is not good enough, I should try something better, what would that be?" - well, in this case, I would like a review. Something that will give me extra details that make me aware of things I should know in order to make an informed, educated decision. I don't want to see a first look, install it, and after 1 month realize that this isn't working, as nice as it looks, and that I need to hop again. Here a review (long-term or "proper" or "full" review, whatever you want to call it) is something that would probably give me the information in 20-40 minutes, so I can skip that 1 month and go and install directly what I actually need.
    2
  134. 2
  135. 2
  136. 2
  137. 2
  138. 2
  139. 2
  140. 2
  141. 2
  142. 2
  143. 2
  144. 2
  145.  @BrodieRobertson  There's this ... let's say feeling, since I'm not so sure exactly how factual it is, but the idea is that Lunduke is apparently about the only one digging into and reporting on all sorts of these issues. These foundations have seemingly gotten more and more corrupt and woke, trying to censor what they don't like. And he also apparently is banned from a lot of them, and even banned from being mentioned. The upside is that if you do find that his investigations are good, you could mention that he also covered the topic. In these clown-world times, this is needed. And it would also show that you're not under some kind of control. Then again, people and foundations having a problem with Lunduke might start having a problem with you if you give him even a modicum of publicity. Speaking of which, if you feel bold and crazy, I would really enjoy a clip / take on this whole Lunduke situation: its history and current status, how you see the whole thing, and how split the wider Linux and FOSS community is about him. I personally started watching him recently and he seems genuine, but it's still early to be sure about that. And the things he's reporting on... not gonna lie, they kinda scare me. The Linux Foundation having a total of 2% of its budget reserved for Linux and programming, and 98% for totally unrelated stuff - that can't be good long term. It seems like all of these foundations, being legally based in the USA, have a systemic problem of being infiltrated by people who do not care about the product(s) the foundation was originally built around. If these aren't course-corrected, or others don't arise that are free from all this drama, I truly fear for the future of Linux and FOSS in general.
    2
  146. 2
  147. 2
  148. 2
  149. 2
  150. 2
  151. 2
  152. 2
  153. 2
  154. 2
  155. 2
  156. 2
  157. 2
  158. 2
  159. 2
  160. 2
  161. 2
  162. 2
  163. 2
  164. 2
  165. 2
  166. 2
  167. 2
  168. 2
  169. 1
  170. 1
  171. 1
  172. 1
  173. 1
  174. I just want to add that the attitude of "Valve doesn't care about snaps, or your package manager. They don't want to support it. Not their job, yadda yadda yadda" is ... not that good. Yeah, they might not like it, but unless it is fundamentally flawed, as in making support impossible or incredibly difficult, Valve should consider supporting and working with distributions and package managers so it can be nicely integrated. That is, the way I see it, the desired outcome for an app that wants large reach, to be used by a large mass of people. You could say it's the same as making FOSS for a proprietary OS like Windows or iOS. You can hate the inferior OS (in the case of Windows) and the hurdles you have to go through to achieve compatibility, but if you do want high reach, then it is something you have to do. While on this topic: thank you, GIMP and LibreOffice. So, getting back to the point, I think everybody would have something to gain, including fewer headaches and issues on Valve's side, if Valve worked with the distros and package managers to make Steam work directly from the package manager, so you don't need to go and download it from Steam's website. That's what cavemen using Windows Neanderthal Technology (NT for short) do. Ok, snaps might still be a headache, though I guess it would be more because of Canonical than the snap system itself. If that's the only system not supported, it would still be better than now. And I suspect that a lot of this work would be front-loaded - that is, you work hard to integrate it once, then it's easy to maintain afterwards.
    1
  175. 1
  176. 1
  177. 1
  178. 1
  179.  @terrydaktyllus1320  Everybody reading what you write - and you too (because you wrote it) - would make much more productive use of their time if you'd stop spewing bullshit about things you have only surface knowledge of. In your fantasy cuckoo land there are these "good programmers" who somehow never make any mistakes and whose software doesn't ever have any bugs. In the real world, everybody makes mistakes. I invite you to name one, just one "good programmer" who doesn't ever write software with bugs. If it's you who's that person, let me know about the non-trivial software you wrote that has no bugs. And if you're going to bring up the "I didn't say that good programmers don't make mistakes or don't write bugs" argument, then I'm sorry to inform you that Rust, or more evolved languages in general, were created exactly for that. Programmers, good AND bad, especially on a deadline, need all the help they can get. That's why IDEs exist. That's why ALL compilers check for errors. A language that does more checks, like Rust, but still gives you the freedom to do everything you want, like C, is very helpful. Unlike your stupid elitist posts claiming that "languages don't matter". The bug presented in this video is a very classic example of something that would not happen in Rust. With people like you, we wouldn't even have had C by now, just assembler. Whenever there's something about programming languages, don't say anything, just get out of the room and don't come back until the topic changes. Hopefully to one that you actually know something about.
    1
  180. 1
  181. 1
  182. 1
  183. 1
  184. 1
  185. 1
  186. 1
  187. 1
  188. 1
  189. 1
  190. 1
  191. 1
  192. 1
  193. 1
  194. 1
  195. 1
  196. 1
  197. 1
  198. 1
  199. 1
  200. 1
  201. That's so blatantly false and wrong (that these kinds of pushes are necessary) that I'm doubting the ability to reason. I'm not referring only to the OP comment, but to many who defend it too. First, there is a GIGANTIC difference between a) forcing users to try something new while giving them the option to use the old thing, which is known to work, and b) forcing users to try something new and, if they're missing something... well, tough luck? How is that not OBVIOUSLY irresponsible? What are they supposed to do, stay on the old version? Go to a different distro, or a different spin (which might differ more than another distro with KDE would)? Well then, don't be surprised if they don't come back. Second, the reasoning that "if they don't do this, people will not try it or switch to it, and it will not evolve" is also blatantly false. Wayland is now progressing very nicely and fast, yet NOBODY forces Wayland as the only option. Proof that removing options and functionality from users is not needed (DUUH). Doing that will only alienate the users and feed the Wayland (or whatever is being pushed) haters. It's a lose-lose situation created by infatuated people who care more about being/feeling bleeding edge than about providing for and caring about their users. It adds, I would argue, nothing, while raising all kinds of concern and stress and conflict, like this very thread. Whereas if they waited until Wayland was truly ready and then did the switch, nobody would bat an eye. You can see from their statement that they'd rather do the switch on a major version change that they're searching for excuses rather than actually caring. Because that makes sense, it's something to be expected. But they didn't think that far ahead: removing it now causes 10 times the distress of removing it in, say, KDE Plasma 6.4.
    1
  202. 1
  203. 1
  204. 1
  205. 1
  206. 1
  207. 1
  208. 1
  209. 1
  210. 1
  211. 1
  212. 1
  213. 1
  214. 1
  215. 1
  216. I guess I do have a bit of an uptime fetish. Several things to mention: - in Windows 10, with quite some hassle and at least the Pro version, you can actually control the updates. As I write this I have 99 days of uptime. I didn't actually want to get this high, since I'm effectively behind on updates - I cannot have (most of) the important ones without a restart. But Windows, as I configured it, pesters me neither to do the updates nor to restart. And usually every 1 to 3 months I do the updates and restart. - having a high uptime is a sign of a properly configured system. It shows that you don't have weird leaks or simply bad software. Same with having a system that you don't have to reinstall (let alone reformat the drive for). I've never understood the people who came to the conclusion that you have to format + reinstall Windows once per year. I've always reinstalled because the old version was too old (like from Windows 98 to XP, from XP to 7, from W7 to W10), not because it wasn't running ok. Windows is bad, but not THAT bad. Anyway, I'm going off topic. - for me at least, the point of not restarting isn't the reboot time. That is below a minute. It's also reopening ALL the apps, in the exact same state that I left them in. Still something that should take below 5 minutes, but I simply like not having to. Some apps don't have a "continue exactly where you left off" feature when you restart them. For this reason, I usually hibernate the laptop instead of shutting it down most of the times I carry it around (which is less than once a week since the pandemic started). I do acknowledge that it's mostly convenience on my part, not actual need. - having the computer on 24/7, if at low power (and low heat), will not damage the components much, if at all. One power cycle might actually do more damage than 50 hours of uptime (as I said, if the uptime is in a non-stressful manner, with no overclocking and no large amounts of heat).
As to why you would do that: some have things open, like torrents, Folding@home, or mining. In my case, when I leave it running while I'm sleeping or away, and I'm only keeping it on for torrents, I put it in a custom power mode, which is low power but with everything still turned on except the display. This way it consumes quite little, despite still being "on".
    1
  217. 1
  218. 1
  219. 1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230.  @nlight8769  Oh, wow, things got very complicated too fast. The problem is actually much simpler. It's the word "performance". For some people it's not immediately obvious that it's about "something (a task) done in an amount of time" - well, where time is involved. That's the thing, it doesn't explicitly say the metric used. And if the metric is not explicitly stated or obvious from the context, people make assumptions, and that's how we got into this topic :D But performance is very analogous to speed. In our case the compile time is similar to a lap time. And speed is measured in km/h (some use miles per hour, but we've grown out of the bronze age). In our case it would be the not-so-intuitive compiles per hour. One could say that instructions run per second could also be a metric, but it has 2 problems: a) nobody knows how many instructions are run/needed for a particular compile, though I guess it can be found out, and b) not all instructions are equal, and they NEED to be equal in order to give predictable estimations. For speed, all seconds are equal and all meters are also equal. Here's another tip - degradation implies that things got worse and that the thing degraded is *reduced*. If someone tells you something degraded by 80%, you KNOW that it's now at 20% of what it was (and not at 180%). And something degraded by 100% would mean it's reduced by 100%, aka there's nothing left. Lastly, correlating with the above: when the performance of anything has degraded "fully", so to speak, we say it's 0 performance. Not that it takes infinite time.
    1
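The rate-vs-time distinction and the degradation arithmetic in comment 230 can be turned into a tiny worked example. The compile times here are made-up numbers, purely for illustration:

```python
# "Performance" treated as a rate (compiles per hour), analogous to speed,
# with compile time playing the role of a lap time.

def compiles_per_hour(compile_minutes):
    """Convert a compile time in minutes into a rate."""
    return 60.0 / compile_minutes

baseline = compiles_per_hour(6)    # 10 compiles/hour
slower = compiles_per_hour(30)     # 2 compiles/hour

# "Degraded by 80%" means the rate is reduced BY 80%,
# i.e. it is now at 20% of the baseline (not at 180%).
degradation = 1 - slower / baseline
assert abs(degradation - 0.8) < 1e-9
assert abs(slower / baseline - 0.2) < 1e-9

# "Degraded by 100%" -> the rate is 0: the task never completes
# (time goes to infinity), matching the comment's last point.
```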
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. The way I see it, the main advantages of this "do one thing and do it well" are easy composition and less/no code duplication. That is, it's not important to follow it to the letter; it's a nice, short way of conveying the goal of achieving several benefits that I'll try to list below: The program or library has to be small enough that it can be used in a chain of commands or inside a bigger app, with a minimal footprint. If everything follows this philosophy, then things are also easy to replace, without dependency hell, coupling issues and performance struggles. This "small enough" pushes the developer to stay as narrowly focused as possible on the thing the program does, and when it does need to do more, to see whether it can use another program or library for that. This also allows projects to have few developers, since they can focus only on their specific program and domain. To give an example (I don't know if reality is anywhere close to what I'll present, but it seems like a nice example): the Lynx browser. Its devs can simply use curl internally to fetch resources and deal only with building the DOM and rendering it. Internally, curl might in turn use an SSL library and a TCP library to handle the lower-level networking, and focus only on HTTP and related standards. In this example, if HTTP 3 gets released (woohoo), it might get implemented in Lynx with minimal effort, by just updating the curl library (well, usually minimal - there might be breaking changes or new stuff to take care of). Do Lynx developers have to care about HTTP 3? Nope. Do they have to care about the available encryption hashes used for HTTPS connections? Nope. Do they have to care about opening sockets and their timeouts and buffer sizes? Nope. They can focus on their specific thing. And that means they can also know very little about the underlying layers, meaning less experienced developers can start to contribute - the project has a lower barrier to entry.
Having a smaller project/library also allows having manageable configurations. I mean, it can be made very configurable (including modular) without getting overwhelming, because it's in the context of a rather small program/library. Another interesting example is ffmpeg. As a program and CLI command, it's actually pretty big. But it's still made so that it's easy to use with other tools and programs. Of course, in the real world, the separation cannot be made perfectly. For one developer, the big thing A would be split into b, c and d. Another developer would see A split into b, c, d, e and f, each also split into 2-3 smaller programs, with one of them used in 2 places (say, program t is used by both b and e). As you can see, technically the second split is better from the "do one thing and do it well" standpoint, but it's also much more complex. This cannot go ad infinitum. Theoretically, it would be nice if we had only system functions and calls and only ran compositions of them. But in real life that's never going to happen. Also in real life, in the example above, a third developer might see program A split into B, C, D and E, with B being, say, 80% of what b does in the first developer's vision + 50% of what c does in the first developer's vision. And so on. And there would be sensible arguments for all the approaches. Lastly, doing one thing and doing it well allows for easier optimisation. Especially in the context of a program or library meant to be used in bigger projects or commands, having it well optimised is important. And because the program/library is rather small and focused on one thing - that is, it usually sits within a single domain - it's easier for the developer to go deep into optimisation. Of course, in extreme cases, having one big monolithic program can allow for better overall optimisation, but then you'd also have to code everything yourself.
Regarding the Linux kernel, I'd say that it achieves the goals of "do one thing and do it well" perfectly, because it's modular (and each module does one thing) and all the modules play nice with each other and with userspace. The problem I see with systemd is that its binaries, while neatly split, basically talk their own language. They cannot be augmented or replaced by the normal tools we already have (well, sometimes they can be augmented). Somebody would have to create a program from scratch just to replace, say, journald. And this replacement program would be just for that. It's this "we're special and we need special tools" thing that is annoying. Ten years from now, if one of the binaries is found to have a massive flaw, well... good luck replacing it. Oh, and it's critical and you cannot run systemd without it, so you have to replace ALL the system management tools? Oh well, warnings were given; those who cared listened...
Brodie, I agree that this being opt-out is bad. However, I disagree with some other things, especially the points the CTO discussed. I fully disagree with your take at 12:05, "This system does not do anything about stopping those economic incentives", and at 14:44, "The way you get this fixed is by talking with the regulators clamping on the advertisers [...] and THEN you can implement the system that gives them very minimal data."

With the above, what you are suggesting is that, for an unspecified amount of time, businesses should spend money on completely random ads instead of targeted ones, basically throwing money in the air and taking a flamethrower to it, and then in some mythical future they can get some data back and return to targeted advertising. And that somehow they won't be strongly incentivised to find and use ways around these regulations (which often get more and more terrible). You're also saying that providing the service beforehand, so businesses can switch to it within a specified window of time before the regulators come raining down on them, is somehow bad or useless; that somehow they'll have the exact same incentive to spend money finding or making ways around it. WTactualF. Please try having a business first; maybe it will be more apparent that what the CTO did and said with this approach makes the most sense.

To put it more simply, you're asking people who don't have a garage to park their car in to first sell their cars, be carless for some time, and then buy them back once the authorities have built some parking lots. Nobody will do that, and there will be massive backlash. Learn how things work in a society. Learn to think about what it's like for the other side. And they ARE doing something about the dystopian state of the web today. So far I haven't heard any other actual solution, something that is feasible, that works and that can actually be implemented.
Another thing, at 10:16: "If you're unable to explain to the user in a short form why a system like this is beneficial to them, why they would want a system like this running on their computer, you shouldn't be doing it". I agree that they should explain it to the user, but I disagree with the "shouldn't be doing it" part. Many things are somewhat complicated, and many people won't understand them because they're not that interested. Frankly, whether something counts as "explained" is often very subjective. It can certainly be summarized quite shortly, but some would argue that's not explained enough, and a more proper explanation would then be too long for some people. From "hard to explain" to "don't implement it" is a LOOONG road, and "hard to explain" shouldn't by itself be the reason not to implement something. People receive drugs and medication, or even things like surgeries, with very little explanation too. You can argue that maybe it shouldn't be like that, but compared to our case, this is orders of magnitude less damaging in any sense of the word. So in the grand scheme of things it can be explained very shortly, and whoever truly wants to understand it can find that in the code, or somewhere on the web in a blog post or a video or something.
I remember, though I might be wrong, that Intel wanted a yearly release cycle. For now, Battlemage seems to be arriving exactly 2 years after Alchemist, but like you said, I think they had to wait to sort out the driver issues. The driver is still not perfect, but it's actually usable for a good chunk of people now. What I fear most with Battlemage is that it's again a bit too late. If its top SKU fights the RTX 4060 or RTX 4070 or RX 7700 XT at 250W, and then both NVidia and AMD launch a new xx70 or xx60 class GPU 5 months later, then Battlemage would again have to be priced extremely low to be competitive, which might very well mean Intel selling it at a loss. If I'm not mistaken, that was kind of the situation with Alchemist. And if it happens again with Battlemage, well, Intel isn't exactly doing that well financially, so I'm not sure they can support it if it doesn't turn some profit. The less gloomy part is that the same architecture and drivers will be used in Lunar Lake and the next CPU generation (rumors say Arrow Lake has Alchemist+, not Battlemage), and those might sell quite well. Right now the MSI Claw is basically the worst handheld, buuut, with some updates and tuning, it can get there, so to speak. I don't expect it to beat the ROG Ally or the Steam Deck, but it could end up roughly on the same level, and with no issues. I'm really curious to see SteamOS (or Holo or whatever it was called) running on the MSI Claw and how it would work. Anyway, an MSI Claw 2 might actually be competitive this time, and launch in time. Still speculation, but there is hope.