YouTube comments of Winnetou17 (@Winnetou17).

  1. 559
  2. 221
  3. On the "Google doesn't support JPEG XL because of CPU costs" point... that's a cop-out. They've been asked multiple times, including by companies like Adobe, to implement it, and they just straight up refused, while also acting like nobody cares about JPEG XL. You can say whatever you want, but if you read the Chromium thread about it, it's clear as day that Google DECIDED against both JPEG XL and the community with no technical reasoning. Of course, they didn't say out loud why. Also, the CPU cost of JPEG XL cannot be the only reason for rejecting it. There are loads of other use cases. Not to mention the forward-looking part: JPEG XL is alone in the amount of features it provides/supports beyond just some 300x300 images on a random website. You can have wide color gamut and HDR in it. People taking high-detail, high-resolution photos can store the originals directly as JPEG XL. But Google directly jumped in to say "nope, you cannot see that directly in the browser. Think of the children in Africa!". I'll stop, my blood is starting to boil. I see that close to the end of the video these extra features were mentioned, so at least I'm at peace that Theo saw that.
- I'm totally on board that WebP is good and it should still be supported as the bridge to JPEG XL.
- AVIF has nice features, but it's a dead-end technology; we shouldn't bother putting more effort into it.
- JPEG XL is truly the next generation of image formats and what we will use for decades. Hence why it's so infuriating that Google is actively against it and massively slowing its adoption.
No, I'm not buying the CPU cost argument. That's not the browser support problem. In a world where a website has a 6.6 MB JPEG with a .png extension, there's PLENTY of room for JPEG XL. Lastly, I really want to see benchmarks of just how slow JPEG XL supposedly is. I found a benchmark, apparently from 2020, which unfortunately doesn't have a comparison to WebP. And just citing X or Y MB/s is meaningless when you don't have WebP numbers on the same exact hardware, as the numbers can differ massively from one CPU to another. Not to mention that 4 years have passed, things might've changed.
EDIT: Initially I mentioned JPEG XL being lossless, but Theo corrected that very quickly. I also mentioned PNG: PNG not being compressed ? Duuuude... It's not good compression for photographs (hence why people still use JPEG for that: it looks basically the same to a human if you don't need to zoom in, and it's a smaller file as a JPEG). But PNG itself totally has compression. And multiple ways in which you can reduce the file size. There's a reason TinyPNG exists (or existed, didn't check recently)
    208
  4. 179
  5. 159
  6. 154
  7. 141
  8. 136
  9. 130
  10. 128
  11. 108
  12. 106
  13. 105
  14. 104
  15. 91
  16. I think that Apple gets less hate because they're a more "opt-in" ecosystem / playground. That is, the default is Windows, when you have no choice or don't know what to pick. So you'll use it and, in many cases, find things that irk you and some that you'll absolutely hate. But going to Apple... you usually research it a bit before you choose to buy one. That is, you already have some idea of whether you'd like it or not, and there's a good chance you simply won't switch to it if there's a possibility of incompatibility, so to speak. Getting back to Windows being the default option - you are rarely forced to use Apple for, say, work. So, bottom line of the above: when going Apple you usually know what you're getting into, which significantly reduces the number of people frustrated with using it. Some simply choose not to go Apple, having realized beforehand that what they're doing is simply incompatible (like most gaming). And the rest might've done some research and learned how to do the basic things. Me personally, I hate Apple more than Microsoft. I do not deny that their engineers and designers usually do a very good job. Most people I know using Apple's products are happy, things work. Well, for the things they're using them for. But Apple is so focused on control and on walling the garden as much as possible, so anti-consumer, that I do not care how good their products are. Microsoft, to be fair, is not that far off. But, I guess, because of their current position, they have a much bigger garden, so closing it is much, much harder. But their push for requiring an online Microsoft account, and what they're doing with secure login (and I forgot the thing that comes after secure login), that's also a no-no. I've used Windows since Windows 95 (used 3.11 a bit too, but that was on old computers in some places) up to Windows 10, and I've been a happy Windows 10 user. I know I won't daily drive Windows 11 by personal choice. I might have to, for work, but unless I REALLY have to for something specific, I won't install it on any of my personal systems. Even if their bullshit is bypassable.
    79
  17. 76
  18. 70
  19. 67
  20. 64
  21. 64
  22. 62
  23. 62
  24. 55
  25. 54
  26. 54
  27. 51
  28. 51
  29. 50
  30. 45
  31. 45
  32. 42
  33. 40
  34. 38
  35. 38
  36. I have some questions for AMD, though surely we'll never get an answer, as their recent silence is already one answer for many of them. Anyway, here it goes:
- Why was B550 so late ?
- Why was this support/compatibility announced so late ? Wasn't it known when Zen 2 launched ? If not, when was it known ? Even so, wasn't the lack of a guarantee known in advance ? Couldn't AMD give some warnings going forward ?
- When making the decision to absolutely not support any 3xx or 4xx chipsets for Zen 3 CPUs, were any board partners consulted ?
- Wasn't AMD aware that many customers were buying B450 specifically to upgrade to a Zen 3 CPU ? Why wasn't there any communication ?
- Why is AMD still so silent about the matter ? How could a customer not think that AMD simply pulled an Intel out of greed and/or lack of care ? That is, simply abandoned a part of their customers and moved forward, because it's easier. How can an AMD fan give the benefit of the doubt now ?
- Seeing customer and media perception (especially seeing MSI's promises) and not having any comment on it, any attempt to address the issue as soon as possible (so there's as little damage as possible), isn't AMD concerned that the whole community will be less trusting of ANY marketing and promises going forward ? Isn't that a bigger price to pay than being honest and trying to work with the partners and the community ? Does anyone at AMD think it's ok to say now that "well, we only said Socket AM4 support, nothing about chipsets" ? How could the community at large realize the difficulty of providing this kind of support when no attempts at it were made and when AMD is being so shady ?
Sigh
    37
  37. 34
  38. 34
  39. 33
  40. 32
  41. 31
  42. 30
  43. 28
  44. 28
  45. 27
  46. 27
  47. 27
  48.  @Marlow925  What you say about the car having the option to be the fastest when there are multiple stops is true. However, there are two things which actually make this impractical and which subway trains (a more-than-a-century-old idea) solve effortlessly.
1) Throughput: 4400 passengers per hour is super low. Make another stop at a stadium and you'll have 50 000+ angry people who can't get to/from the stadium. Ok, not everybody will have to use this, but however you want to expand it, the very, very low capacity that single cars have will immediately become very painful (a rough sketch of the math is below).
2) Cost: Having a lot of cars for 1-4 people is not efficient. Not only will the cars themselves be quite expensive, but the operating cost will be quite high too. A car carrying 1-4 people and weighing 1 tonne is not efficient. Also, they'll have to recharge the battery (which will wear out) several times per day, which complicates things by quite a bit. A train can have powered lines, so no need for a battery which will inevitably become waste, and the weight-to-people ratio is much better, effectively more efficient.
However you take it, if you have to scale it, the cars won't work, and the train will be the best option. Unless you want to keep it exclusive and expensive. In the end, I really don't understand what people are so excited about. I mean, yeah, nice, a new route was made, some stuff is easier to reach. But the technology is absolutely nothing new. Some say that the tunnelling was done much cheaper, but I really don't see that either. Maybe it's on the cheap side, but surely not 10 times cheaper or anything close.
    26
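A rough back-of-the-envelope sketch of the throughput point above. The 4400 passengers/hour figure is the one quoted in the comment; the per-train capacity and headway are my own illustrative assumptions, not actual Loop or metro specifications:

```python
# Rough throughput comparison: small cars vs. a subway line.
# 4400 passengers/hour comes from the comment above; the train
# numbers below are illustrative assumptions only.

loop_capacity_per_hour = 4400          # claimed passengers per hour

stadium_crowd = 50_000                 # people leaving a stadium at once
hours_to_clear_with_loop = stadium_crowd / loop_capacity_per_hour

# Hypothetical subway service: ~1000 passengers per train, one train every 3 minutes.
train_capacity = 1000                  # passengers per train (assumed)
trains_per_hour = 60 / 3               # one train every 3 minutes (assumed)
subway_capacity_per_hour = train_capacity * trains_per_hour
hours_to_clear_with_subway = stadium_crowd / subway_capacity_per_hour

print(f"Loop:   {hours_to_clear_with_loop:.1f} hours to move {stadium_crowd} people")
print(f"Subway: {hours_to_clear_with_subway:.1f} hours to move {stadium_crowd} people")
```

With these assumed numbers the cars take roughly 11 hours to move the crowd versus about 2.5 hours for the trains; the exact figures depend entirely on the assumptions, but the order-of-magnitude gap is the point being made.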
  49. 26
  50. Ok, here's a hot take: I fully disagree with Drew. Well, most of his points are actually ok, and I agree with some (like decoupling GNU and the FSF and the need for new licences). But I said fully disagree because I totally do not agree with the method of achieving said reforms. There is this view that the FSF is kind of tone deaf, that it is extreme in its philosophy. I do think that is good, and that it should stay that way (off topic: and that Richard Stallman should stay in the FSF, including leading it). Why (to answer Brodie's question at the end) ? Because it is objectively pure. It is a golden standard. When the FSF endorses something, so far you can be sure that it actually, absolutely is free software, no "practical considerations", no "in a manner of speaking", no "for all intents and purposes" and so on. That is very valuable. If someone like Drew wants to improve the situation and cannot do so with/within the FSF for reasons like the FSF being very rigid, I don't understand this need to change the FSF, when it has a clearly stated goal and philosophy. He should start another foundation and achieve those things that way. A milder FSF, more in tune with the masses, would, I'm sure, attract a lot of people who share the FSF's sentiment but are not willing to go to the lengths that Richard Stallman goes to (and why I have huge respect for him). This doesn't have to be at the expense of the current FSF, it should be alongside it. Also, I cannot agree with that 5-year-old mentality that if red people are known to be good at something, then to have blue people good at it, we should put blue people in charge. That's downright insulting for anybody with a 3-digit IQ. If the blue people want to weave, then they should start learning. And we should only deal with the cases where they are not allowed to learn; that's the only thing that should be done. Equality of chances, not equality of outcome. Leadership should be on merit. Assuming that blue people need to be put in charge automatically assumes that both red and blue people are tribalist, cavemen-level people who cannot be impartial and cannot see value in people of another color. How can I take this man seriously when he's so gigantically wrong about such a simple issue ? Also, that "we're told that unfree Javascript" bit is stupid and cringe, I have to agree. That should be improved. By the FSF.
    24
  51. 24
  52. 24
  53. 23
  54. 23
  55. 23
  56. 23
  57. 22
  58. 20
  59. 20
  60. 20
  61. 20
  62. 20
  63. 20
  64. 19
  65. 19
  66. 18
  67. 18
  68. 18
  69. 18
  70. 18
  71. 17
  72. 17
  73. I think the FSF attitude is EXACTLY what is needed and what they should do. No cracks in the armor, as you say. It protects us from getting complacent and "slowly boiled". It protects against slippery slopes. It defines the golden standard, and it's very nice to see people catering to that, despite the immense hurdles in doing so. It really saddens me that many people regard the FSF as irrelevant or extremist just because they actually stand by their position and don't compromise on their ethics. They are important for seeing what the golden standard is. It's up to you how much of it you want. In practical terms, for now, going 100% is very limiting. But that's the good thing: we know, we are aware of that! If you want to go full privacy and full freedom, you know what to do, you know how to get there, you know what you have to ditch. And I haven't heard of FSF-endorsed software actually being non-free in any regard, so they are doing a good job there, as far as I know. It also REALLY saddens me that some people think that endorsing the FSF somehow requires that you yourself, on all your computers, run 100% free software, and then they see the impracticality of it (like Richard Stallman going without a cellphone and running 15-year-old laptops) and promptly reject the idea in its entirety. When it should actually be taken as a sign that more work has to be done to get free software to be a decent alternative. You can run and use whatever you want, just try to help the idea (mind-share, testing, documentation and, of course, programming & other things) move into a better place. It's akin to someone seeing a poor, weak person, who is poor and weak through no fault of their own, and being disgusted by them and running away. That's not the right attitude, that person should be helped. Same with free software, it should be helped so it grows into a decent alternative.
    17
  74. 16
  75. 16
  76. 16
  77. 16
  78. 16
  79. 15
  80. 15
  81. 15
  82. 15
  83. 15
  84. 15
  85. 15
  86. 14
  87. 14
  88. 14
  89. 14
  90. 14
  91. 14
  92. 14
  93. 13
  94. 13
  95. 13
  96. 13
  97. 13
  98. 13
  99. 13
  100. 12
  101. 12
  102. 12
  103. 12
  104. 12
  105. 12
  106. 12
  107. 12
  108. 12
  109. 12
  110. 12
  111. 12
  112. 11
  113. 11
  114. 11
  115. 11
  116. 11
  117. 11
  118. 11
  119. 11
  120. 11
  121. 11
  122. 11
  123. 11
  124. 10
  125. 10
  126. 10
  127. 10
  128. 10
  129. 10
  130. 10
  131. 10
  132. 10
  133. 10
  134. 10
  135. 10
  136. 10
  137. 10
  138. 10
  139. 10
  140. 9
  141. 9
  142. 9
  143. 9
  144. 9
  145. 9
  146. 9
  147. 9
  148. 9
  149. 9
  150. 9
  151. 9
  152. 8
  153. 8
  154. 8
  155. 8
  156. 8
  157. 8
  158. 8
  159. 8
  160. 8
  161. 8
  162. 8
  163. Well, Thunderf00t did mention a bit about that. So, you need roughly 1 MWh of energy for a full charge of the Semi. And in decently good conditions, you can count on an average of about 10 hours of peak-equivalent time for solar panels. So to get 1 MWh of energy in a day, you need 100 kW worth of solar panels, which over those 10 hours (well, over more than that, but on average) will gather 1 MWh. Now, solar panels are usually 20 to 22-something percent efficient. That is, they can gather about 20-something percent of the incoming solar energy, which is around 1 kW per square meter on a normal bright sunny day. It can go a bit above that, though not by much. And when it's very bright, you usually have the problem of cooling the panels, otherwise their efficiency drops when they overheat. So, let's assume the panels are at 25% efficiency. That means that 1 square meter can generate 250 W, so in one hour it will produce 250 Wh of energy. Simply put, 4 square meters make 1 kW. To get to our needed 100 kW, we simply multiply the above figure by 100. So a megacharger needs, gasp, 400 square meters of good solar panels, running in good conditions, which California does have, but the northern part of the USA and most of Europe (or the rest of the world, really) do not (the arithmetic is sketched below). Now, a solar panel is usually bigger than a square meter, but less than 2. So the number of typical solar panels would be between 200 and 400, with a strong bias towards around 300, I'd say. Actually, if the conditions aren't that great or the efficiency is not the 25% I used... well, then 400 might actually be the more realistic number.
    8
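The same arithmetic as in the comment above, written out step by step. The 1 MWh, 10 peak-equivalent hours, 1 kW/m^2 and 25% efficiency figures are the comment's own assumptions; the 1.7 m^2 per panel is an extra assumption of mine for the panel count:

```python
# Back-of-the-envelope solar sizing for one 1 MWh "megacharge" per day,
# using the assumptions stated in the comment above.

energy_needed_wh = 1_000_000      # 1 MWh per full charge
peak_hours_per_day = 10           # generous, sunny-climate assumption

array_power_w = energy_needed_wh / peak_hours_per_day    # 100 kW of panels

irradiance_w_per_m2 = 1000        # roughly 1 kW per square meter in bright sun
efficiency = 0.25                 # optimistic panel efficiency
power_per_m2 = irradiance_w_per_m2 * efficiency           # 250 W per m^2

area_m2 = array_power_w / power_per_m2                    # 400 m^2

panel_area_m2 = 1.7               # a typical panel is between 1 and 2 m^2 (assumed)
panels = area_m2 / panel_area_m2

print(f"Array: {array_power_w/1000:.0f} kW, {area_m2:.0f} m^2, ~{panels:.0f} panels")
```

With these numbers you land on 100 kW, 400 m^2 and roughly 200-300 panels, matching the figures in the comment; worse conditions or lower efficiency push the count towards 400.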
  164. 8
  165. 8
  166. 8
  167. 8
  168. I'll sound very disrespectful, but this kind of a review (I know this is a refresh, it doesn't matter this time) is... not good for this type of CPU. First thing: too many gaming benchmarks. It's a waste of time for everybody. Not even streamers should look at this CPU. So gaming benchmarks for this kind of CPU should be between none and at most 2 games, with several seconds of airtime. The other thing: the productivity benchmarks... are too few. This kind of CPU is rarely for one person, and rendering jobs are not everything. This kind of CPU is mostly used in servers. Besides rendering, there are also databases of many kinds, applications, web servers and, above all... virtual machines. That Photoshop score was kind of meh. But how does it do with 6 Photoshop instances at once, each in a different virtual machine ? How about 7 ? Or 8 ? How do the Threadripper or the R9 3950 fare in this ? How many queries per second can it do ? Requests per second ? How much RAM can it have ? How well does it run a special algorithm ? Or another algorithm, but in 50 instances ? Or Docker container farms ? In this video, the W-3175X comfortably won the 7-Zip compression benchmark. How many other applications/workloads does it win in ? Probably not many, if any. But we don't know. And this video sheds way too little light. If you start to factor in all the things said above, you start to realize that this kind of review misses the point for this kind of CPU. It spends too much time on benchmarks that are not relevant, misses a lot of benchmarks or workloads that are relevant, and I guess it also kind of speaks to the wrong audience. All in all, I think this is mostly just a time waster. The folks at the big corporations that buy these CPUs don't decide based on this review. And those of us who look at this will never buy something like this.
    8
  169. 8
  170. 8
  171. 8
  172. 8
  173. 8
  174. 8
  175. 8
  176. 7
  177. 7
  178. 7
  179. 7
  180. 7
  181. 7
  182. 7
  183. 7
  184. 7
  185. 7
  186. 7
  187. 7
  188. 7
  189. 7
  190. 7
  191. 7
  192. 7
  193. 7
  194. 7
  195. 7
  196. 7
  197. 7
  198. 7
  199. 7
  200. 7
  201. 7
  202. 7
  203. 6
  204. 6
  205. 6
  206. 6
  207. 6
  208. 6
  209. 6
  210. 6
  211. 6
  212. 6
  213. 6
  214. 6
  215. 6
  216. @Ayy Leeuz I think you confuse "tool" with "abstraction". When you said: "that is not the purpose of abstraction at all. abstraction is not removed from actuality so you don't need to understand how it works in actuality, nonsense. abstraction is a means of simplifying for those who understand" - that is totally wrong. That is what a tool (or library, in broader computer terms) does: it allows you to apply your knowledge much more easily, without going into the details, but you have to understand how it works, what it does. An abstraction just gives you the ability to use a concept without knowing its implementation details. That includes not knowing them at all. Of course, it should still let you know its limits and side effects, if there are side effects. And because all abstractions have limits, eventually you will learn how they work, when you need something better/different. For example: can someone learning JavaScript get good at it, be productive and put out good code without knowing what pointers are, or basic structures like linked lists ? Absolutely! Of course, at some point they should learn about them, or it would be good to. Say they start with frontend development, go into backend development, start using databases, and at some point they need to optimize a query and then learn that there are hash-based indexes and btree-based indexes (see the toy example below). Because the language (JS) abstracted away the memory management, it lowered the barrier to entry, but that doesn't mean those who use it are useless, or that whatever they code (until they learn how everything works from the transistor level up) is bad or garbage. It just means that they're limited in what they can do. And that "limited" is actually still quite sought-after and useful these days, when there are a lot of programs, apps, APIs and so on to be built. Blame it on the hardware, which allows programs that are literally 10000x slower than a decently optimized one to still be useful. Overall I think you're wrong, and I agree with Grog. I hope you never used malloc until you properly learned what it has to do, otherwise you're a bad programmer too and should quit the industry. See how ridiculous that is ? You're basically saying that everybody should know assembler before they can use any other language. Insane!
    6
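A toy illustration of the abstraction point (my own example, not from the thread): you can use a Python dict productively without knowing it is a hash table, and the implementation only starts to matter once you hit a limit of the abstraction, such as needing a range query, which is the hash-index versus btree-index situation mentioned above:

```python
import bisect

# Abstraction in action: a dict is just "a mapping from keys to values".
# You can be productive with it without knowing it's a hash table underneath.
ages = {"ana": 31, "bob": 27, "cam": 45}
print(ages["bob"])                      # exact-key lookup: the abstraction just works

# The limit shows up when you need something the abstraction wasn't built for,
# e.g. a range query ("everyone between 25 and 35"). A hash-based structure
# can't answer that directly; you reach for a sorted structure instead
# (the same idea as a btree index in a database).
sorted_ages = sorted(ages.values())     # [27, 31, 45]
lo = bisect.bisect_left(sorted_ages, 25)
hi = bisect.bisect_right(sorted_ages, 35)
print(sorted_ages[lo:hi])               # [27, 31] -- only now do internals start to matter
```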
  217. 6
  218. 6
  219. 6
  220. 6
  221. 6
  222. 6
  223. 6
  224. 6
  225. 6
  226. 6
  227. 6
  228. 6
  229. 6
  230. 6
  231. 6
  232. 6
  233. 6
  234. 6
  235. 6
  236. 6
  237. 6
  238. 6
  239. 6
  240. 6
  241. 5
  242. 5
  243. 5
  244. 5
  245. 5
  246. 5
  247. 5
  248. I disagree with the ranking. Yeah, if you already know what to expect and what you have to do, it is easy. But in some cases, let's take Gentoo, you do have to learn quite a few things for it to work. And as nice as the wiki is, there are still things that you kind of need to know beforehand, otherwise you'll be in for a bad time. Like someone mentioned, when it breaks... well, it's not so easy anymore, now, is it ? And just the amount you need to learn, what packages do what, what an OS requires in general and so on, there's a lot to learn. That's objectively difficult. Of course, there is such a thing as something being difficult no matter how much you know. I'd say that barely applies here at all. Everything related to installing and maintaining an OS is knowledge-based. There's no realtime dexterity or attention/observation/perception contest where you lose if you don't react to an event in less than 1 second. Back to the Gentoo example. Take people who a) haven't used Linux before or don't know that much about it and b) are not developers; if you tasked them with installing Gentoo (as their first Linux install), I bet you that more than 99% won't be able to do it in less than 24 hours. On a computer where compiling everything needed takes less than 4 hours. They'll have A LOT to learn. Well, maybe some may be able to do it faster, if they skip the documentation and it happens that the code examples all work. If you take the "time to learn" parameter out of the equation, most things in life become easy.
    5
  249. 5
  250. 5
  251. 5
  252. 5
  253. 5
  254. 5
  255.  @barongerhardt  There are times where you normally use feet but have other things expressed as "half a mile". Which is 2640 feet in length. Oh yeah, I forgot about yards. Those I haven't used or been forced to use much. Anyway, the age is not exactly an argument. I was using it more as mockery, like Neanderthal = uses feet, contemporary man = uses meters. First, using "foot" as a measure does go back to around the Bronze Age. Of course it wasn't the exact foot we have now. But the idea is the same, just that now we have tools and conventions, so it's the same everywhere (well, almost everywhere, that's how the "international foot" got to be a thing). The meter is still superior in that it was designed from the start not to be something as subjective as a human foot or cubit or anything else human-related. First it was based on Earth's size, and quite soon a reference bar was created. But to get back to in, ft, yd, miles, pounds, ounces, gallons and the rest, the real benefit of the metric system is indeed that it's all in base 10. You might barely, if ever, need to care about mm in the same sentence as Mm (1000 km), but it does matter immensely. Because you use small units here, which interact with medium units elsewhere, which eventually matter for big units somewhere else. Not having headaches converting is very useful, both in time and in chances of mistakes. Basically the context will never matter, since it's so easy to convert from one size to another (a tiny illustration is below). Lastly, indeed, our current languages aren't necessarily better just because they are newer. On that, I do regret many new things, because a lot of the time they shed some old, useful things, just because people feel they won't need them or because they're expensive (like having a phone that can be used for 50 years, not what we have now with smartphones). However, I do have to point out that using NAMES in Latin has nothing to do with the language being good. Speaking in Latin is an ENTIRELY different thing. And, no, basically nobody (including scientists) speaks Latin, which I think you could say is counter-productive. I understand your argument, but your example here is really not good.
    5
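A tiny illustration of the conversion point. The conversion factors are the standard ones; the example values are just mine:

```python
# Imperial: every conversion needs its own memorized factor.
FEET_PER_MILE = 5280
half_mile_in_feet = 0.5 * FEET_PER_MILE      # 2640 ft, as mentioned above

# Metric: every step is a power of ten, so the "context" never matters.
half_km_in_m = 0.5 * 1000                    # 500 m
half_km_in_mm = 0.5 * 1000 * 1000            # 500 000 mm

print(half_mile_in_feet, half_km_in_m, half_km_in_mm)
```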
  256. 5
  257. 5
  258. 5
  259. 5
  260. 5
  261. 5
  262. 5
  263. 5
  264. 5
  265. 5
  266. 5
  267. 5
  268. 5
  269. 5
  270. 5
  271. 5
  272. 5
  273. 5
  274. 5
  275. 5
  276. 5
  277. 5
  278.  @marcogenovesi8570  "is Unix phylosophy against dependency now? Does your scripts work without all the grep/awk/count/sort/whatever?" I think you didn't understand the Unix philosophy, at least judging from your example, which is terrible. A script that you write is meant to use those very unix-like-in-philosophy programs, because *they are meant to be used for composition*. That is the sole reason for "only" doing one thing and doing it well: so it can be brought up and used in other scripts/programs with as little unneeded stuff as possible. Does grep use awk/count/sort ? Does awk use grep/count/sort ? Does sort use grep or awk ? Do you understand the difference now ? Back to the topic at hand... have you even bothered to read the thread ? Just two replies above, and 3 hours before you posted your first post, Hanabishi wrote: "can I use systemd as ONLY the init binary and throw out the entire rest of it? You can, technically. It is complicated, but much easier for servers though. You can get rid of logind because "systemd services are not sessions, they run outside of logind". So you can run for example sshd instead of logind. Journald also can be disabled (you will not have a replacement for logs of course, but kernel dmesg works though)." Even somebody new to the topic should see that systemd clearly isn't as unix-like as some would believe. Having 70 binaries is not enough to say they follow the Unix philosophy. Please do check, on your favourite search engine, "Systemd: The Biggest Fallacies" on Jude's Blog. It will take 15-20 minutes. This discussion will be much more productive after that.
    5
  279. 5
  280. 5
  281. 5
  282. 5
  283. 5
  284. 5
  285. 5
  286. 5
  287. 4
  288. 4
  289. 4
  290. 4
  291. 4
  292. This depends a lot on the hardware you have. For desktops and laptops, Linux is still behind in hardware support (especially peripherals) because the companies making them didn't bother to write drivers for Linux, and OF COURSE they don't open-source the Windows ones or offer schematics or documentation of any sort. Also, even for Windows, the stability of the system is, again, determined by hardware. And user behaviour. I'm now writing this from my almost 7-year-old laptop that I've used DAILY for both work and personal use (aka A LOT). It still has the original Windows 10 install, except that at one point I upgraded to the Pro version to get rid of those F^#^&$#ING automatic updates (best $11 ever spent). In almost 7 years I've had 4 (four) blue screens of death. And after I configured it so it updates when I say so, not when it wants, I can get to really high uptimes (I usually try to stay between 1 and 2 months, but the longest I got was exactly 100 days. It still ran fine, but I went immediately to update it, so as to not risk being the idiot who gets hit by ransomware). Still, if you get a new laptop or pre-built desktop, you can check System76 or Tuxedo or Framework to get one that's guaranteed to work flawlessly with Linux. And even without that, Linux is evolving fast, so your experience from 1 year ago can be drastically improved. It's still not guaranteed. But in, say, at most 5 years, I think that more than 99% of people would be perfectly served by Linux. By then Wayland and HDR will be mainstream and mature, the GPU drivers will be perfect for all 3 vendors, the anti-cheat systems in games should no longer be a problem, all except the most obscure games should be playable, everything except Adobe software should also work with no hassle, and drivers for over 99% of components and peripherals should be available.
    4
  293. 4
  294. 4
  295. 4
  296.  @SisypheanRoller  Damn it, if my net hadn't dropped at exactly the wrong time, I would've posted this hours ago and the many replies that I see now would've been... better. So, regarding the monolithic part - the number of binaries is indeed not relevant (though often easy to tell at a glance). The idea is the coupling. If you have one giant binary, or one main binary and another 10 binaries but with a hard dependency on them (or on just one of them), then you have a monolithic program. In our case (unless it has changed recently, I haven't checked) journald is a prime example. It is a component of systemd that cannot be (safely) removed or changed. It is a separate binary from systemd, but because of the hard coupling, it effectively is part of systemd. To systemd's credit, the number of binaries that have stable APIs and that can safely be swapped for 3rd-party binaries has increased over the years. One can hope that eventually that will be the case for all of them, and that everyone will then be able to use as much of systemd as they need and replace anything that they don't like.
Getting back to the UNIX philosophy of "do one thing and do it well", unfortunately many, many people don't understand it and spew bullsh!t about it being outdated or other such nonsense. The idea is that such programs (tools or, in a broader sense, infrastructure) should do one thing and do it well in order to have effective interoperability. In order for that program to be easily and effectively used in scripts or other programs. Since you mentioned it, the "one thing" is not important. It can be any thing, as long as a) it is complete enough that in most cases it can be used alone and b) it is small enough that you don't have to disable a lot of it for normal cases, and that simply running it doesn't make the CPU/memory requirements significantly higher than what is actually needed in the typical use case. This can be as simple as listing the contents of a directory (ls) or transcoding video and audio streams with many options for editing and exporting (ffmpeg). Is ffmpeg massively complex ? Yes! Do people complain that it violates the UNIX philosophy ? Not to my knowledge. Why ? You can use it effectively with the rest of the system, you can script around it. And it works well. OBS using it under the hood is a testament to that too.
Lastly, here's a practical example of why not following the UNIX philosophy is bad, which hopefully also answers Great Cait's question of why the hate: search for CVE-2018-16865. It's a vulnerability that was found in journald several years ago and was later fixed. The problem is that it's pretty high severity. And... you cannot simply disable or remove journald (or couldn't at that time). You can use rsyslog alongside journald, but because they made it so coupled, you literally cannot remove it and still have a working system. Imagine the stress levels of system administrators who found out that they have a big security risk that they cannot disable/remove/replace; they just have to wait for an update. That's the hate. Yeah, it works pretty well. But it's not perfect. And it's being shoved down our throats in a "take it all or leave it" manner that is a slippery slope towards potential big problems down the line, when everyone is using it and suddenly some massive vulnerability hits it, or Red Hat pushes something onto it that everybody hates, or things like that.
And people will suddenly realize that "oh, sheet, what do we do now, we have no alternative, we cannot change 70 programs overnight". And it's annoying, because we know how to do better. Hopefully it can change to be fully modular and non-monolithic, so something like what I wrote above cannot happen.
    4
  297. 4
  298. 4
  299. 4
  300. 4
  301. 4
  302. 4
  303. 4
  304. 4
  305. 4
  306. 4
  307. 4
  308. 4
  309. 4
  310. 4
  311. 4
  312. 4
  313. 4
  314. 4
  315. 4
  316. 4
  317. 4
  318. 4
  319. 4
  320. 4
  321. 4
  322. 4
  323. 4
  324. 4
  325. 4
  326. 4
  327. 4
  328. 4
  329. 4
  330. 4
  331. 4
  332. 4
  333. 4
  334. 4
  335. 4
  336. 4
  337. 4
  338. 4
  339. 4
  340. 4
  341. 4
  342. 4
  343. 4
  344. 4
  345. 4
  346. 4
  347. 4
  348. 4
  349. 4
  350. 4
  351. 4
  352. 4
  353. 4
  354. 4
  355. 4
  356. 4
  357. 4
  358. 4
  359. 4
  360. 3
  361. Hey Louis, thanks and congrats for all the work you do, especially this thing: contributing to getting a common-sense law passed, aka working for a better world. Now, I feel the need to express some things:
1) Even though things like the right to repair don't make much sense as something considered only at a local scale, I do feel that the senator is entitled to ask anyone where they are from, even if it sounds totally dumb. For all they care, maybe the people of Nebraska actually don't want this law, but somehow a lot of people from other states come to plead for it. Surely Nebraskans can be found to come and plead, so this is no longer an issue.
1.a) When the senator had nothing good to say about that satisfied customer from Nebraska... I see that as perfectly normal. He is not there to congratulate anybody. It's normal, given the time constraints, to only ask/talk about the things that he doesn't like/know etc. I'd say, if all he had to make was a stupid argument, then all the better, as that suggests that everything else was ok, and that stupid argument can be cleared up with ease.
2) In general I think that in order for a law to be passed, or at least for it to move further after this kind of talk (I don't know exactly how it works in the US, I'm from Romania), the senators DO have to ensure that all aspects are taken into account. You can think of them as playing devil's advocate. However dumb a question might be, you guys should be prepared to answer it, so the thing you're pleading for is, beyond any doubt, good/better for all people, especially law and politics people. Think of them being like "ok, so you want this law that seems pretty common sense. But, you know, there are big companies (or anyone else for that matter) that might not want that, and we're not technical enough to call bullshit on their part. How will you tackle this?" Aka it's your job to provide as much evidence as possible that this will have no secondary effects, or unforeseen situations, or abusable situations, or affect unrelated parties etc. And that the things affected are affected with a reason (the right to repair will lower Apple's income, but will give consumers their right of ownership over the bought part, or their human right, dunno). It does sound a little like you'll have to do their work, but... such is life.
3) As AkolythArathok said in this comment section, there needs to be a more serious posture. Talking about the Repair Family is kind of distracting from the point as well. Or things like "hey, I have here a customer who is so happy, yay!". You actually did her job here by very clearly and succinctly saying "we do data recovery which the customer has no option to get done at the manufacturer, for any amount of money". That is how I think a point should be made.
All in all, it was kind of sad, but totally not surprising, to see this. And I have to congratulate you on your speech. It was very on point, with clear arguments and examples. Now all you have to do for next year is to have everybody supporting this be as efficient and articulate as you :) And have everybody be able to totally demolish all the (dumb) counterarguments presented here. And, as you very well observed, to have this lobbying done prior to the talk. The talk is just a showcase.
    3
  362. 3
  363. 3
  364. 3
  365. 3
  366. 3
  367. 3
  368. 3
  369. 3
  370. 3
  371. 3
  372. 3
  373.  @spuriouseffect  Sorry, you keep confusing government or state ownership with bureaucracy. And you think that private = bureaucracy-free. Which is totally wrong. They are very different things. At a very basic level, having bureaucracy means having written papers about rules and agreements. Any kind of contract, if it's written - BAM, there you go, bureaucracy. Does your fancy Sweden have nothing but verbal agreements between all the companies ? I seriously doubt it. Bureaucracy is often seen as negative when it's done inefficiently, which is very easy to do. The goal of bureaucracy is to have everything stated and noted, so everything can be tracked and everybody can know and check the current or past rules and agreements & their states at a given time. With a well-constructed bureaucracy, nobody can say "I don't know what I have to do, what my responsibilities are" or "I don't know if I'm allowed to do that" or "I don't know who made that or took that decision" and so on. Having that idea in mind, simplistic people then think that everything has to be written down, and whenever something new comes up, one or more rules are created, without regard for the existing rules and the need to have the shortest, most efficient list of rules. Cases where bureaucracy fosters corruption and nepotism are cases where people are using, or actively trying to create, a very heavy bureaucracy where it's impossible to track everything, so they can benefit from that. But it's not the idea of bureaucracy that's wrong, it's the fault of the people who created the system or let it get into that state. It's like saying that houses are useless - they break all the time, you need a lot of time and money for all the repairs, in winter you're cold, in summer you're sweating buckets, insects can fly inside, the walls get moldy etc. etc., so you conclude that houses suck and everybody should move into caves. Well, it's not the idea of having a house that's to blame here, it's how that particular house was built - it should have been built better.
    3
  374. 3
  375. 3
  376. 3
  377. 3
  378. 3
  379. 3
  380. 3
  381. 3
  382. 3
  383. 3
  384. 3
  385. 3
  386. 3
  387. 3
  388. 3
  389. 3
  390. 3
  391. 3
  392. 3
  393. 3
  394. 3
  395. 3
  396. 3
  397. 3
  398. 3
  399. 3
  400. 3
  401. 3
  402. 3
  403. 3
  404. 3
  405. 3
  406. 3
  407. 3
  408. 3
  409. 3
  410. 3
  411. 3
  412. 3
  413. 3
  414. 3
  415. 3
  416. 3
  417. 3
  418. 3
  419. 3
  420. 3
  421. 3
  422. 3
  423. 3
  424. 3
  425. 3
  426. 3
  427. 3
  428. 3
  429. 3
  430. 3
  431. 3
  432. 3
  433. 3
  434. 3
  435. 3
  436. 3
  437. 3
  438. 3
  439. 3
  440. 3
  441. 3
  442. 3
  443. 3
  444. 3
  445. 3
  446. 3
  447. 3
  448. 3
  449. 3
  450. 3
  451. 3
  452. 3
  453. 3
  454. 3
  455. 3
  456. 3
  457. 3
  458. 3
  459. 3
  460. 3
  461. 3
  462. 3
  463. 3
  464. 3
  465. 3
  466. 3
  467. 3
  468. 3
  469. 3
  470. 3
  471. 3
  472. 3
  473. 3
  474.  @hiiaminfi  I think you're so obviously wrong, I'm not even sure if you're genuine. However, there is one specific point I want to address here, to leave it in writing. In regards to "5. He is not fit to be in any position of power and the reasons for that are documented in a lot of detail in the stallman report 6. "He has done great things" is not a reason to keep him in a position of power. It is a reason to not stop using the great things he did in the past, e.g. Thomas Eddison was a terrible person and no sane person would argue that we should therefore stop using electrical light." He ABSOLUTELY is fit to be in a position of power and there are PLENTY of reasons for that. Because he has demonstrated countless times that he knows what free software is about. He knows what privacy is about. He knows how licensing works. Whenever he speaks about having free software, or why proprietary software is not good, and about nuances on the topic, everybody should listen. Because, like I said above, he has demonstrated time and again that on this topic he's VERY knowledgeable. I'm sorry to have to inform you that the FSF is about free software, not about anybody's views on whatever sexual-related things he's been criticised for having an opinion on. If you don't like it, go make your own DEI Software Foundation and prove that it's better. That way, the rest of us who actually care about basic human rights and software can continue working on software for everybody and not waste time on things like this.
    3
  475. 3
  476. 3
  477. 3
  478. 3
  479. 3
  480. 3
  481. 3
  482. 3
  483. 3
  484. 3
  485. 3
  486. 3
  487. 3
  488. 3
  489. 3
  490. 3
  491. 3
  492. 3
  493. 3
  494. 3
  495. 3
  496. 3
  497. 3
  498. 3
  499. 3
  500. 3
  501. 3
  502. 3
  503. 3
  504. 3
  505. 3
  506. 3
  507. 3
  508. 3
  509. 3
  510. 3
  511. 3
  512. 2
  513. 2
  514. 2
  515. 2
  516. Holy crap! An entire 54-reply-long thread and all but one of the comments totally miss the point! @Torgo I'll answer you, since at least you honestly stated your opinion, argued it and presented it in a non-flaming way. So, the most important thing: free software DOES NOT mean the software costs $0. This is a problem with this label; I really wish Stallman would make an effort to name it otherwise so there's less confusion about this. What RMS is advocating for is the freedom to see/check, modify and (re)distribute the (modified or not) software (though the redistribution part of paid software is kind of weird). So, just like after you buy a table you are free to check how it was made, and you can modify it and even sell it, RMS wants to be able to do the same things with software. Theoretically you could check a program by disassembling it, but the time investment for that is massive. And in most cases it's also illegal. As a side topic, the means of revenue from software in these cases is complicated. Nothing would prevent someone from buying a piece of software, then putting it on piratebay and everyone else getting it for free. For games, I guess that one way of doing it would be like John Carmack/Id Software did - make it closed source, and release it as open source several years later. But you still have the privacy concern. Highly specialized software (websites included) is usually already made for one specific company, and the company (aka the customer) does receive the source code too, so there is no problem there. Getting back to Stallman, another thing is that (from what I've seen) he's not advocating that others should do like him and impose such enormous restrictions on themselves just to be free. He is just stating what lengths he (and anyone else) has to go to in order to be mostly free. Just so that you can see how bad the situation is. So you shouldn't think about how impossible and impractical (paranoid, if you like) it would be to do the same, but how good it would be for more and more people to work, collaborate and contribute to free software, so that being free is easier and more of a sane/easy choice.
    2
  517. 2
  518. 2
  519. 2
  520. 2
  521.  @DimitrisKanakis  "It s still unclear where the power moving the system forward is generated" I have some doubts too, but it's all from the wind. Let me see if I can make an argument: let's say we have a 10 m/s wind. The thing is that the wind is so strong that it will push the cart no matter if it weighs 10 kg or 100 kg. In that sense, we can say that its force and power are uncapped, for our purposes (since we don't know at which weight it would stop pushing the cart to 10 m/s). But even if the cart weighs only 0.1 grams, the wind will only transfer enough power to get it to 10 m/s. In all cases it imposes a 10 m/s^2 acceleration, but the mass differs, so the force differs. Now, because of that, the cart can, normally, only use (is this the correct term?) the power required for its own weight. Let's assume it has 10 kg. And that it's on perfect ice (no friction with the ground). Pushing 10 kg at 10 m/s with a 10 m/s^2 acceleration directed forward means the power needed would be m * a * v = 10 * 10 * 10 = 1000 kg * m^2 / s^3, which is 1000 W. And that is the main thing. At the wheel level, they have the same power, since they are forced to move with the cart, at the 10 m/s velocity, and the torque is... I guess, the same force as the cart's. But the wheels are also connected to the propeller. So they transfer (for simplicity) all the 1000 W to it. But the propeller is a bigger "wheel", so while it has the same power & force, because it's bigger, it will result in lower speed. It's the torque formula which determines the correlation between force, radius and power. Basically, at a given, constant power, the bigger the radius, the smaller the force, and vice versa. And likewise, at a given, constant force, the bigger the radius, the smaller the power, since a bigger radius means lower velocity. And because friction is a thing, at a low enough force the thing won't rotate. In a way, I think that the propeller acts as extra weight, extra friction resistance, because it's connected to the wheels, which "saps" more power from the wind, which in turn it uses to propel the cart faster. So of the initial 1000 W that I mentioned, some will be used by the propeller, so there's less power for the wheels. But the wind will compensate with extra power so the cart still has a 10 m/s^2 acceleration forward. So now (well, at the same time) the cart will receive power for both moving the wheels of the cart and the propeller. So you could say that the wind provides the cart with, say, 1500 W, of which 1000 W are used by the wheels to push the cart forward and 500 W by the propeller. Hope it makes sense. There's also the thing that, if the cart is faster than the wind, then theoretically it doesn't receive any power from the wind. I'm still not sure how to explain that. In a way I can say that both the propeller and the wind are, combined, pushing the cart. But then again, if the cart has been faster than the wind for some time, then it should only go on the propeller, and maybe inertia... dunno.
    2
  522. 2
  523. 2
  524. 2
  525. 2
  526. 2
  527. 2
  528. 2
  529. 2
  530. 2
  531. 2
  532. 2
  533. 2
  534. 2
  535. 2
  536. 2
  537. 2
  538. 2
  539. 2
  540. 2
  541. 2
  542. 2
  543. Here I place a formal request for Theo to STOP DOING MULTIPLE MULTIPLICATIONS AT ONCE. And in general to STOP RUSHING over delicate things/parts like that. The rather large and significant math mistake at the end would've been TRIVIALLY avoided by not rushing so much and doing the math in multiple steps. Like first stating how many requests there are per minute, then per hour, per day, then per month. Or at least something like 3600 seconds per hour and 720 hours per month, to keep things still simple (see the step-by-step example below). I really mean it that it was a rush, as what I requested above takes SECONDS extra to write. Can you not feel dumb for making a big mistake because you wanted to save, say, 1 minute out of a 74-minute video, kinda spoiling it at the end ? Also, Theo, please put the multiplication in the written part too. If you miss something, both you AND we can see it there much more easily, and not have to search in the calculator app history. Not to mention that you switch things so fast there, it's very hard to keep up and spot the mistake right then (that is, without pausing); it almost looks like you WANT to hide something (which I know you don't, I'm just stating how bad this is). After all, it was stated that serverless is more expensive and that the cost difference might not matter that much anyway, so the overall conclusion is still kind of there, but it's still so, SOO frustrating to see (not for the first time) such glaring mistakes, which also might make someone less knowledgeable really question the video and its conclusion. For something so easily avoided. STOP RUSHIIIIIIIII
    2
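This is the kind of step-by-step arithmetic being asked for, with a made-up request rate (the actual numbers from the video are not reproduced here):

```python
# Doing the multiplication in explicit steps instead of all at once,
# so any slip is visible immediately. The request rate is a made-up example.
requests_per_second = 50

requests_per_minute = requests_per_second * 60        # 3_000
requests_per_hour   = requests_per_minute * 60        # 180_000
requests_per_day    = requests_per_hour * 24          # 4_320_000
requests_per_month  = requests_per_day * 30           # 129_600_000

# Or the shortcut mentioned above: 3600 seconds per hour, 720 hours per month.
requests_per_month_check = requests_per_second * 3600 * 720

assert requests_per_month == requests_per_month_check  # 50 * 3600 * 720 = 129_600_000
print(f"{requests_per_month:,} requests per month")
```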
  544. 2
  545. 2
  546. 2
  547. 2
  548. 2
  549. 2
  550. 2
  551. 2
  552. 2
  553. 2
  554. 2
  555. 2
  556. 2
  557. 2
  558. 2
  559. 2
  560. 2
  561. Good idea for a video! I have several points to make:
- The term "review" is a bit subjective. For me it always hinted at some sort of completeness. That is, painting an accurate description of the thing reviewed, which implies going more in depth, in order to know what works and how, so you're not fooled by something pretty that will break on the next update. After reading/watching a review, I expect anything that's not in it to be a detail of little importance. So in that regard I, too, agree that what DT does is more of a "first look" at a Linux/BSD/GNU distro.
- I think there is value in having a more in-depth, longer-term look at a distro, including for newer-to-Linux people. When looking at a "first look", you mainly see how it looks and the general idea of what a distro wants to be. However, it doesn't tell you much about how it actually functions, how it is to daily drive it. For people to switch to a(nother) distribution, they also need to know how hassle-free or hassle-light the new distro is to drive, not just to install and look at. Things like how often you should update stuff, and how that works. Customizing something non-trivial, how is that done ? When an update breaks, how easy is it to fix or revert ? When something doesn't work, how easy is it to make it work ? Upgrading from one version to another, is that ok (where applicable, like Fedora 36 to 37 or Ubuntu 20.04 to 22.04) ? It's things like these that also help when choosing a distro, that give you some knowledge and peace of mind that you'll get along ok with that distro, since you have somewhat of a deeper understanding of how it works and how to tackle it in times of need. Of course, some of this is maybe not distro-specific but specific to a package manager or to Linux in general. If I can make a really bad analogy, it's like having a first date and the girl tells you that she cooks. Three months into the relationship you realize that she only cooks pasta and doesn't even want to try to make a soup. Or, one month after you installed a distro, you realize that watching videos on YouTube turns your laptop into an airplane simulator, because the distro you chose doesn't have a browser with hardware acceleration or doesn't have the proprietary NVidia drivers. Things like these are very valuable to know beforehand (man, do I like soup!).
- Lastly, regarding whether reviews or first looks can be harmful. Well, they can, even if the reception from both viewers and the maintainer(s)/owner(s) is positive. It's all about setting expectations. The idea is that a distro might get a first look where it's shown to be nice and all. And a user (probably a novice) might try to install it, but have all sorts of problems. Either at installation or later, when something doesn't work. And that person might quit Linux altogether after a bad enough experience. I think the best example of how to do it right is Gentoo. Everybody reviewing Gentoo mentions that it's source-based and that it is time consuming, unless you have a really fast computer. And that, even so, it's for advanced users, all the responsibility is shifted to you, and you basically only have several tools for automating stuff. And this is good, it gives the warnings that it needs to give, so somebody seeing it might like it, but realize that they won't have the time to actually maintain it.
    2
  562. I'm in the same boat. But because he's DT, I asked that, with the same disclaimer that I'm on the side of not believing this is actually useful for society. I asked that on the first video on the other channel, but I think I'm shadow banned; I have no replies and no likes, kind of like nobody sees it. I'm actually curious, because I'm hoping that DT wouldn't go into this unless he feels that it's at least ok-ish, morally speaking. I can never know. But since I'm not that much into this, it's an opportunity to learn; maybe I'm the one who will change my mind. So far he has explained what the options are, and overall I got the idea; it's a nice tool for lowering risks, by the looks of it. But on the "how is this useful for society" part I'm still clueless. I mean, for the companies and for actual investors (people who are actually there and put money in to invest, not to simply profit by playing the trading game), for them I understand: it's a way for a company to receive funding fast, just by providing trust that it will do well (financially and/or for society). But for those who are only there to play the trading game and profit from it, I feel like they're simply intermediaries, leeches that profit from the success of others and overall make the place worse (they can also amplify bad stuff like hype or other speculation, which sometimes can be fabricated). I really hope that I'm at least partially wrong and that these traders actually do provide some value for the system, but I highly doubt it. Sigh
    2
  563. 2
  564. 2
  565. 2
  566. 2
  567. 2
  568. 2
  569. 2
  570. 2
  571. 2
  572. 2
  573. 2
  574. 2
  575. 2
  576. 2
  577. 2
  578. 2
  579. 2
  580. 2
  581. 2
  582. 2
  583. 2
  584. 2
  585. 2
  586. 2
  587. 2
  588. 2
  589. 2
  590. 2
  591. 2
  592. @sulai9689  Here is a list:
3:15 There was encapsulation before OOP, including in C.
5:28 "before OO there were no maps or lists or sets" - this is 100% wrong from every possible angle. First, no OO is required for that; check the article "Dataless Programming" by R. M. Balzer of RAND Corp from 1967. Second, a very trivial example: linked lists have existed since 1956. LISP, one of the oldest high-level programming languages, uses lists extensively. Since 1958. And in general, there have always been abstractions in programming, and the more complex languages become, the more abstractions they acquire; this has nothing to do with OOP at all.
6:05 Polymorphism is not an OOP idea either. Ad-hoc polymorphism, if I'm not mistaken, first appeared in ALGOL 68. Very not OOP. What he refers to more specifically is subtyping polymorphism, I assume specifically via interfaces. What he says about printers is extremely well shown to work without OOP in the open, read and write syscalls. You have no idea, and don't have to bother knowing at all, what exactly you are writing to. It's having an API that matters for allowing polymorphism (a small sketch of this is below). Getting back to interfaces, the ML language has many of these things (polymorphism, encapsulation, modularity) also without OOP, since 1973. You could say that this specific polymorphism - interface subtyping - was created and popularized by OOP, but overall, he still presented it in a very misleading manner.
8:45 The overwhelming majority of drivers are written in C. And not just in Linux.
    2
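A small Python sketch of the open/read/write point (my own example, not from the video): the function below doesn't care what it's writing to, only that the target exposes a write() method; no shared base class or OOP hierarchy is required:

```python
import io
import sys

def write_report(out):
    # Polymorphism via a shared API: 'out' just needs a write() method.
    # The function has no idea whether it's a terminal, a buffer or a file.
    out.write("report line 1\n")
    out.write("report line 2\n")

write_report(sys.stdout)                 # standard output / a terminal
write_report(io.StringIO())              # an in-memory buffer
with open("/tmp/report.txt", "w") as f:  # an actual file on disk
    write_report(f)
```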
  593. 2
  594. 2
  595. 2
  596. 2
  597. 2
  598. 2
  599. 2
  600. 2
  601. Besides having well-structured, well-done etc. documentation like others said, there's one more thing: context. Just saying "having documentation" is too general. Does it mean a man page ? A --help option ? A long web page ? A wiki ? A series of tutorials ? These are all good resources, but for different contexts. So, for command line programs, a man page is required. Unless I'm mistaken, "man" comes from "manual". A manual has to be detailed and, as far as possible, complete. So I'd say that a massive 100-page-long man page for find is ok. But, staying with command line programs, many times you also only need to check some flags, get some examples and things like that. That's where something like --help or maybe even more flags (maybe --examples?) come to mind/help (a sketch of this is below). You should have all the details in the man page, starting with how to get these quicker, simpler bits of help, so you can type your command in less than 1 minute and continue what you're working on. For complex programs (like Blender, or even ffmpeg) I think that a wiki and a series of tutorials are also needed. Especially if using images or video or audio would simplify the teaching (those cases where an image is worth 1000 words). For source code, yeah, documentation is good. Since I'm a programmer myself: the problem is usually that very few people bother writing documentation (because it's hard, tedious and excessively boring; few, few people have the drive or talent or passion to write documentation). However, a bit of suckless mentality does good here. The source code should be as easy to understand as possible (of course, without compromising quality) and only then, when needed (like explaining WHY a function or some line is needed, or what a flag or something like that might mean in an external context), should the documentation be written, usually trying to be as concise as possible. Back to contexts: installing an operating system is not like simply running a command to find all your PHP files. You do need to understand what is happening. So, while reading most of the stuff in the Arch or Gentoo wiki will take days, you will be better off for it, and have far fewer (potential) headaches from the mistakes you didn't make, mistakes that could've otherwise also spread to those you asked for help. Since I mentioned it: maybe everywhere there's documentation, its scope should be mentioned too. Like, a man page could state that it's a long read, and that if you need a quick nudge to get on with your work, check these help options/examples. If you know that you have a complex case, then you know you'll have to spend more time reading the man page. A Blender wiki could state that learning it will probably take months and that you should start with understanding some terminology and workflows and do some tutorials. Documentation like the Arch or Gentoo install guide could (I don't remember if it does) also mention the important parts that you should know, and that you'll probably need several days of reading plus trying to install in a virtual machine before achieving the install, if you've never done it before. Overall, the documentation should set the expectations of how much information it provides and how quickly it can be comprehended. Edit: and also, try to provide shorter/quicker bits of information, where it's safe to.
    2
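A tiny hypothetical sketch (in Python, with made-up tool and flag names, not any real program) of the "quick help plus a pointer to the full manual" idea from the comment above:

    import argparse
    import sys

    EXAMPLES = "\n".join([
        "Examples:",
        "  mytool --name '*.php'    find all PHP files under the current directory",
        "  mytool --newer-than 7d   only files modified in the last 7 days",
        "For the full details: man mytool",
    ])

    parser = argparse.ArgumentParser(
        prog="mytool",
        description="Find files (toy example).",
        epilog="Quick start: mytool --examples   Full manual: man mytool",
    )
    parser.add_argument("--name", help="glob pattern to match file names")
    parser.add_argument("--newer-than", help="only files modified in the last N days (e.g. 7d)")
    parser.add_argument("--examples", action="store_true",
                        help="print a few common invocations and exit")

    args = parser.parse_args()
    if args.examples:
        print(EXAMPLES)
        sys.exit(0)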
  602. 2
  603. 2
  604. 2
  605. 2
  606. 2
  607. 2
  608. 2
  609. 2
  610. 2
  611. 2
  612. 2
  613. 2
  614. 2
  615. 2
  616. 2
  617. 2
  618. 2
  619. 2
  620. 2
  621. 2
  622. 2
  623. 2
  624. 2
  625. 2
  626. 2
  627. 2
  628. 2
  629. 2
  630. 2
  631. 2
  632. 2
  633. 2
  634. 2
  635. 2
  636. 2
  637. 2
  638. 2
  639. 2
  640. 2
  641. 2
  642. 2
  643. 2
  644. 2
  645. 2
  646. 2
  647. 2
  648. 2
  649. 2
  650. 2
  651. 2
  652. 2
  653. 2
  654. 2
  655. 2
  656. 2
  657. 2
  658. 2
  659. 2
  660. 2
  661. 2
  662. 2
  663. 2
  664. 2
  665. 2
  666. 2
  667. 2
  668. 2
  669. 2
  670. I'm on the side of the people freaking out. The thing is, Ubuntu/Canonical have done bad things on this "topic" before, so them "teasing" r/linux is in truly poor taste. That is, I could accept this sort of joke from someone who has a spotless background/history on the matter. If you don't have a spotless history on the matter, then joking about it is totally inappropriate, you don't know how to read the room, and you deserve all the backlash so you learn to behave. When you've done something stupid, you do not remind people about it!!! So, even if it were an acceptable joke, there's still the problem that there's no place for a joke there. I'm human and I do have a sense of humor. I can accept a joke here, on VERY VERY rare occasions, for EXCEPTIONALLY good jokes. Which is totally not the case here. The thing is, the people putting this joke in think they're funny, but they don't think of the impact. Several weeks or months down the line, when I upgrade my sister's computer, seeing the joke for the 34th time is not only not funny, it wastes space on my terminal, wastes energy for my eyes to go past it, wastes brain cycles for me to understand that it's there and that I have to skip it. It's pollution. I think the problem is the goldfish attention span syndrome that seems to be more and more pervasive in current society. We are not able to focus on one thing anymore. Like getting into the mindset that you have something to do, and for the next 5 minutes, 1 hour, 8 hours or whatever, you think about, interact with and do that one thing exclusively, and nothing else, so you're as efficient and productive as you can be. Sure, some people or areas (especially creative/art) can or want all sorts of extras. But that shouldn't become the universal only way to do/have things. It should be the individual adding the extras, not the provider coming with them. It's like how an action movie can't simply be an action movie now. No, it has to have a comedic relief character and the main character must also have a love interest. It's not something bad if a movie has all 3, but it should be the exception, not the norm. There are places for jokes and comedy, and I'll go there when I want jokes and comedy; stop polluting all the other areas with unneeded (and rarely good) funny, that's not the reason I'm here. In conclusion, this particular act is certainly of a very small degree and by itself shouldn't cause much rage. But it shows a fundamental lack of understanding from those at Canonical, and as such, everybody expects them to continue on this stupid path unless someone tells them not to. So that's why the rage is justified and actually needed right now, so they learn that it's not ok and they stop, BEFORE doing something truly stupid and disruptive.
    2
  671. 2
  672. It's the idea of having sense, in general. If you see people doing wrong stuff, you should be bothered, to a point, especially if it impacts you (more) directly. In this case, the main point is that installing into a VM and spending mere hours on a distro is not a review. And I'm totally down with that, it should be called out so people doing these "first impressions" don't label them as reviews. Having proper terms, that is, terms that are not ambiguous and/or that people generally agree upon, makes for better, more efficient communication. For example, I might've heard that Fedora is a really good Linux distro. Now, the nuance is that if I'm perfectly happy with what I have right now, I might only want a quick look at it, to know what it's about, to see why people call it great. Unless it blows my mind, I won't switch to it, so I don't need many details, including not needing to know whether it works just as well on real hardware or how it is after a month, since I'm not into distro hopping right now. However, if I'm unhappy with what I have now and I'm thinking "hmm, this is not good enough, I should try something better, what would that be?" - well, in this case, I would like a review. Something that will give me extra details that make me aware of things I should know in order to make an informed, educated decision. I don't want to see a first look, install it, and after 1 month realize that this isn't working, as nice as it looks, and that I need to hop again. Here a review (long term or "proper" or "full" review, however you want to call it) is something that would probably give me that information in 20-40 minutes, so I can skip that 1 month and go and install directly what I actually need.
    2
  673. 2
  674. 2
  675. 2
  676. 2
  677. 2
  678. 2
  679. 2
  680. 2
  681. 2
  682. 2
  683. 2
  684. 2
  685. 2
  686. 2
  687. 2
  688. 2
  689. 2
  690. 2
  691. 2
  692. 2
  693. 2
  694. 2
  695. 2
  696. 2
  697. 2
  698. 2
  699. 2
  700. 2
  701. 2
  702. 2
  703. 2
  704. 2
  705. 2
  706. 2
  707. 2
  708. 2
  709. 2
  710. 2
  711. 2
  712. 2
  713. 2
  714. 2
  715. 2
  716. 2
  717. 2
  718. 2
  719. 2
  720. 2
  721. 2
  722.  @BrodieRobertson  There's this ... let's say feeling, since I'm not so sure exactly how factual it is, but the idea is that Lunduke is apparently about the only one digging into and reporting on all sorts of these issues. These foundations seemingly got more and more corrupt and woke, trying to censor what they don't like. And he is also apparently banned from a lot of them, and even banned from being mentioned. The upshot is that if you do find that his investigations are good, you could mention that he also covered the topic. In these clown-world times, this is needed. And it would also show that you're not under some control. Then again, people and foundations having a problem with Lunduke might start having a problem with you if you give him even a modicum of publicity. Speaking of which, if you feel bold and crazy, I would really enjoy a clip / take on this whole Lunduke situation. Its history and current status, how you think this whole situation is, and how split the bigger Linux and FOSS community is about him. I personally started watching him recently and he seems genuine, but it's still too early to be sure about that. And the things he's reporting on... not gonna lie, they kind of scare me. The Linux Foundation having a total of 2% of its budget reserved for Linux and programming, and 98% for totally unrelated stuff, that can't be good long term. It seems like all of these foundations, being legally based in the USA, have a systemic problem of being infiltrated by people who do not care about the product(s) that the foundation was originally built on. If these aren't course-corrected, or others that are free from all this drama don't arise, I truly fear for the future of Linux and FOSS in general.
    2
  723. 2
  724. 2
  725. 2
  726. 2
  727. 2
  728. 2
  729. 2
  730. 2
  731. 2
  732. 2
  733. 2
  734. 2
  735. 2
  736. 2
  737. 2
  738. 2
  739. 2
  740. 2
  741. 2
  742. 2
  743. 2
  744. 2
  745. 2
  746. 2
  747. 2
  748. 2
  749. 2
  750. 2
  751. 2
  752. 2
  753. 2
  754. 2
  755. 2
  756. 2
  757. 2
  758. 2
  759. 2
  760. 2
  761.  @protocetid  Actually, TDP is not how much a chip uses, though it's not that far off either. When you don't have anything else, and if you don't need/expect good precision, it can be used instead. The thing is that TDP is how much HEAT a cooler for that chip should be able to dissipate. Which, if you think about it, should never be exactly the same as how much the chip draws, unless the chip is literally a resistor. As an example, AMD's top Zen 4 desktop chips have a 170W TDP, but they can consume (without overclocking) 230W under sustained load. Fortunately, this is one of the bigger deviations; usually the TDP is closer to the actual consumption. Well, PEAK sustained consumption! The chip, if not under full load, will always consume (much) less. And in short bursts, it can consume (much) more. Getting back: the Steam Deck is 15W TDP, but it's optimized to run at more like 3-9W. In games like Dead Cells 2 (which is a 2D indie game, pretty lightweight, but still far from idle-like power requirements) the OLED version of the Steam Deck can run for over 8 hours. With a 50 Wh battery, that means that, on average, the chip + screen + wifi + audio, I think without bluetooth, consumes about 6W. Which makes me think that the chip itself is consuming like 3-4 W. Still, given that the Steam Deck has only 4c/8t, that's not exactly high end. Current phone flagships are certainly both more performant and more efficient. Not sure how it competes on GPU performance. A typical phone battery nowadays has 5000 mAh, which, given that Li-Ion batteries usually hover at 4V (between 3.7 and 4.2), makes for a battery capacity of approx. 20 Wh. Less than half of what the Steam Deck has. So the Steam Deck's APU (which I still consider the closest thing in x86 space to what a phone or a very efficient tablet/ultrabook would need) is not as efficient as current smartphone chips. Though it is also built on 6nm, while the most recent chips are on 3nm, almost 2 generations newer, which is a pretty big difference. So, overall, I think that on the hardware side, while it would most likely be a setback in terms of performance or (maybe even and) efficiency, if they wanted, both Intel and AMD could come up with a chip for a smartphone that still has decent efficiency and performance, just not flagship level. Now, on the software side, the advantage with Linux ... that is, GNU/Linux phones (Android, technically, is also Linux) is also the control that you get. And, I guess, a bit of compatibility with the software that's made for the desktop. I wouldn't say it's in big demand, unfortunately. Most likely just techies like us, and maybe privacy nerds. Still, it is nice to see how far the Pinephone got, even though it seems like what they have is a bit too low end. The chip itself can be very efficient, they don't have a lot of cores and haven't overclocked them or anything; it seems that the firmware and drivers they use, or something, are still not up to the task. Or maybe everything is rendered with the CPU instead of the GPU, dunno. But the chip itself is a pretty common ARM chip with 4 A53 cores, and those can totally be efficient. Oh, and good point about Waydroid. Haven't checked it, but from what I remember, you can already run a lot of apps through it. So you can get the best of both worlds with it.
    2
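The rough arithmetic from the comment above, spelled out (all inputs are the comment's estimates, not measurements):

    # Steam Deck OLED: whole-device average draw in a lightweight 2D game
    battery_wh = 50
    runtime_h  = 8
    print(f"average device draw: {battery_wh / runtime_h:.1f} W")   # ~6 W

    # Typical flagship phone battery, converted from mAh to Wh
    phone_mah    = 5000
    cell_voltage = 4.0   # Li-ion hovers between ~3.7 and 4.2 V
    print(f"phone battery: {phone_mah / 1000 * cell_voltage:.0f} Wh")  # ~20 Wh, under half the Deck's 50 Wh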
  762. 2
  763. 2
  764. 2
  765. 2
  766. 2
  767. 2
  768. 2
  769. 2
  770. 2
  771. 2
  772. 2
  773. #HeyDT, why do you have to be so cringe at times ? 1) 1:16 "According to statcounter a full 71.29% of personal computers around the world are running Windows 10". With 15.44% for Windows 11, 9.6% for Windows 7 and 2.5% for Windows 8, that's a total of 98.83%. Add 0.4% from Windows XP and Windows has over 99% of personal computers around the world ??? Then at 2:36 you basically say the correct numbers: Windows in total has 76%, and of that 76%, 71.3% is Windows 10. Meaning that a total of 0.76 * 0.7129 = 54.18% of the personal computers around the world are running Windows 10. 2) Comparing Windows 11 market share to diseases is just very petty and cringe. You're releasing your hate on it, in the same way that people get toxic on social media. It's ok to do this at a talk with some friends over a beer, but not here, in the wider public. Like I said, it's petty. Making fun of its growth in a really weird way. Someone coming from Windows and seeing this will certainly not be convinced to switch; they will think all Linux guys are lunatics living in their bubble, making fun of "only 15%" while they have 2%. Other than that, like others said, Windows has had this cycle of "bad release followed by good release" several times now, so most likely Windows 11 numbers will stay low until in 2024 we get Windows 12, which will be a refinement of Windows 11, aka it will work very well out of the box. Still with ads and telemetry through the roof, but it won't be so much in your way. And it will be better than Windows 11 in absolutely all aspects and people will then upgrade.
    2
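A quick check of the arithmetic in the comment above:

    # The per-version numbers only make sense as shares *within* Windows:
    print(71.29 + 15.44 + 9.6 + 2.5 + 0.4)        # ~99.23, i.e. nearly all of Windows, not nearly all PCs

    # Windows 10 share of all personal computers:
    windows_overall = 0.76      # Windows share among desktop OSes
    win10_within    = 0.7129    # Windows 10 share within Windows
    print(f"{windows_overall * win10_within:.2%}")  # 54.18%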
  774. 2
  775. 2
  776. 2
  777. 2
  778. 2
  779. 2
  780. 2
  781. 2
  782. 2
  783. 2
  784. 2
  785. 2
  786. 2
  787. 2
  788. 2
  789. 2
  790. 2
  791. 2
  792. 2
  793. 2
  794. 2
  795. I don't understand that part at the start with the cost of delete vs write. In CPU registers, RAM and disk (be it solid state or HDD), a delete IS a write. Setting aside the special case of SSDs, which can have multiple bits per cell (so a write can mean rewriting all the bits of a cell), the delete vs write distinction doesn't make sense to me. At least, not in the current compute landscape. And the "store the input so later you can simply switch, instead of delete" sounds like basic caching to me. For consumer hardware, we are both getting more efficient and not. If you look at smartphones and laptops, it is inarguable that we're getting much more efficient. And in general staying in the same power envelope, though the high-end desktop-replacement laptops are indeed more power hungry than what we had 10 years ago. On the desktop side... if we ignore a couple of generations from Intel (13th and 14th), then the CPUs, I'd say, are getting more efficient and also staying at a reasonable power draw, so the same power envelope. Same for RAM and disks. It's the GPUs that are also more efficient, but have expanded, by quite a lot, the power envelope at the mid and high end. But I would say that the raw hardware power is impressive. On the datacenter side, 30,000 tons of coal seems quite little. I expected something like 1 billion tons of coal. Funnily enough, a lot of electricity nowadays is consumed by AI. Feels like creating the problem in order to create the solution, to me. Waaay too much desperation in getting the AI upper hand; it's quite a clown show to me. I am expecting more and more regulations on AI, as the data used is still highway robbery in most cases, and the energy used is just ludicrous, at least for the current and short-term future results. In the context of having to use less energy, so we can stop putting carbon into the air. Lastly, on the prebuilt power limits or something similar: I don't know of such a law, neither in the EU nor in Romania where I live. However I do know that there is one for TVs (and other household electronic appliances, if I'm not mistaken) which actually limits the high-end TVs quite a lot. Which, frankly, is quite stupid to me. If I get an 85" TV, you expect it to consume the same as a 40" one ? Not to mention that maybe I'm fully powered by my own solar panels. Who are you to decide that I can't use 200 more Watts for my TV ? In this theoretical setup, it would generate literally 0 extra carbon. And what's worse, because of this st00pid law, now people are incentivised to buy from abroad, which is worse for the energy used (using energy to ship from the other side of the world instead of locally) and worse for the economy (EU manufacturers cannot compete as well as those in other countries). Anyway, rant off.
    2
  796. 2
  797. 2
  798. 2
  799. 2
  800. 2
  801. 2
  802. 2
  803. 2
  804. 2
  805. 2
  806. 2
  807. 2
  808. 2
  809. 2
  810. 2
  811. 2
  812. 2
  813. 2
  814. 2
  815. 2
  816. 2
  817. 2
  818. 2
  819. 2
  820.  @NoBoilerplate  But Java had trillions of dollars of optimisations too! That's why I was shocked. If I'm not mistaken, Java has about the most advanced (and complex) garbage collection algorithms. And I know the JDK is quite a beast. And I say that as someone who doesn't like Java (and I like JavaScript, though I kind of hate about all the frameworks for it). Of course, there's no ceiling on optimisations. Unless you don't have enough data. And JavaScript (unlike TypeScript) does lack static typing everywhere, for example. That by itself adds some runtime overhead. I guess it's a matter of things like Python having those ML libraries that are basically implemented in C, and calling them from Python gives you basically the same speed (for those specific functions). Also like the Phalcon framework in PHP, which is basically a collection of C functions exposed in PHP; if you only use those, you're close to C speed. But in both cases, you're restricted to a set of functions. And the language itself, being dynamic, has an overhead of its own. I think that the benchmarks in which JS runs well are just that - cases where the engine already has an optimized solution implemented. Though if these cases are numerous enough, especially for the domain where JS is used, that's good and I think they can be used as representative of the language. But in general I'd go by the worst case. I guess I'll just have to up my game on current language speeds. I did a quick search now and I can't say I'm happy with the first page of Google's results. To be fair, I did find some instances in there where JS is faster or on the same level (so to speak) as Java. But I'm still not convinced. To be frank, I really don't see something like Elasticsearch being implemented and running as well in JavaScript. While on this topic, do you have any good benchmark sites ?
    2
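A small, hedged illustration of the "C-implemented library called from a dynamic language" point above, using NumPy as the stand-in (assuming it is installed); the exact gap varies a lot by machine:

    import time
    import numpy as np

    data = list(range(1_000_000))
    arr = np.array(data, dtype=np.int64)

    t0 = time.perf_counter()
    total_py = sum(x * x for x in data)   # pure-Python loop, interpreted per element
    t1 = time.perf_counter()
    total_np = int(np.dot(arr, arr))      # one call into compiled C code
    t2 = time.perf_counter()

    assert total_py == total_np
    print(f"pure Python: {t1 - t0:.3f}s, NumPy: {t2 - t1:.3f}s")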
  821. 2
  822. 2
  823. 2
  824. 2
  825. 2
  826. 2
  827. 2
  828. 2
  829. 2
  830. 2
  831. 2
  832. 2
  833. 2
  834. 2
  835. 2
  836. 2
  837. 2
  838. 2
  839. 2
  840. 2
  841. 1
  842. 1
  843. 1
  844. 1
  845. 1
  846. 1
  847. 1
  848. 1
  849. 1
  850. 1
  851. 1
  852. 1
  853. 1
  854. 1
  855. 1
  856. 1
  857. 1
  858. 1
  859. 1
  860. 1
  861. 1
  862. 1
  863. 1
  864. 1
  865. 1
  866. 1
  867. 1
  868. 1
  869. 1
  870. 1
  871. 1
  872. 1
  873. 1
  874. 1
  875. 1
  876. 1
  877. 1
  878. 1
  879. 1
  880. 1
  881. 1
  882. 1
  883. 1
  884. 1
  885. 1
  886. 1
  887. 1
  888. 1
  889. 1
  890. 1
  891. 1
  892. 1
  893. 1
  894. 1
  895. 1
  896. 1
  897. Wow, so many people over here that barely use food delivery. I feel quite special now, I'm kind of the opposite, I actually use food delivery a lot. First thing, I don't think it's THAT bad here in Romania. From the apps mentioned in the video, only UberEats is vaguely popular around here. Instead we have Bolt Food (the Estonian Uber that's 2 times better, both at ride sharing and food delivery), Glovo and Tazz. I've basically only used Bolt Food for several years now. Unlike the other apps, I can put the exact pin on the map so the delivery guys (some of whom really aren't that smart) don't end up on the other side of the boulevard. Also, the prices have indeed gone up substantially. As someone who used food delivery a lot before the pandemic too, the prices are on average a bit more than double (and the inflation here hasn't been THAT bad). One thing mentioned in the video: the difference between pizza chains, which have somewhat working delivery, and the food delivery apps. It painted the apps as the new guy who doesn't have experience and doesn't know how to do things effectively. No idea how it is in the USA, but I can confidently say that here, with Bolt Food especially, that is NOT the case. Just from the accuracy of the estimated time of arrival you realize that the infrastructure is quite advanced and mature. Not perfect, but clearly good. Also, we already have the range limit that the video says somehow only pizza chains have. Here in Bucharest, which is a very large town for Romania, but probably just as big as one small district of New York, the range is about 1/4 of the city. 5-10 km, something like that, I don't know exactly. It is very expensive for me, and I was always aware of that, but I still prefer it over spending time shopping and cooking and cleaning and the associated extras. Plus, there are days when I literally spend about 1 minute ordering, since I know what I want, about 1 minute answering the delivery guy and getting the bag, and I can eat at my laptop while paying attention to a meeting; effectively I spend no time at all the whole day on "cooking" and eating, I love it. To achieve the same time efficiency and comfort I'd have to either a) consume mostly instant foods, which are significantly cheaper, but much less healthy and with much less variety, or b) have a personal chef, which would be ideal, but I don't have THAT much money. From several hundred orders in the last 5 years I've had literally only one order which was not delivered because no courier was found, on a Sunday at 9-10 PM (don't ask, it was not at my place nor a normal day). And I had several bad deliveries where it was mostly the restaurant's fault for not wrapping/casing the food well. And several deliveries with missing/wrong items, which were solved (kinda) through the app's help system. Overall, percentage wise, not that many, something like 3-5% of the orders. Though there is a thing I have to mention, which I think helps my good record: I rarely order at lunch time, when there's the most chaos. I usually eat significantly later. Of course, for the end user it will always be expensive. Before the apps, when you had to call, and there were fewer places you could have delivery from, it was worth it if multiple people ordered, like 4+. The delivery tax was a bit steeper, but when you split that between multiple people, it gets quite cheap. And some places had free delivery after a certain threshold. Now with the apps, there's still that free delivery after a threshold (though not always) and the tax seems to take the distance into account, but the efficiency of multiple people (big order) vs a single person (small order) has been much diminished. All in all, considering the above points, I'd say that it can get mature enough to be sustainable. And I think that I am experiencing that.
    1
  898. There's something that doesn't sit well with me: - the law assumes that the cores have the same performance characteristics. The Macs have different cores, so the estimate cannot be correct. It also isn't mentioned whether the single core performance was measured on a performance core (which I assume) or an efficiency core - why is the 12 core estimation of improvement 418%, but later a 10 core estimation of improvement is also 418% ? - why is process creation 1900% better ? Theoretically it shouldn't be possible to surpass 1100% (11 extra cores). Is it just because there's less context switching ? Lastly, I just have to talk about a thing that I see many do not mention. Amdahl's Law applies to a single program, more specifically a single algorithm. If you actually have multiple programs, multiple things that have to be computed, those should be basically 99% parallelizable between themselves. Say, playing a game and recording (encoding) a video of it, while also compiling something in the background. These are 3 main tasks, and going from one CPU core doing them all to, say, 3 cores doing them (one for each program), I expect close to ideal scaling (assuming there are no bottlenecks at, say, HDD/SSD level). None of the programs needs to know what the others are doing, so it has 100% parallelization in theory (of course, in practice it can vary, a bit more if more cores alleviate bottlenecks and less with the overhead of scheduling and with the limitations of other hardware like memory and disk bandwidth). In the current day and age, we're not running like in DOS times, a single program at a time. Usually there is a single main program, like a browser or a game, but there are plenty of occasions where you run multiple things, like I said above. Having a browser with many tabs can alone benefit from more cores, even if each tab has only serial tasks in it (aka 0% parallelism achievable). If you also do some coding alongside, there go more cores. And, of course, today in something like MS Windows, you can assume a core is permanently dedicated to the background tasks of Windows - indexing disk data, checking for updates, collecting and sending telemetry, all kinds of checks and syncs, like NTP/Windows Time, scheduled tasks and so on. In practice, 8 cores for casual workflows (browsing, gaming and office) is basically plenty; there is indeed little gain from more cores. In that sense I agree with the final thoughts. But I fully disagree with the final thoughts on the server comparison. Virtualisation is not for performance, quite the opposite. If you need top performance, especially the lowest latency, you have to go bare metal. Virtualization has other great benefits. Sandboxing: you don't have conflicts with anything else running on that server, so you can have 10 versions of something with no problem, it's easy to control how many resources it can use, and many more. Also, it makes for (almost) identical development environments out of the box, reducing devops time and especially stupid bugs because some dev runs PHP on Windows and it behaves differently than the same PHP version on Ubuntu. Also, thinking in this paradigm of small virtual computers makes your application easy to scale (just add more containers). But an application running in a virtual machine or in a container will NEVER be faster than the same app configured the same way on bare metal. The nice thing is that nowadays, in most cases, virtualizing has a negligible impact on performance, while the other benefits are massive. That's why everybody is using it now.
    1
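For reference, Amdahl's Law written out, with made-up parallel fractions (not the video's data) just to show the kind of numbers involved:

    def amdahl_speedup(p, n):
        # Speedup of a single program with parallelizable fraction p on n cores.
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        for n in (10, 12):
            s = amdahl_speedup(p, n)
            print(f"p={p:.2f}, {n} cores: {s:.2f}x ({100 * (s - 1):.0f}% improvement)")

    # The lower the parallel fraction, the less the 10-core vs 12-core difference matters:
    # at p=0.50 the two results land within a couple of percent of each other.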
  899. 1
  900. 1
  901. 1
  902. 1
  903. I just want to add that the attitude of "Valve doesn't care about snaps, or your package manager. They don't want to support it. Not their job, yadda yadda yadda" is ... not that good. Yeah, they might not like it, but if it isn't fundamentally flawed, as in making support impossible or incredibly difficult, then Valve should consider supporting and working with distributions and package managers, so it can be nicely integrated. That is, the way I see it, the desired outcome for an app that wants to have large reach, to be used by a large mass of people. You can say it's the same as making FOSS for a proprietary OS like Windows or iOS. You can hate the inferior OS (in the case of Windows) and the hurdles you have to go through to bring compatibility, but if you do want high reach, then it is something you have to do. While on this, thank you GIMP and LibreOffice. So, getting back to the topic, I think everybody would gain, including fewer headaches and issues on Valve's side, if Valve worked with the distros and package managers to make Steam work directly from the package manager, so you don't need to go and download it from Steam's website. That's what cavemen using Windows Neanderthal Technology (NT for short) do. Ok, snaps might still be a headache, though I guess it would be more from Canonical than from the snap system. If that were the only system not supported, it would still be better than now. And I suspect that a lot of this work would be front-heavy, that is, you work hard to integrate it once, then it's easy to maintain afterwards.
    1
  904. 1
  905. 1
  906. 1
  907. 1
  908. 1
  909. 1
  910. 1
  911. 1
  912. 1
  913. 1
  914. 1
  915. 1
  916. 1
  917. 1
  918. 1
  919. 1
  920. 1
  921. 1
  922. 1
  923. 1
  924. 1
  925. 1
  926. 1
  927. 1
  928. 1
  929. 1
  930. 1
  931. 1
  932. 1
  933. 1
  934. 1
  935. 1
  936. 1
  937. 1
  938. 1
  939. 1
  940. 1
  941.  @terrydaktyllus1320  Everybody reading what you write, and you (because you wrote it), would make much more productive use of their time if you'd stop spewing bullshit about things you have only very surface-level knowledge of. In your fantasy cuckoo land there are these "good programmers" who somehow never make any mistakes and whose software doesn't ever have any bugs. In the real world, everybody makes mistakes. I invite you to name one, just one "good programmer" who doesn't ever write software with bugs. If it's you who's that person, let me know what non-trivial software you wrote that has no bugs. And if you're going to bring the "I didn't say that good programmers don't make mistakes or don't make bugs" argument, then I'm sorry to inform you that Rust, or more evolved languages in general, were created exactly for that. Programmers, good AND bad, especially on a deadline, should get all the help they can. That's why IDEs exist. That's why ALL compilers check for errors. A language that does more checks, like Rust, but still gives you the freedom to do everything you want, like C, is very helpful. Unlike your stupid elitist posts that "languages don't matter". The bug presented in this video is a very classic example of something that would not happen in Rust. With people like you, we wouldn't even have C, we'd still be on assembler by now. Whenever there's something about programming languages, don't say anything, just get out of the room and don't come back until the topic changes. Hopefully to one that you actually know something about.
    1
  942. 1
  943. 1
  944. 1
  945. 1
  946. 1
  947. 1
  948. 1
  949. 1
  950. 1
  951. 1
  952. 1
  953. 1
  954. While I agree in general with the video's ideas, I don't fully agree with the argumentation, and I want to challenge or maybe add to some things presented here. I mean, on the "getting stuck" vs "learning new stuff, like distros, DEs etc" - at which point is learning another thing, which is kind of the same as what you already know and barely (if at all) brings anything new to the table, actually worthwhile ? What I mean is, having the growth mindset is important, but only on the personal "global" level. Meaning, a person should always have a growth and learning mindset, but the focus and topics can (and in most cases should) change over time. Which means that there are many legitimate cases and reasons when you simply want something you learn or learnt to stay the same, so you can still be proficient in it, and can still rely on it, so you can then focus on SOMETHING ELSE. While using what you learnt just as a tool, with 0 more investment in learning about it or very similar alternatives. Distros and DEs and so on are a perfect example of this, where, after you've tried several and know what each is about and what you like most, yeah, not wanting to know about any other distro or DE (unless something truly revolutionary appears) is a perfectly legitimate mindset, when you want to now focus on learning something totally different. "Hey, did you see that new KDE hammer ? It has a nice glossy look. Do you want to learn how to use it and maintain it so the gloss doesn't wear off ?" "No, I like my Gnome hammer just fine, I don't care about that stupid glossy look" "Duuude, wth, why are you so stuck in the past ? You need to have a growth mindset, otherwise your mind will rot" It's like being in the 8th grade, having learnt to solve quadratic equations, and then you keep searching for and finding all the possible quadratic equations and challenge yourself to solve them. Yeah, it's good to do it for a while, so you know them by heart, but if you keep focusing just on them, you won't learn what integrals are, or group theory and so on. I'd say that checking out new distros and DEs (again, after you've tried a good bunch of them and know what they're about) is analogous to this. After a while, simply trying new stuff that is fundamentally the same as what you did before is not so different to jerking off. You get a sense of accomplishment, but you haven't really advanced. There's also something else: in my case I do have changes-and-updates anxiety of some sort. I know, maybe I'm too much in the mind-rotten-cannot-accept-change-does-not-challenge-himself territory. The thing is, I like perfecting things. I like optimising things. Using a program and seeing it do things faster, and changing it so I can have one less click in a workflow that I use rather often. So, for this, I do like the peace of mind of checking the "market", selecting what I think is the best (be it a distro, desktop environment or window manager, shell interpreter, text editor, video editor, audio editor, video player, audio player, browser etc), sticking to it, and starting to optimise it. Usually through configs, then plugins, then maybe even patches or actual code changes done by me. And if everybody starts using another program because a new one looks nicer and is new and has 2 more features, then I get very... sad and ... I don't know, anxious or nervous ? I mean, if you look at it, the old program can be rather simply upgraded to have those 2 more features and its looks can be configured to be almost as good... but people don't care. Then some big company or group decides that what they do will only work on this new program, because they like it, so you can't use the old program, at least not without massive pain. So at that point you're like... "great, basically I now wasted a lot of hours and I have to switch to another program" which, bar that new feature, is strictly inferior because it's slower and you need more clicks to do your thing. Of course you can learn this new program too and change it to your heart's delight. When you get to the same level of optimisation as you had on the older one, BAM, another new program to do the same thing, only (ever so slightly) different. And history repeats. There's the UNIX philosophy that each program should do one thing only and do it well. It's exactly for this reason: so you only learn it once. When you don't have to relearn the same things over and over again, because they got changed, you can then start building bigger things. No, I don't regularly check out and learn new letters, digits, screwdrivers, hammers, pencils and so on. So, yes, I would like to stay on Gentoo for 20 years, preferably 100 if I live that long and something better doesn't appear. Same with other apps. Not because I gave up on the growth mindset, but because I want to learn about other stuff now, like driver programming, electrical engineering, plant farming, the French language and so on. No, I couldn't care less right now about the new Arch spinoff. And I think that's totally fine. Lastly, @DT, I have a genuine challenge. Since you said recently that you actually haven't used Windows at all in more than a decade, I have this challenge for you: actually try Windows and MacOS and note at least 5 good things about each of them (and not stupid things like "it's good because it's popular" or "the rounded corners are really nice". Actual good stuff). This isn't a "ha, got you with your own words" kind of thing. After this long a time, you might not be aware of what the others did and might not know exactly where Linux stands. Trying Windows and MacOS should give you good insights into some things that Linux could improve upon. And it might make you like Linux more. Errr, GNU/Linux, my apologies.
    1
  955. 1
  956. 1
  957. 1
  958. 1
  959. 1
  960. 1
  961. 1
  962. 1
  963. 1
  964. 1
  965. OhMyGaaawd, I can't believe you missed the point of that C vs Go comparison at the end. It is a VERY GOOD comparison. Why ? The very thing you mentioned, that you wouldn't use C and Go in the same places. Initially there was only C (ok, a few others too, doesn't matter). But because it is only good at performance and low level control, and sucks at everything else (well, arguably), Go was invented as an alternative. Now Go occupies its own niche because C couldn't properly cover that part. Compared to C, Go has decent everything, it's just not perfect or the best at any of them. And since in some contexts the speed of running is not the most important thing, but ease of development is, people would prefer Go's "decent" over C's "horrible" there. In regards to his SQL complaints... I'm still not sure what he wants/needs. Apparently statically verifying that a query works is one of them. Ok, that can be done with SQL. Maybe he wants something like having an ORM-like library directly in a language, bypassing SQL ? Like, you only call functions and everything is run directly, or sent from one server to another directly as binary data ? I guess that would be something. I remember that at the beginning there was a screenshot in which I think he wanted to highlight that the parsing of the query took significant time. Which, if you use something more direct, could be bypassed. Can't say I dislike the idea of having the option to send the structures and data directly, exactly as the DB works with/uses them, so no more parsing is required.
    1
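A hedged sketch of the "query as structures instead of SQL text" idea from the comment above; the Query/Condition shapes and field names are hypothetical, just to show what it could look like:

    # Python 3.9+ for the built-in generic annotations
    from dataclasses import dataclass, field

    @dataclass
    class Condition:
        column: str
        op: str
        value: object

    @dataclass
    class Query:
        table: str
        columns: list[str]
        where: list[Condition] = field(default_factory=list)

    # The query exists as plain data that the program (and a static analyzer)
    # can inspect at build time; no SQL string has to be parsed at runtime.
    q = Query(
        table="users",
        columns=["id", "email"],
        where=[Condition("created_at", ">", "2024-01-01")],
    )

    # A driver could serialize this structure straight into the database's wire
    # format instead of rendering SQL text and having the server re-parse it.
    print(q)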
  966. 1
  967. On the complaining-about-the-company part - I strongly disagree. First, just because you chose a [whatever] owned by a company (when you had the choice of a [whatever] made by a community), that DOES NOT mean you must agree with everything they did or do, ESPECIALLY future decisions, and ESPECIALLY those that are different in nature from the past ones. The company released that [whatever] to the public for a reason, it wants users to use it, so it's normal to provide feedback for it. Which, when it works below expectations, comes in the form of critiques and complaints. In the end, the company does what it wants, then if they do something stupid, people complain, and if the company doesn't fix it, then it shouldn't be surprised when people stop using their product(s) and/or stop caring about the company. It's normal stuff. People complaining online is normal behaviour and a way for Canonical to find out that they're not doing the best they can. And if they're slow, they'll also see that in the number of new & existing installs. Of course, some people take the complaints to toxic levels. That is a problem. But complaining in general is totally fine, if it's backed by arguments (and if the user tried to do something about it, made some minimum effort of researching and troubleshooting before starting to complain). It's basically saying "Hey, it's your product, but if you want to keep me as a user/customer, then you should stop doing X". How is that wrong ? I have to say, the take that people shouldn't complain about a company because it's within its rights to take decisions and make changes... well, complaining is not the same as killing/cancelling/dismantling/disowning/forcing the company. That would, OF COURSE, not be ok (and would be illegal). But complaining is 100% within the users' rights, just as the company has the right to do whatever it wants with its product. Also, the popularity standpoint, I find it to be a weak argument. Popularity means that something was good YESTERDAY. It's not instant, so it cannot reflect what [whatever] is today. If something has 80% popularity today and the owner makes a stupid decision and tomorrow it drops to 70%, well guess what, it's still the most popular, but it's clearly not the best anymore. Just that some people realize it on the spot, while some other people bring up the popularity argument. By that logic Windows is excellent, because it's crushingly popular on desktops and laptops. But we all know that it's not that stellar, even though it had good moments, and is still a good operating system, despite its annoyances. And if history and popularity didn't matter, and everybody started from 0% today ? I doubt that Windows would pass 40% market share.
    1
  968. 1
  969. 1
  970. 1
  971. 1
  972. 1
  973. 1
  974. 1
  975. 1
  976. 1
  977. 1
  978. 1
  979. 1
  980. 1
  981. 1
  982. 1
  983. 1
  984. 1
  985. 1
  986. 1
  987. 1
  988. 1
  989. 1
  990. 1
  991. 1
  992. 1
  993. 1
  994. 1
  995. 1
  996. 1
  997. 1
  998. 1
  999. 1
  1000. 1
  1001. 1
  1002. 1
  1003. 1
  1004. 1
  1005. 1
  1006. 1
  1007. 1
  1008. 1
  1009. 1
  1010. 1
  1011. 1
  1012. 1
  1013. 1
  1014. 1
  1015. 1
  1016. 1
  1017. 1
  1018. 1
  1019. 1
  1020. 1
  1021. 1
  1022. 1
  1023. 1
  1024. 1
  1025. 1
  1026. 1
  1027. 1
  1028. 1
  1029. 1
  1030. 1
  1031. 1
  1032. 1
  1033. 1
  1034. 1
  1035. 1
  1036. 1
  1037. 1
  1038. 1
  1039. Holy crap, I really hate to do this, but the calculations at 9:00 are really off. Let me put it another way. Let's actually compute the energy needed for a 36 t vehicle going at 60 kmph for 901 km (560 miles for the cavemen out there). Using the computing method shown in Engineering Explained's video named "Why Teslas Are Bad At Towing (Today)" (search it here on YouTube) from 4th December 2019, and using the following assumptions: the drag coefficient is 0.6 (what I found is typical for a truck, after a quick search), the same 3.7 sqm frontal area as in EE's video (40 sqft for the neanderthals out there that somehow know how to read), which is almost 2 by 2 meters, and a rolling resistance coefficient of the same 0.015, which actually seems to be pretty conservative, I've found lower estimates after a quick search on the web. Also, these 901 km are on completely flat land. Formulas used: in short, the total force needed is Fa (force to overcome aero drag) + Fr (force to overcome rolling resistance). Fa = 1/2 * p (density of air, in kg/m^3) * v^2 (speed of the vehicle, in m/s, squared) * Cd (coefficient of drag, unitless) * A (frontal area, in m^2). Fr = G (vehicle's weight, which is mass * g, in N) * Crr (coefficient of rolling resistance, depending on tires and the surface, like asphalt, unitless). Then the force is multiplied by the distance to find the energy in Joules, and then I converted it to kWh, since that seems to be used more in these kinds of discussions. So, after doing the calculations, the energy needed is 94.94 kWh for the aero drag and 1325 kWh for the rolling resistance. For a total of 1420 kWh of energy needed to move a 36 000 kg vehicle on flat asphalt for 901 km. And taking the commonly used estimate of 0.25 kWh per kg energy density for Li-ion batteries, we end up with a battery weighing 5683 kg, so 5.7 tonnes. I wouldn't say I used optimistic numbers. And even for this truly unneeded long range of 900 km, the battery is much lighter than the "8-16 tonnes" nonsense. If the truck went at 100 kmph, it would need 1589 kWh of energy, or 6358 kg of batteries. If the truck went at 60 kmph but just for 400 km, it would need a mere 630 kWh of energy, or 2523 kg of batteries. As if that wasn't enough, there's one more thing that makes this even better, and it can be seen in the Honda Accord vs Tesla Model 3 comparison: the car's weight without the batteries. The Tesla, minus its battery, is about 300 kg lighter than the Honda. That's because the electric motor is smaller and there's other stuff that's simply not needed. I expect this to be the same in a truck, with something like 1000 kg shed from a normal diesel truck when making it electric. So this means the extra weight will be about 3-5 tonnes. And probably the range a bit smaller. But that still leaves about 15 tonnes of payload, or 25% less, a loss which might be smaller than the other economies of going electric. In other words, I can totally see this working. Please Thunderf00t, stop these cringe calculations and, most importantly, stop giving musktards fuel for refuting you or your points and videos. You represent the science community and this 8-16 tonnes of batteries bullshit is just... sad.
    1
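The calculation above, written out so anyone can re-run it with their own assumptions (all inputs are the comment's estimates; expect small rounding differences versus the figures quoted):

    # Assumptions from the comment, not measured truck data.
    mass_kg   = 36_000
    speed_kmh = 60
    dist_km   = 901
    cd, area_m2, crr = 0.6, 3.7, 0.015   # drag coeff., frontal area, rolling resistance
    rho_air, g = 1.225, 9.81             # air density kg/m^3, gravity m/s^2
    pack_kwh_per_kg = 0.25               # rough Li-ion pack energy density

    v = speed_kmh / 3.6                                  # m/s
    f_aero    = 0.5 * rho_air * v**2 * cd * area_m2      # N
    f_rolling = mass_kg * g * crr                        # N

    dist_m = dist_km * 1000
    e_aero = f_aero * dist_m / 3.6e6     # J -> kWh
    e_roll = f_rolling * dist_m / 3.6e6
    total = e_aero + e_roll
    print(f"aero: {e_aero:.0f} kWh, rolling: {e_roll:.0f} kWh, total: {total:.0f} kWh")
    print(f"battery mass: ~{total / pack_kwh_per_kg:.0f} kg")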
  1040. 1
  1041. 1
  1042. 1
  1043. 1
  1044. 1
  1045. #AskGN Would disabling Hyper-Threading AND some cores on a 10900K, coupled with overclocking the cores, cache and memory, for specific games (like knowing that game X only uses 8 cores, so disable HT on all of them while also disabling 2 cores), significantly increase the performance and/or improve the power usage & temps of that specific CPU ? If I got your attention, how do you think a 10900K in a 4c/8t configuration would compare to a 6700K ? Or in a 4c/4t configuration vs a 6600K ? I'm asking out of curiosity about the overall advancements within the Skylake generation of CPUs. Just from the cores & threads configuration, the 10900K should perform much better, from the power budget alone. But I'm also curious whether the frequencies would be the same if the power targets are matched. Or vice-versa, whether the power usage is the same when they are locked at the same frequencies. In this last scenario I'm curious about the performance too, since I think the power usage will be lower for the 10900K, effectively meaning that it has all the same limits, so in this case the IPC can be compared (if there's any improvement). I have to say, I'm very excited about the overclocking improvements that Intel added to this 10th gen (...."gen"....). Too bad they're still a bit too expensive and behind in IPC and power draw. Intel really had a lot of bad luck by not being able to deliver 10nm manufacturing even now, 3 years later (or is it 5 years later than initially announced?). The laptop CPUs looked interesting, with a pretty good IPC increase, only held back by limited cores and frequencies. I was hoping they would have something by now, but I think that on desktop 10nm will be skipped entirely. Also #AskGN, what would Intel's 14nm, TSMC's 7nm and the other 7nm variants be called using the new metrics proposed by ... that consortium whose name I forgot and I'm too lazy to check ? I even forgot the metrics they proposed to use. I only remember that there were about 3 of them, the main ones, and that one of them was transistor density. Anyway, seeing them in your videos and hearing them would get all of us used to them sooner rather than later :) If you read this far, cheers!
    1
  1046. 1
  1047. 1
  1048. 1
  1049. 1
  1050. 1
  1051. 1
  1052. 1
  1053. 1
  1054. 1
  1055. 1
  1056. 1
  1057. 1
  1058. 1
  1059. 1
  1060. 1
  1061. 1
  1062. 1
  1063. 1
  1064. 1
  1065. 1
  1066. 1
  1067. 1
  1068. 1
  1069. 1
  1070. 1
  1071. 1
  1072. 1
  1073. 1
  1074. 1
  1075. 1
  1076. 1
  1077. 1
  1078. You can't call something the "greatest OS of all time" if it doesn't work on the majority of devices (phones in this case) in the current landscape. Fight me! For real though, I want it, but I don't want to buy a Pixel phone, neither new nor second-hand. And saying it's supported only on Pixels because they have extra hardware security features - bullshit! I mean, I don't dispute that they do, but I want to point out that it's a very stupid argument. Maybe in alpha and pre-alpha stages it would be ok. But for the full release, it's MUCH MUCH more important to get people onboard, even with just, like, 90% of the privacy and 50% of the security (still much better than what the original phone has), than to have only one line of phones, which limits the exposure to 10% of the potential users. That's bad prioritisation at this point. At least if they care more about raising global security and privacy. It's not an easy decision to make, and I can't fault them, I'm just saying what I think would be more important (maybe a bit harshly, but, meh). I do think a lot of people would rather actually be able to use Graphene on their current phone, without the full security suite, than have the GrapheneOS team develop 3 extra security features but keep the Pixel limitation (which means that people have to wait longer until they can try it, or switch to a Pixel phone - and maybe lose some features, like Louis does now). Other than that, I agree, it's awesome! Won't try it on a Pixel, sorry. Can't wait for it to become more popular and branch out to other models, hopefully Fairphone and Pinephone.
    1
  1079. 1
  1080. 1
  1081. 1
  1082. 1
  1083. 1
  1084. 1
  1085. 1
  1086. 1
  1087. That's so blatantly false and wrong (that these kinds of pushes are necessary) that I'm starting to doubt people's ability to reason. I'm not referring only to the OP comment but to many who defend it too. First, there is a GIGANTIC difference between a) forcing users to try something new while giving them the option to use the old thing, which is known to work, and b) forcing users to try something new and if they're missing something ... well, tough luck ? How is that not OBVIOUSLY irresponsible ? What are they supposed to do, stay on the old version ? Go to a different distro or a different spin (which might be a bigger change than just another distro that ships KDE) ? Well then, don't be surprised if they don't come back. Second, the reasoning that "if they don't do that, people would not try it or switch to it and it will not evolve" is also blatantly false. Wayland is now progressing very nicely and fast. Yet NOBODY forces Wayland as the only option. Proof that removing options and functionality from users is not needed (DUUH). Doing that will only alienate the users and feed the Wayland (or whatever is being pushed) haters. It's a lose-lose situation created by infatuated people who care more about being/feeling bleeding edge than about providing for and caring for their users. It adds, I would argue, nothing, while raising all kinds of concern and stress and conflict, like this very thread. Whereas if they waited until Wayland is truly ready and then did the switch, nobody would bat an eye. You can see they're searching for excuses rather than actually caring from that statement that they'd rather do the switch on a major version change. Because that does make sense, it's something to be expected. But they didn't think (not too much of a stretch) that removing it now causes 10 times the distress of removing it in, say, KDE Plasma 6.4.
    1
  1088. 1
  1089. 1
  1090. 1
  1091. 1
  1092. 1
  1093. 1
  1094. 1
  1095. 1
  1096. 1
  1097. 1
  1098. 1
  1099. 1
  1100. 1
  1101. 1
  1102. 1
  1103. 1
  1104. 1
  1105. 1
  1106. 1
  1107. 1
  1108. 1
  1109. 1
  1110. That is an excellent video topic. In case there isn't a video soon, or at all, here are some tips for switching to Linux: - some things continue to be a hard NO on Linux. To my knowledge the biggest are: Adobe Photoshop (and I think their whole creative suite) and games with anticheat (some work, but most don't). If you have to use/play these, just stick with Windows, at least for the foreseeable (1 year+) future. Things evolve pretty fast nowadays, so if Linux isn't viable today, trying again in one year you might find that a lot of issues are solved (not guaranteed, of course). - be prepared that Linux is different and still has some things lacking, for many different reasons: some hardware simply doesn't have drivers made for Linux, as the manufacturers didn't put in the effort (justifiably or not), and might work subpar or not at all. Or if something just got launched, the drivers for Linux might arrive several months later. In general stuff works, with the only important thing that might be a blocker being some Wi-Fi cards. The rest is more peripheral or niche products. Same with software, like I said above. Some things have to be done differently. In general, if you search online or on YT, you can find what works and what doesn't, and if something doesn't work for you, how to do it. But try to use recent sources when you can. - if the above aren't a big concern (or not at all), the first thing to try would be to make a bootable USB stick (easy to do) and reboot your computer into that Linux and see how it is. Check if all the hardware works, especially Wi-Fi if you're on a laptop. Graphics card drivers might be a nuisance depending on the distro, so if that doesn't work that well, don't give up yet. Then try to install the browser(s) you usually use and then the applications / programs / games you need and use often - that being said, here are some Linux distros worth mentioning. Before that: a Linux distro (short for distribution) is a fully working operating system (OS) that uses the Linux kernel (yes, technically Linux is just the OS kernel). A distro comes with many things on top: how it looks and how it can be customized, what package manager it uses (a package manager is basically the App Store / Google Play you have on a smartphone, you can use it directly to install all/most of the software you use and to keep it updated easily), and how it generally works and feels. So, here they are: - Linux Mint - stable (doesn't change often, focused on things working and not crashing, is slow/delayed to adopt new things), user, beginner and non-technical friendly. My recommendation to try first - Ubuntu - the OG beginner friendly distro, now fallen a bit out of favor in the community. Still very capable and compatible with a lot of hardware and software, as it was the de-facto most used Linux distro for many years, so most hardware and software devs made sure their stuff works on Ubuntu first (and probably only there). Still, it has some caveats, so if something doesn't work, do try other distros. Also pretty stable - Fedora - focused on being quite cutting edge, sleek and for work/workstation usage. Not a personal fan, but it's also a very popular distro that should work very well in most cases, so it's also worth a try - Pop!_OS and TuxedoOS - made by Linux computer manufacturers. Pretty ok/decent, I'd say similar to Linux Mint. Also worth a try - Manjaro, EndeavourOS - more cutting edge, somewhat less stable, but that doesn't automatically mean your system will crash daily or something. Pretty good as beginner distros - Nobara, CachyOS - gaming-optimized distros, also fairly beginner friendly - ZorinOS - focused on having a very Windows-like look and feel. Solid, but I didn't put it as the first option as I feel it's a bit behind now - Debian - very stable, which also means it is sometimes way behind the latest versions of software and drivers. Not my choice for a beginner, though it's used in many situations, good to know about - Arch - advanced distro, skip it for now (also here would be Gentoo, if you hear about it) - NixOS - can't say if it's beginner friendly or not - the rest I either forgot or don't recommend for a beginner. If you're technical you can try the more advanced ones (aka any distro, after all), preferably in a virtual machine first.
    1
  1111. 1
  1112. 1
  1113. 1
  1114. 1
  1115. 1
  1116. 1
  1117. 1
  1118. 1
  1119. 1
  1120. Not that much... Tesla has some top-of-the-line things in their cars. None of which were invented by Musk, though I give him credit: him being there, pushing everybody to slave-rate levels of labor and attracting funds certainly made Tesla what it is today. Without him, it would either be a small niche firm, or it would've gone bankrupt. SpaceX ... I don't know, the claims that rockets would get so reusable and so cheap to launch and so fast to refurbish that you could get a launch every day ... and what we have now... quite the difference. Getting to Mars was supposed to be ... 2024 ? Or even sooner ? Now, if I'm not mistaken, it's supposed to be 2029. I would bet it won't happen even by 2030. First, because it's stupid to send people there so early. They'll have nothing to do that a robot like Perseverance can't do already. But maybe we'll at least see SpaceX send a rocket with a robot to Mars by then, though I'm very skeptical they can do even that by 2030. But there are a lot more things that are pure con-man, snake-oil-salesman pitches. FSD ? It keeps coming next year since ... 2015 ? The name should also be illegal, it's so far from full self driving it's not even funny. Robotaxis ? Yeah, right. Cybertruck ? Nope. Tesla Semi ? Clearly not in 2019. LA hyperloop ? Teslas in tunnels (with drivers) - absolutely not profound. The tunnels themselves were also short, very small in diameter, dug at normal prices in not-uncommon time. Absolutely nothing revolutionary. From this LA thing alone, Musk should be laughed out of the room when he even starts mentioning revolutionizing transport. Tesla bot ? Get outta here. Splitting so there's not a giant wall of text. Let me continue. The Tesla solar roof acquisition ? Quite scummy. Starlink ? Interesting idea, but extremely polluting and impractical if you think seriously about it. It will never get to a full roll-out because of its inherent problems. In the end, the only companies that border on revolutionary technology are SpaceX and, briefly, Tesla. Yawn! Just how much, and how many times, must someone be proven wrong and/or lying for people to realize he's a conman ? Just because he got involved in some companies that actually have products (even at massive losses, like SpaceX) doesn't mean he's not a vaporware salesman. He has promised a lot of things that range from somewhat impractical (but who knows, maybe it works, a genius could certainly make it work) to downright bullshit. He straight up lied multiple times about when things would get done/released (and a good bunch of them will never be practical). How is that not a vaporware salesman ?
    1
  1121. 1
  1122. 1
  1123. 1
  1124. 1
  1125. 1
  1126. 1
  1127. 1
  1128. 1
  1129. 1
  1130. 1
  1131. 1
  1132. 1
  1133. 1
  1134. 1
  1135. 1
  1136. 1
  1137. 1
  1138. 1
  1139. 1
  1140. 1
  1141. 1
  1142. 1
  1143. 1
  1144. 1
  1145. For GN: if you have the time and curiosity... since you can disable HT per individual core, and, if I'm not mistaken, each individual core can also be disabled completely, could you test a handful of games to see if using these could noticeably increase performance (theoretically each game will differ from the others by A LOT) or decrease power & temps? For example, in GTA 5 the CPU could be set to only have 8 cores, all without HT. I realize that this game is a pretty bad example, since it has that stupid 187 FPS limit, but you get the idea. Disabling HT should lead to better performance because of less resource contention, while disabling a core completely should help with power draw & temps and maybe latency? Related to this, I'm personally very curious how the Skylake architecture/platform/manufacturing improved after all these years. Could you take a 10900K, set it to have the same a) cores & threads, b) power targets, c) clock frequencies and d) all combinations of the above, against a 6600K and a 6700K? So, for example, a 10900K configured to match the 4/4 and 4/8 cores/threads of the 6600K and 6700K, also with the same frequencies as those CPUs: how much less power does it consume? Does it perform the same? In another test, with the same cores/threads and the same power limits, what frequencies and performance does it achieve? Is that also the same? As a reminder, the 10900K should have some security fixes directly in hardware. I'm not certain whether, right now, a 6700K gets its security fixes in software or not at all.
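To make the "all combinations of the above" part concrete, here's a tiny sketch of the kind of test matrix I mean. The core/thread counts, power limits and clocks below are just example values taken from or assumed around this comment, not settings GN actually uses:

```c
#include <stdio.h>

/* Hypothetical test matrix for a 10900K pretending to be a 6600K (4C/4T)
   or a 6700K (4C/8T). Power limits and clocks are assumed example values. */
int main(void) {
    const int cores[]     = {4, 4};          /* 6600K-like, 6700K-like */
    const int threads[]   = {4, 8};          /* HT off vs HT on */
    const int power_w[]   = {65, 91, 125};   /* assumed power targets to sweep */
    const int clock_mhz[] = {3500, 4000};    /* assumed fixed clocks to sweep */

    int run = 0;
    for (int c = 0; c < 2; c++)
        for (int p = 0; p < 3; p++)
            for (int f = 0; f < 2; f++)
                printf("run %2d: 10900K limited to %d cores / %d threads, %d W, %d MHz\n",
                       ++run, cores[c], threads[c], power_w[p], clock_mhz[f]);
    return 0;
}
```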
    1
  1146. 1
  1147. 1
  1148. 1
  1149. 1
  1150. 1
  1151. 1
  1152. 1
  1153. 1
  1154. 1
  1155. 1
  1156. 1
  1157. 1
  1158. 1
  1159. 1
  1160. 1
  1161. 1
  1162. 1
  1163. 1
  1164. I guess I do have a bit of an uptime fetish. Several things to mention: - in Windows 10, with quite some hassle and at least the Pro version, you can actually control the updates. As I write this I have 99 days of uptime. I didn't actually want to reach this high, since effectively I'm behind on updates; I cannot have (most of) the important ones without a restart. But Windows, as I configured it, doesn't pester me to do the updates or to restart. And usually every 1 to 3 months I do the updates and restart. - having a high uptime is a sign of a properly configured system. It shows that you don't have weird memory leaks or simply bad software. Same with having a system that you don't have to reinstall (let alone reformat the drive for). I've never understood the people who came to the conclusion that you have to format + reinstall Windows once per year. I've always reinstalled because the old one was too old (like from Windows 98 to XP, from XP to 7, from W7 to W10), not because it wasn't running ok. Windows is bad, but not THAT bad. Anyway, I'm going off topic. - for me at least, the point of not restarting isn't the time the reboot takes. That is below a minute. It's also reopening ALL the apps, in the same exact state that I left them. Still something that should be below 5 minutes, but I simply like not having to. Some apps don't have the "continue exactly where you left off" feature when restarting them. For this reason, I usually hibernate the laptop instead of shutting it down most of the times I'm carrying it around (which is less than once a week since the pandemic started). I do acknowledge that it's mostly convenience on my part, not actual need. - having the computer on 24/7, if on low power (and low heat), will not damage the components that much, if at all. One power cycle might actually do more damage than 50 hours of uptime (as I said, if the uptime is in a non-stressful manner, with no overclocking and no large amounts of heat). As to why you would do that, some have things open, like torrents, Folding@home, or mining. In my case, when I leave it running while I'm sleeping or away, and I'm only keeping it on for torrents, I put it in a custom power mode, which is low power but with everything still turned on except the display. This way it consumes quite little, despite still being "on".
    1
  1165. 1
  1166. 1
  1167. This was an interesting watch. Informative, but also flawed, on multiple points. Before going point by point, I'd like to say that I am a Linux user and, while I don't go out of my way to bash Electron apps, I'm not a fan of it either, and I prefer not to use them whenever I can. At this moment I think I only use one: Postman, which annoys the hell out of me when it hangs for like 30 seconds, and I will change it soon, once I have more time to check the alternatives and settle on one. Still, I think Postman is trash because it is made like that, not because of Electron. And it annoys me for other reasons too. But regardless, it very much looks and feels like a much heavier application than it should be. I do have an 8 year old laptop, which was powerful in 2017, not so much now, but it's not trash either. One thing I must point out - I apparently use a lot of software that has an Electron app. But I don't use the app, I use what is sane to me - a simple tab in my browser. I was doing this before Electron was a thing, so I kind of happened to be like that before deciding not to use Electron if I can avoid it. Why not use Electron? Some time ago MS Teams had a bug which allowed remote code execution, because of bad sandboxing. That's when I knew I'd never install it; running it in the browser, I can be much more sure that it doesn't have access to my system. Especially after finding out that Discord for YEARS used an extremely outdated version of Electron. Imagine how secure that was! And the performance penalty is there, I don't want it. By only having those directly in the browser (seriously, why are people not using this?) I have almost 0 disk space used, far fewer security issues / much less stress, and much less RAM used. So I feel quite immune to the "either Electron or nothing" threat, though I do understand it. For me it didn't enable me in any way to use Linux. As an IDE I use JetBrains. On the performance side... that example with SwiftUI and the conclusion that Electron might be faster than native... I am soooo NOT buying that. I call 100% skill issue, or just something related to SwiftUI at that point in time. Even in the first update it says they swapped one component from SwiftUI Text to SwiftUI TextEditor. And apparently it does the rendering using the CPU? Disable hardware GPU accelerated rendering in a browser and watch the same pain there too. You can't really say that native in general might be slower from one example like that. Chrome, the only way to efficiently render text (without being John Carmack), really? Talk about blowing things way out of proportion. In the end, I don't like Electron for the same reason I don't like Flatpaks and snaps: having everything bundled. In the case of Flatpaks and snaps, I'd rather have a statically-linked executable. In the case of Electron I'd rather simply use my browser. Having one embedded will always be a problem to me. Maybe with Servo it will become less of an issue. Still, I'm not thrilled about having JS either, though for simple applications it is fine. I would still like something that can be more efficient. Not just in CPU usage, but also in RAM usage. A waste is still a waste. Even with the above, I don't hate Electron for existing. I do kind of hate it becoming the norm. It's like AGAIN we forget all the years of lessons we learned before, most of everything we've learnt and built, for something that's easier short-term.
Maybe PWAs will become a thing in this lifetime, and we can stop shipping an almost full browser with every app.
    1
  1168. 1
  1169. 1
  1170. 1
  1171. 1
  1172. 1
  1173. 1
  1174. 1
  1175. 1
  1176. 1
  1177. 1
  1178. 1
  1179. 1
  1180. 1
  1181. 1
  1182. 1
  1183. 1
  1184. 1
  1185. 1
  1186. 1
  1187. 1
  1188. 1
  1189. 1
  1190. 1
  1191. 1
  1192. 1
  1193. 1
  1194. 1
  1195. 1
  1196. 1
  1197. 1
  1198. 1
  1199. 1
  1200. 1
  1201. 1
  1202. 1
  1203. 1
  1204. 1
  1205. 1
  1206. 1
  1207. 1
  1208. 1
  1209. 1
  1210. 1
  1211. 1
  1212. 1
  1213. 1
  1214. 1
  1215. 1
  1216. 1
  1217. 1
  1218. 1
  1219. 1
  1220. 1
  1221. 1
  1222.  @nlight8769  Oh, wow, things got very complicated too fast. The problem is actually much simpler. It's the word "performance". For some people it's not immediately obvious that it's about "something (a task) done in an amount of time". Well, where time is involved. That's the thing, it doesn't explicitly say the metric used. And if the metric is not explicitly stated or obvious from the context, people make assumptions, and that's how we got into this topic :D But performance is very analogous to speed. In our case the compile time is similar to lap time. And speed is measured in km/h (some use miles per hour, but we've grown out of the bronze age). In our case it would be the not-so-intuitive compiles per hour. One could say that instructions run per second could also be a metric, but it has 2 problems: a) nobody knows how many instructions are run/needed for a particular compile, though I guess it can be found out, and b) not all instructions are equal, and they NEED to be equal in order to give predictable estimations. For speed, all the seconds are equal and all the meters are also equal. Here's another tip - degradation implies that it's worse and that the thing being degraded is *reduced*. If someone tells you something degraded by 80%, you KNOW that it's now at 20% of what it was (and not 180%). And something degraded by 100% would mean it's reduced by 100%, aka there's nothing left. Lastly, tying into the above - when the performance of anything has degraded "fully", so to speak, we say it's 0 performance. Not that it takes infinite time.
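To make the compiles-per-hour framing concrete, here's a tiny worked example; the compile times in it are made up purely for illustration:

```c
#include <stdio.h>

int main(void) {
    /* Made-up compile times, just to illustrate the metric */
    double old_compile_min = 10.0;   /* a compile used to take 10 minutes */
    double new_compile_min = 50.0;   /* now it takes 50 minutes */

    /* Performance as "compiles per hour", analogous to km/h */
    double old_perf = 60.0 / old_compile_min;   /* 6 compiles/hour */
    double new_perf = 60.0 / new_compile_min;   /* 1.2 compiles/hour */

    /* Degradation is expressed against the old performance, not the old time */
    double degradation_pct = (old_perf - new_perf) / old_perf * 100.0;  /* 80% */

    printf("old: %.1f compiles/h, new: %.1f compiles/h\n", old_perf, new_perf);
    printf("performance degraded by %.0f%%, i.e. it is now at %.0f%% of what it was\n",
           degradation_pct, 100.0 - degradation_pct);
    return 0;
}
```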
    1
  1223. 1
  1224. 1
  1225. 1
  1226. 1
  1227. 1
  1228. 1
  1229. 1
  1230. 1
  1231. 1
  1232. 1
  1233. Yup. I know that Sweden is testing that, adding an overhead cable on some highway, on the first lane, so they can have electrically-driven trucks. I heard about that roughly 2 years ago, I think, but haven't heard more since. To be frank, I haven't searched for it either, so for all I know, it might already be in place and used by actual trucks. Also, in these videos, I think they lean a bit too hard on the idea. Yeah, it's overhyped, because Elon is Elon. But the Semi itself, I think, can make actual sense in a few scenarios. Like hauling toilet paper or similarly high-volume, low-weight cargo. And for short to medium trips (up to 800 km / 500 miles). One metric that Musk showed was that 80% of hauls are below 250 miles. I haven't seen anyone dispute that. So there are certainly places where a truck with cheaper fuel, that's more eco friendly, less noisy, less stinky and less locally-polluting does make sense, including economically. As for Musk saying it's better than diesel in all regards... yeah, no, not even close. I think it might get there in 20 years, when diesel will be required only in quite niche situations (hauling in winter somewhere very high up, for example, or actual 2000 mile trips in the middle of nowhere). I do expect that in 20 years the batteries, at worst, will be a bit better and a bit cheaper, and there will be enough charging stations or, for these trucks, maybe even battery swap stations, so they don't have to be fast charged, which does degrade the battery a bit faster. Unless the actual battery, compared to the usable battery, is much bigger in capacity, which I kind of doubt. Anyway, I expect, at minimum, in 20 years, to have a 500 mile fully loaded truck with, say, only 5 tons of battery and a cargo capacity only 3-4 tons less than a diesel truck's.
    1
  1234. 1
  1235. 1
  1236. 1
  1237. 1
  1238. 1
  1239. 1
  1240. 1
  1241. 1
  1242. 1
  1243. 1
  1244. 1
  1245. 1
  1246. 1
  1247. 1
  1248. 1
  1249. 1
  1250. 1
  1251. 1
  1252. 1
  1253. 1
  1254. I do agree that what the DoJ proposes is an overreach. And I think that's why it will not pass/continue. However, I totally disagree that it will kill the web or anything remotely close to that. There's plenty of innovation done outside Google. If Chrome disappeared today, sites and products wouldn't break, as Chromium can serve them with absolutely no issues. If Chrome disappeared today, we wouldn't be stuck with the current version of the web; it would continue to be improved. Maybe slower, but we'd be absolutely fine. I hate this type of argument so much. That if we don't sell our soul to the devil (figuratively) then we won't be able to do anything. Ok, maybe 10 or 15 years ago it might've been closer to the truth, but even then I don't agree that we wouldn't have had the benefits of modern day browsers without Chrome. It would've taken longer, most likely. But nobody would've died because of that. And it's even less of a concern now, with thousands of non-Google engineers contributing to web standards and browser code. I would also not discount Firefox so quickly. It's true that most of Mozilla's revenue comes from Google, but a lot of that revenue is wasted on useless projects (usually DEI stuff) and on the woke staff itself that managed to get the leadership. Firefox developers don't get a lot of cash. Not getting the money from Google might actually be better for Mozilla and Firefox, as it might be freed from the woke management and steered back into competence and relevancy by engineers and ACTUAL free speech activists. And if they make a dedicated Firefox-devs-only donation category, I'm sure there will be people chiming in (myself included), enough to at least keep the current funding the developers get.
    1
  1255. 1
  1256. 1
  1257. 1
  1258. 1
  1259. 1
  1260. 1
  1261. 1
  1262. 1
  1263. 1
  1264. 1
  1265. The way I see it, the main advantage of this "do one thing and do it well" is easy composition and less/no code duplication. That is, it's not important to follow it to the letter, but it's a nice, short way of conveying the goal of achieving several benefits, which I'll try to list below: the program or library has to be small enough so it can be used in a chain of commands or inside a bigger app, with a minimal footprint. If everything follows this philosophy, then it's also easy to replace without having dependency hell, coupling issues and performance struggles. This "small enough" pushes the developer to be as narrowly focused as possible on the thing it's doing, and when it does need to do more things, to also try to see if it can use another program or library. This also allows projects to have few developers, since they only have to focus on their specific program and domain. To give an example (I don't know if the reality is anywhere close to what I'll present, but it seems like a nice example), the Lynx browser. Its devs can simply use curl internally to fetch the resources and only deal with building the DOM and rendering it. Internally, curl might also use an SSL library and a TCP library to handle the lower level networking and only focus on HTTP & related standards. In this example, if HTTP 3 gets released (woohoo) it might get implemented into Lynx with minimal effort, by just updating the curl library (well, usually minimal, there might be breaking changes or new stuff to take care of). Do Lynx developers have to care about HTTP 3? Nope. Do they have to care about the available ciphers used for HTTPS connections? Nope. Do they have to care about opening sockets and their timeouts and buffer sizes? Nope. They can focus on their specific thing. And that means they can also know very little about the underlying features, meaning less experienced developers can start to contribute; the project has a lower barrier of entry. Having a smaller project/library also allows having manageable configurations. I mean, it can be made very configurable (including being modular) without getting overwhelming, because it's in the context of a rather small program/library. Another interesting example is ffmpeg. As a program and CLI command, it's actually pretty big. But it's still made so it's easy to use with other tools and programs. Of course, in the real world, the separation cannot be made perfectly. For one developer the big thing A would be split into b, c and d. Another developer would see A split into b, c, d, e and f, with each also split into 2-3 smaller programs, and with one of them being used in 2 places (say, program t is used by both b and e). As you can see, technically the second split is better from the "do one thing and do it well" perspective, but it's also much more complex. This cannot go ad infinitum. Theoretically, it would be nice if we had only system functions and calls and we'd only run compositions of them. But in real life it's never going to happen. Also in real life, in the example above, a third developer might see the split of program A into B, C, D and E, with B being, say, 80% of what b does in the vision of the first developer + 50% of what c does in the vision of the first developer. And so on. And there would be arguments for all the approaches that make sense. Lastly, doing one thing and doing it well allows for easier optimisation.
Especially in the context of a program or library to be used in bigger projects or commands, having it well optimized is important. And because the program/library is rather small and focused on one thing, that is, it's usually within a single domain, it's easier for the developer to go deep into optimisation. Of course, in the extreme case, having one big monolithic program can allow for better overall optimisation, but then you'd also have to code everything yourself. Regarding the Linux kernel, I'd say that it achieves the goals of "do one thing and do it well" perfectly, because it's modular (and each module does one thing) and all of them play nice with each other and with userspace. The problem that I see with systemd is that its binaries, while neatly split, are basically talking their own language. They cannot be augmented or replaced by the normal tools we already have (well, sometimes they can be augmented). Somebody would have to create a program from scratch just to replace, say, journald. And this replacement program would be just for that. It's this "we're special and we need special tools" thing that is annoying. Ten years from now, if one of the binaries is found with a massive flaw, well... good luck replacing it. Oh, and it's critical and you cannot run systemd without it, so you have to replace ALL the system management tools? Oh well, warning shots were fired, those who cared listened...
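As a rough illustration of the Lynx/curl delegation idea above, here is a minimal sketch using libcurl's easy API. The URL is a placeholder and the "render" step is just a byte-counting stub standing in for whatever a real browser would do with the response:

```c
/* Minimal sketch of "delegate the fetching to curl".
   Build with: cc fetch.c -lcurl   (assumes libcurl development headers) */
#include <stdio.h>
#include <curl/curl.h>

/* The "browser" part: here it just counts bytes instead of building a DOM. */
static size_t render_stub(char *data, size_t size, size_t nmemb, void *userdata) {
    size_t *total = (size_t *)userdata;
    (void)data;
    *total += size * nmemb;
    return size * nmemb;   /* tell curl we consumed everything */
}

int main(void) {
    size_t bytes = 0;
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* All of HTTP(S), TLS, sockets, timeouts, redirects etc. live in libcurl;
       this program only cares about what to do with the received bytes. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.org/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, render_stub);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &bytes);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK)
        printf("fetched %zu bytes, now 'render' them\n", bytes);
    else
        fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```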
    1
  1266. 1
  1267. 1
  1268. 1
  1269. 1
  1270. 1
  1271. 1
  1272. 1
  1273. 1
  1274. 1
  1275. 1
  1276. 1
  1277. 1
  1278. 1
  1279. 1
  1280. 1
  1281. 1
  1282. 1
  1283. 1
  1284. 1
  1285. 1
  1286. 1
  1287. 1
  1288. 1
  1289. 1
  1290. 1
  1291. Brodie, I agree that this being opt-out is bad. However, there are other things I disagree with, especially the points that the CTO discussed. I fully disagree with your take at 12:05, "This system does not do anything about stopping those economic incentives", and at 14:44, "The way you get this fixed is by talking with the regulators clamping down on the advertisers [...] and THEN you can implement the system that gives them very minimal data." With the above, what you are suggesting is that, for an unspecified amount of time, businesses spend money on completely random ads instead of targeted ones - basically throw money in the air and take a flamethrower to it - and then, in some mythical future, they can get some data so they can be back on targeted advertising. And that somehow they won't be strongly incentivised to find and use ways around these regulations (which often get more and more terrible). You're also saying that providing the service beforehand, so businesses can switch to it within a specified window of time before the regulators come raining down on them, is somehow bad or useless. That somehow they'll have the same exact incentive to spend money to find or make ways around this. WTactualF. Please try having a business first, maybe it will be more apparent that what the CTO did and said on this approach makes the most sense. To put it more simply, you're asking people who don't have a garage to park their car in to first sell their cars, be carless for some time, and then buy them back once the authorities have built some parking lots. Nobody will do that. And there will be a massive backlash. Learn how things work in a society. Learn to think about how it is for the other side. And they ARE doing something about the dystopian state of the web today. So far I haven't heard any other actual solution, something that is actually feasible, that would be both useful and workable and could actually be implemented. Another thing, at 10:16: "If you're unable to explain to the user in a short form why a system like this is beneficial to them, why they would want a system like this running on their computer, you shouldn't be doing it". I agree that they should explain it to the user. But I disagree on the "shouldn't be doing it" part. Many things are somewhat complicated and many people wouldn't understand because they're not that interested. Frankly, whether something is "explained enough" is very subjective. It can certainly be summarized quite shortly, but some would argue that's not explained enough, and a more proper explanation would then be too long for some people. From "hard to explain" to "don't implement it" is a LOOONG road, and "hard to explain" shouldn't on its own be the reason for not implementing something. People receive drugs and medication, or even things like surgeries, with very little explanation too. You can argue that maybe it shouldn't be like that, but compared to our case, this is orders of magnitude less damaging in any sense of the word, so in the grand scheme of things it can be explained very shortly, and whoever truly wants to understand it can find that in the code or somewhere on the web in a blog post or a video or something.
    1
  1292. 1
  1293. 1
  1294. 1
  1295. 1
  1296. 1
  1297. 1
  1298. 1
  1299. 1
  1300. 1
  1301. 1
  1302. 1
  1303. 1
  1304. 1
  1305. 1
  1306. 1
  1307. 1
  1308. 1
  1309. 1
  1310. 1
  1311. 1
  1312. 1
  1313. 1
  1314. 1
  1315. 1
  1316. 1
  1317. 1
  1318. 1
  1319. 1
  1320. 1
  1321. 1
  1322. 1
  1323. 1
  1324. 1
  1325. 1
  1326. 1
  1327. 1
  1328. 1
  1329. 1
  1330. 1
  1331. 1
  1332. 1
  1333. 1
  1334. 1
  1335. 1
  1336. 1
  1337. 1
  1338. 1
  1339. 1
  1340. 1
  1341. 1
  1342. 1
  1343. 1
  1344. 1
  1345. 1
  1346. 1
  1347. 1
  1348. 1
  1349. 1
  1350. 1
  1351. 1
  1352. 1
  1353. 1
  1354. 1
  1355. 1
  1356. 1
  1357. 1
  1358. 1
  1359. 1
  1360.  @joeyvdm1  Oh wow, you really like this stuff :)) Well, maybe I heard/understood/remembered incorrectly. From what I remember (I also watch AdoredTV and Coreteks), there was something about how, because you only have one CCX, the cache is now available to every core directly, without the need for accessing the other CCX or duplicating the data in the local cache. The thing that might make our communication a bit harder here is that IPC is too broad a term, because it can be influenced by many things. Having a lower latency for... anything really, I see that as an IPC increase. I don't know why companies wouldn't market it as such. Instead of saying "We got 5% lower latency, 5% better cache efficiency and 5% IPC", they will simply say "We got an average of 16% IPC increase". Because that's a bigger number, and very easy to understand: on average, everything will be 16% faster. It's nerds like us who want to know what that 16% is made of, to better assess which workloads will be impacted more. So, I still stand by my claim that the IPC increase will encompass all the improvements, since that's the idea of the IPC metric. Not a "pure workload, no memory fetching or saving, no shared memory, no multicore communication" metric. And, even if I'm wrong, I don't want to get too hyped up. Seeing in some places that Zen 3 will have "only 10-15%" IPC increase, as if that's not great, only makes me sad. Even with a 10% IPC increase and absolutely nothing else improved, my quick mental math tells me that this will be enough for Ryzen CPUs to match Intel in gaming (on average; in some games it will be better), while obliterating them in all other aspects. That's still something that I can't wait to see. Cheers!
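For the "quick mental math" part, here's roughly the approximation I mean, with made-up illustrative numbers rather than real Zen 2 / Zen 3 figures:

```c
#include <stdio.h>

int main(void) {
    /* Made-up illustrative numbers, not actual Zen 2 / Zen 3 specs */
    double zen2_ipc = 1.00;   /* normalized instructions per clock */
    double zen2_ghz = 4.6;
    double zen3_ipc = 1.10;   /* "only" a 10% IPC uplift */
    double zen3_ghz = 4.8;    /* plus a small clock bump */

    /* To a first approximation, single-thread performance ~ IPC * frequency */
    double zen2_perf = zen2_ipc * zen2_ghz;
    double zen3_perf = zen3_ipc * zen3_ghz;

    printf("relative single-thread uplift: %.1f%%\n",
           (zen3_perf / zen2_perf - 1.0) * 100.0);
    return 0;
}
```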
    1
  1361. 1
  1362. 1
  1363. 1
  1364. 1
  1365. 1
  1366. 1
  1367. 1
  1368. 1
  1369. 1
  1370. 1
  1371. 1
  1372. 1
  1373. 1
  1374. 1
  1375. 1
  1376. 1
  1377. 1
  1378. 1
  1379. 1
  1380. 1
  1381. 1
  1382. 1
  1383. 1
  1384. 1
  1385. 1
  1386. 1
  1387. 1
  1388. 1
  1389. 1
  1390. 1
  1391. 1
  1392. 1
  1393. 1
  1394. I'm on Gentoo. It's a distro that I can't fully recommend to somebody without knowing that person; in general you'd know best whether Gentoo is for you or not. That being said, I've been on it for almost a year and a half (since Dec 2023-Jan 2024) and have had no significant problems of any sort. No system instability at all. Gentoo is very good for learning and very good for control and customization. Because of the USE flags, you can customize what an individual program/app/package has or doesn't have, allowing you to enable experimental or esoteric features or remove things you don't want or don't need. It also allows you to have the binaries optimized for your specific CPU, which can help performance. If you happen to want patches for some programs, you can streamline that with Gentoo, so those programs are updated with the rest of the programs while still having your patches applied. One thing I have to add... the compile times are really exaggerated IMO. The laptop I'm using is almost 9 years old. It's from 2016. It has a 4 core Intel i7-6700HQ CPU. While it was high-end in 2016, now it's equivalent to a dual core CPU. It does help that I have 64 GB of RAM. Still, knowing that I don't have a fast system, the only program that's annoying to upgrade/compile is Chromium. The last compiles took about 14 hours, when not doing anything else (I just left it running while I went out). Everything else, no exceptions, takes up to 2 hours. Firefox is between 60 and 80 minutes. I'd say that, on average, I have about 1 (one) round of updates per week that takes more than 30 minutes (for everything that's new, not just a single program), and that's while I'm doing something else, like watching YT and commenting (which is pretty lightweight, true). I'm sure that if I had more Chromium based browsers, each would take those 14 hours to compile. It's true that I've also been too lazy to dig deeper into ways I can speed it up. And I don't have KDE or GNOME, which I know are quite big, so those might add a bit of compile time too. Still, if you have something low end or simply don't want to deal with the bigger compiles, there are binary packages. Not for everything, but the browsers and the bigger packages in general have a precompiled binary from Gentoo.
    1
  1395. 1
  1396. 1
  1397. 1
  1398. 1
  1399. 1
  1400. 1
  1401. 1
  1402. 1
  1403. 1
  1404. 1
  1405. 1
  1406. 1
  1407. 1
  1408. 1
  1409. 1
  1410. 1
  1411. 1
  1412. 1
  1413. 1
  1414. 1
  1415. 1
  1416. 1
  1417. 1
  1418. 1
  1419. 1
  1420. 1
  1421. 1
  1422. 1
  1423. 1
  1424. 1
  1425. 1
  1426. 1
  1427. 1
  1428. 1
  1429. 1
  1430. 1
  1431. 1
  1432. 1
  1433. 1
  1434. 1
  1435. 1
  1436. 1
  1437. 1
  1438. 1
  1439. 1
  1440. 1
  1441. 1
  1442. 6:01 "It's impossible to leave yourself in an unbootable state" - my @s$! Have a "lucky" grub update and then tell me how well that quote aged. I kinda get the reproducibility, but I don't see why it's SUCH a big thing. It's... nice. Surely not everybody needs it. It does sound good for a dev environment though, I agree. The "every program has its own libraries", aka kind of (or actually, I don't know) statically linked, is... not that good? I mean, it's very nice to have the ability when you need two or more libraries at different versions for different programs. But it's also good to have a library only once if you only need it once, instead of potentially 100 times. And, yeah, in some instances I do prefer it to virtualization (like when it's not about security but about making sure something has all its dependencies properly set, with the correct versions). But I don't want this to be the default for all my programs. Feels like waste. That graph with the number of packages and fresh packages... it smells funny to me (to put it mildly). Any source on that? I find it hard to believe that Nix has that much over all the rest; it's either through a gimmick (like counting each version of a package as an individual package) or some other kind of BS. At least that's what I think. EDIT: ok, I checked a bit, apparently it's just super easy to contribute to it, which, at least partially, is a good reason for a high number of packages. So I guess the high number is legit. Neat! So... yeah, I'll stick with Gentoo. It's still the most powerful and configurable/customizable of the bunch. Also cutting edge and stable, mind you. And you can choose whether you want a more stable version of an app or a more bleeding edge version. And theoretically you can set up automatic updates, but it wouldn't be a good idea; sometimes an update needs a bit of manual care, like a configuration change. Gentoo tells you nicely about this, but you won't be able to see it if it's running in the background. But starting the update and seeing if there are extra things to check/do barely takes a minute anyway, after which you can leave it compiling in the background, so I don't see the appeal anyway.
    1
  1443. 1
  1444. 1
  1445. 1
  1446. 1
  1447. 1
  1448. 1
  1449. 1
  1450. 1
  1451. 1
  1452. 1
  1453. 1
  1454. 1
  1455. 1
  1456. My several cents, that I haven't seen from other commenters: - it asks for unsigned ints but reads them as %hi instead of %hu. But this is a very minor thing, I agree - I don't understand for the life of me the requirement to enter the types in order. You can handle all cases TRIVIALLY by simply having something like switch (ptr->type) { case 124: /* fall through - or whatever signal your compiler wants so it doesn't flag this as a mistake */ case 142: /* fall through */ case 214: /* fall through */ case 241: /* fall through */ case 412: /* fall through */ case 421: arrayCreation(0, 52, 62, 92, ptr); break; } - the example above was the most complex one. When there are only two "types" chosen, there are only two cases (12 and 21, or 24 and 42, etc). - while writing the above, I just noticed that the "124" case is missing. tsk tsk tsk, -1 point! - I think the main idea behind most tips presented here is missing: the whole code, I'd say, is quite ok and readable and maintainable. Because it's small. When you get into a project with hundreds of source files, thousands of lines of code and hundreds to thousands of functions, THEN having comments is important. When you have those "magic" numbers spread across 20 files, THEN you'll be doomed if you have to update them, so using a define is better, since it can be trivially updated - especially since it's a small program, I don't like the idea with the enum and the extra parsing, at least not given as a blanket statement. That's basically overengineering. Feature creep. You're creating extra code, including RUNTIME code (which is why I dislike it), for a POTENTIAL FUTURE. You're making the code more complex, and it runs slower, just because "it feels right". There should be disclaimers. Like, do you know with decent certainty that this will not be updated (like getting that 5th type)? If yes, then the current code is perfect. If you know it will be updated, or if you're not sure, THEN, maybe, think of a more extensible solution. Still, if it's something small, it might still be ok to write it like that and only refactor it when it's needed. Don't fear refactorings, since you cannot avoid them. Instead embrace them and get used to them.
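Spelled out as a compilable sketch of the fall-through idea (arrayCreation, its signature and the three-digit type codes here are just stand-ins for whatever the reviewed code actually uses):

```c
#include <stdio.h>

/* Hypothetical stand-in for the arrayCreation() call from the comment above;
   the real function in the reviewed code may look different. */
static void arrayCreation(int a, int b, int c, int d, int *ptr) {
    printf("creating array for type %d with params (%d, %d, %d)\n", *ptr, b, c, d);
    (void)a;
}

int main(void) {
    int type = 214; /* e.g. the user typed the three types in this order */

    /* All orderings of the same three digits fall through to one handler,
       so no "enter the types in ascending order" requirement is needed. */
    switch (type) {
        case 124: /* fall through */
        case 142: /* fall through */
        case 214: /* fall through */
        case 241: /* fall through */
        case 412: /* fall through */
        case 421:
            arrayCreation(0, 52, 62, 92, &type);
            break;
        default:
            printf("unknown type combination: %d\n", type);
    }
    return 0;
}
```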
    1
  1457. 1
  1458. 1
  1459. 1
  1460. 1
  1461. 1
  1462. I think what they wanted to avoid (but put 0 effort into explaining), and which is actually understandable, is the CT being used as a home battery a lot while still claiming the 8 years of warranty: you drive 0 miles, but degrade the battery as if you drove 1,000,000 miles in those 8 years, then ask for a replacement after 7.5 years. 1,000,000 miles sounds like too much? Let's do some math to see if it's possible. Let's assume that the battery has 100 kWh of energy capacity. It's good not to use it from 0% to 100%, so let's say the usable capacity is 60 kWh. 11.5 kW means that, if full, it will discharge in, say, 6 hours, rounding up a bit. If you keep recharging and discharging within that window, you need roughly 10 hours of discharging to put the equivalent of 100% of the battery capacity through it. That means it's totally doable to have a full charge-discharge cycle in a day. Also, that would be like driving about 300 miles. What does 7 years mean in terms of days? 7 * 365.25 = 2556.75 days. Let's round that to 2500. So, 2500 days means potentially 2500 battery cycles. I read somewhere that they officially said their batteries can be used for 1500 cycles. So that's already over the limit. Also, 300 miles * 2500 = 750,000 miles. Ok, so I was off, but it's still 5 times over what they would've covered if you had used it to drive, not to power a home or whatever. And it is over its expected lifetime. To put it another way, 150,000 miles would mean it only needs 150,000/300 = 500 cycles, about 1/3 of the battery lifespan. Still doesn't excuse what they wrote. Or Elon being a gigantic jerk that needs to be jailed, along with the many people that enabled him to go this far.
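The same back-of-the-envelope math as a small program, using the assumed numbers from this comment (100 kWh pack, 60 kWh usable, 11.5 kW output, ~300 miles per cycle), not official Tesla figures:

```c
#include <stdio.h>

int main(void) {
    /* Assumptions taken from the comment above, not official Tesla figures */
    const double pack_kwh       = 100.0;  /* total pack capacity */
    const double usable_kwh     = 60.0;   /* usable window, e.g. 20%..80% */
    const double discharge_kw   = 11.5;   /* sustained home-power output */
    const double miles_per_cyc  = 300.0;  /* rough driving equivalent per cycle */
    const double days           = 7 * 365.25; /* warranty-ish period */

    double hours_per_discharge = usable_kwh / discharge_kw;  /* ~5.2 h */
    double hours_per_full_pack = pack_kwh / discharge_kw;    /* ~8.7 h */
    double max_cycles          = days;                       /* ~1 cycle per day */
    double equivalent_miles    = max_cycles * miles_per_cyc;

    printf("Discharging the usable %.0f kWh takes %.1f h\n", usable_kwh, hours_per_discharge);
    printf("Putting a full pack's worth of energy through takes %.1f h\n", hours_per_full_pack);
    printf("Over %.0f days that is up to %.0f cycles, or ~%.0f miles equivalent\n",
           days, max_cycles, equivalent_miles);
    return 0;
}
```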
    1
  1463. 1
  1464. 1
  1465. 1
  1466. 1
  1467. 1
  1468. 1
  1469. 1
  1470. 1
  1471. 1
  1472. 1
  1473. 1
  1474. 1
  1475. 1
  1476. 1
  1477. 1
  1478. 1
  1479. 1. How is that different from simply having a pseudonym online? Do you feel bad in any way that you have to call me Winnetou17, which, let me tell you, is not my real name? Do you feel like you have to lie about other parts of our conversation because of this? 2. Stop communicating, as part of the project, on instant messaging, which is more emotion prone. You can still have discussions in GitHub issues and on mailing lists, where people usually put in more thought and effort and aren't simply going to make a bad joke, or quickly go into a flame war because somebody typed a bit faster than they were able to think. It's a bit strict, but I see the value, it should work. 3. How can not joining as a contributor stop you in any way from being interested? I am a contributor to exactly 0 (zero) FOSS projects, even though I am a developer, and I'm still very much interested in a good bunch of FOSS projects. When I retire, I think I'll even start contributing. Are you actually trying to troll, or do you really think that what DT said means to stop being interested? 4. Here DT could've given more examples. The idea is to separate things and to have a project be about a single thing. Like, a browser is about being a browser: you enter an address, it fetches the data from that address and renders it. It should not save the planet, care about the poor kids in Africa and so on. Those should be different projects. This allows the project that is about software to stay apolitical and in general focused, and not delve into endless talks about inclusivity and other flame wars like that (though a flame war about micro vs monolithic kernels is totally ok)
    1
  1480. 1
  1481. 1
  1482. 1
  1483. 1
  1484. 1
  1485. 1
  1486. 1
  1487. 1
  1488. 1
  1489. 1
  1490. 1
  1491. 1
  1492. 1
  1493. 1
  1494. 1
  1495. 1
  1496. 1
  1497. 1
  1498. 1
  1499. 1
  1500. 1
  1501. 1
  1502. 1
  1503. 1
  1504. 1
  1505. 1
  1506. 1
  1507. 1
  1508. 1
  1509. 1
  1510. 1
  1511. 1
  1512. 1
  1513. 1
  1514. 1
  1515. 1
  1516. 1
  1517. 1
  1518. 1
  1519. 1
  1520. 1
  1521. 1
  1522. 1
  1523. 1
  1524. 1
  1525. 1
  1526. 1
  1527. 1
  1528. 1
  1529. 1
  1530. 1
  1531. 1
  1532. 1
  1533. 1
  1534. 1
  1535. 1
  1536. 1
  1537. 1
  1538. 1
  1539. 1
  1540. 1
  1541. 1
  1542. 1
  1543. I remember, though I might be wrong, that Intel wanted to have a yearly release cycle. For now, Battlemage seems to be arriving exactly 2 years after Alchemist, but like you said, I think they had to wait to sort out the driver issues. The driver is still not perfect, but it's actually usable for a good bunch of people now. What I fear most with Battlemage is that it's again a bit too late. If its top SKU fights with the RTX 4060 or RTX 4070 or RX 7700 XT, at 250W... and then both NVidia and AMD launch a new xx70 or xx60 class GPU 5 months later... then Battlemage would again have to be extremely low priced in order to be competitive... which might very well mean that it's sold at cost by Intel. If I'm not mistaken that was kind of the situation with Alchemist. And if it's the same with Battlemage, well, Intel isn't exactly doing that well financially, so I'm not sure they can support it if it doesn't turn some profit. The less gloomy part is that the same architecture and drivers will be used in Lunar Lake and the next CPU generation (rumors say that Arrow Lake has Alchemist+, not Battlemage). And those might sell quite well. Right now the MSI Claw is basically the worst handheld, buuut, with some updates and tuning, it can... get there, so to speak. I don't expect it to win against the ROG Ally or Steam Deck, buut it can get to be kind of on the same level, and with no issues. I'm really curious how SteamOS (or Holo or whatever it was called) would run on the MSI Claw. Anyway, an MSI Claw 2 might actually be competitive this time. And be launched on time. Still speculation, but there is hope.
    1
  1544. 1
  1545. 1
  1546. 1
  1547. 1
  1548. 1
  1549. 1
  1550. 1
  1551. 1
  1552. 1
  1553. 1
  1554. 1
  1555. 1
  1556. 1
  1557. 1
  1558. 1
  1559. 1
  1560. 1
  1561. 1
  1562. 1
  1563. 1
  1564. 1
  1565. 1
  1566. 1
  1567. 1
  1568. 1
  1569. 1
  1570. 1
  1571. 1
  1572. 1
  1573. 1
  1574. 1
  1575. 1
  1576. 1
  1577. 1
  1578. 1
  1579. 1
  1580. 1
  1581. 1
  1582. 1
  1583. 1
  1584. 1
  1585. 1
  1586. 1
  1587. 1
  1588. 1
  1589. 1
  1590. 1
  1591. 1
  1592. 1
  1593. 1
  1594. 1
  1595. 1
  1596. 1
  1597. 1
  1598. 1
  1599. 1
  1600. 1
  1601. 1
  1602. I agree that the supply chain is the reason they cannot repair things in a reasonable timeframe. BUUUT, it's completely their making, their choice, their problem, their mistake for having it this way. I do not see this as a good enough argument. Legislation should not care about this. If you can produce new cars, then you should be able to provide parts for the SAME FREAKING CARS. If you have a "very lean supply chain", that's a you problem, not a me problem. Overall, I think there should be legislation stating that whatever you make, you cannot have a license to sell it unless you can provide service in a timely manner and provide replacement parts in a timely manner (I know, I know, "timely" is too subjective, I'm only sketching the idea). If you do not provide them, then you are forced to release the schematics for the product and all of its parts. In the case of service, those who still have warranty should be able to get a full refund. If you do not have the ability to fully provide the schematics for the product and all of its parts, then you cannot get the license to sell, easy. Ok, I know what I wrote above is currently impossible. Some parts cannot be made in-house and also cannot be had with schematics, as they come from 3rd party vendors who do not care. In this case, a) there can initially be specific exceptions made, and b) longer term, the manufacturers of those parts should be liable under the same rule as above: if they don't provide the parts for general sale, then they are forced to release the schematics. I think that in both cases, when acquiring a license to sell, the schematics should be provided upfront to a government entity. So when needed, the schematics can be made public without interference or possible "accidents" from the original company. You know, sometimes I cannot help but think how far we would have gotten if we weren't so petty. What I described above is so much extra work just because we cannot have the common sense and a bit of moral integrity to not steal and profit from others. Sigh
    1
  1603. 1
  1604. 1
  1605. 1
  1606. 1
  1607. 1
  1608. 1
  1609. 1
  1610. 1
  1611. 1
  1612. 1
  1613. 1
  1614. 1
  1615. 1
  1616. 1
  1617. 1
  1618. 1
  1619. 1
  1620.  @bvd_vlvd  Don't mind Terry. He usually likes to tell people what to do, without bothering to properly explain his view. But he can give valuable insight at times. Back to the topic, I'm a systemd hater. To be clear, I don't know what state it is in now; from what I know it is getting better (maybe it even got ok), but for a long period of time it had a monolithic problem. Yes, that thing with "do one thing and do it well", the unix philosophy, was not embraced. The best example I have of why that is bad, and why systemd, at least at that point, was bad, is CVE-2018-16865. It's a vulnerability that was found in journald in 2018, I think (judging by the CVE id). It was, of course, later fixed. The problem is that it was a high severity vulnerability and... you couldn't disable/remove journald at that time without getting rid of systemd completely. Because it was so tightly coupled. Imagine being a system administrator and knowing you have a vulnerability that you are forced to live with until a fix arrives. On a critical part of the system. Not nice. The thing is that we DO know better. We knew that in 2014 too. We know to make things that are smaller and can have parts disabled or replaced (for security, customizability and reusability). And this was pointed out before the CVE happened: that being so big and monolithic is a massive risk down the line. Imagine the same scenario as the above one, but with an uncontested systemd, 5-10 years later, with everybody using it. Another security hole found that cannot be instantly disabled until the fix is ready would pose a risk similar to the Windows XP-era vulnerabilities that infected countless computers. And what made me jump from "not for me" or "dislike" to "hate" is the way it was pushed and became mainstream, with the flags that were raised being completely brushed off as nonexistent or irrelevant, or with other stupid excuses. It was the insistence that it was perfect and that those who were against it were somehow against progress, or sys-V-init lovers. Systemd is clearly very potent, full of features and capable. And when it appeared, it kickstarted people getting out of the sys-V mess and in general having better init systems and better service managers. But the hate appeared (rightfully so, I'd argue) because of how... uhm... ignorantly it was pushed, so to speak.
    1
  1621. 1
  1622. 1
  1623. 1
  1624. 1
  1625. 1
  1626. 1
  1627. 1
  1628. 1
  1629. 1
  1630. 1
  1631. 1
  1632. 1
  1633. 1
  1634. 1
  1635. 1
  1636. 1
  1637. 1
  1638. 1
  1639. 1
  1640. 1
  1641. I'm part of this statistic change 😀 Long overdue, at the start of this year (well, just before the year change) I finally installed my first Linux on bare metal and switched to it. I went directly with Gentoo. In these 8 months, I've only logged back into Windows 10 (which still functioned flawlessly, I might add, one of the reasons I switched so late) a total of 3 times, one of which was only to check that it's still working and to do updates, just in case. I'm not surprised that Gentoo isn't in the statistics, it's very niche by its nature, and while I love it and I think it's, by a good margin, the distro with the best customization possible (in an easy to do manner), I think it will always be niche. I also have to point out (fortunately at least one other comment saw it) that the Steam desktop share is flawed, and it is trivial to see why. SteamOS users are Steam Deck users, and the Steam Deck, last time I checked, is not a desktop PC nor a laptop, even if you keep it on a desk or in your lap. So the Steam desktop PC Linux share is actually about 1%. Which, I do have to admit, is significantly lower than I expected. Steam somehow saw the smallest Linux increase, even though it should've actually increased the most, since it has two vectors: desktop PCs AND the Steam Deck. But then again, the surveys only cover a random portion of users, so there's always a chance of the stats being skewed by how the random picks happened. That's why it's better to wait several months to see a trend, like Bryan said in the video.
    1
  1642. 1
  1643. 1
  1644. 1
  1645. 1
  1646. 1
  1647. 1
  1648. 1
  1649. 1
  1650. 1
  1651. 1
  1652. 1
  1653. 1
  1654. 1
  1655. 1
  1656. 1
  1657. 1