YouTube comments of Winnetou17 (@Winnetou17).
-
On the "Google doesn't support JPEG XL because of CPU costs" point... that's a cop-out. They've been asked multiple times, including by companies like Adobe, to implement it, and they just straight up refused, while acting like nobody cares about JPEG XL. You can say whatever you want, but if you read the Chromium thread about it, it's clear as day that Google DECIDED against both JPEG XL and the community with no technical reasoning. Of course, they didn't say out loud why.
Also, CPU cost cannot be the only reason for rejecting it. There are loads of other use cases. Not to mention the forward-looking angle: JPEG XL is alone in the amount of features it provides/supports beyond just some 300x300 images on a random website. You can have wide color gamut and HDR in it. People taking high-detail, high-resolution photos can store the originals directly as JPEG XL. But Google directly jumped in to say "nope, you cannot view that directly in the browser. Think of the children in Africa!". I'll stop, my blood is starting to boil. I see that close to the end of the video these extra features were mentioned, so at least I'm at peace that Theo saw that.
- I'm totally on board with webp: it is good and it should still be supported as the bridge to JPEG XL.
- AVIF has nice features, but it's a dead-end technology; we shouldn't bother putting more effort into it.
- JPEG XL is truly the next generation of image formats and what we will use for decades. Hence why it's so infuriating that Google is actively working against it and massively slowing its adoption. No, I'm not buying the CPU cost argument. That's not the browser-support bottleneck. In a world where a website serves a 6.6 MB JPEG with a .png extension, there's PLENTY of room for JPEG XL.
Lastly, I really want to see benchmarks of just how slow JPEG XL supposedly is. I found a benchmark, apparently from 2020, which unfortunately doesn't have a comparison to webp. And just citing X or Y MB/s is meaningless when you don't have webp numbers on the same exact hardware, as the numbers can differ massively from one CPU to another. Not to mention that 4 years have passed; things might've changed.
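The fair comparison I'm asking for is easy to sketch: time each decoder CLI on the same images, on the same machine. Here's a minimal, hypothetical harness; the tool names and flags in the commented examples (djxl, dwebp) are assumptions about what you'd have installed, not verified invocations.

```python
import statistics
import subprocess
import time

def bench(cmd, runs=5):
    """Run a decoder command several times; return the median wall time."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Hypothetical usage, with the tools and a test image present:
# print(bench(["djxl", "photo.jxl", "out.png"]))
# print(bench(["dwebp", "photo.webp", "-o", "out.png"]))
```

Median over several runs smooths out cache and scheduler noise, which matters more than the absolute MB/s figure when comparing two codecs on one box.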
EDIT:
Initially I mentioned JPEG XL being lossless, but Theo corrected that very quickly.
Also mentioned about PNG: PNG not being compressed ? Duuuude... It's not a good compression for photographs (hence why people still use JPEG for those: to a human it looks basically the same unless you zoom in, and it's a smaller file as a JPEG). But PNG itself totally has compression, and there are multiple ways to reduce the file size further. There's a reason TinyPNG exists (or existed, I didn't check recently).
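The reason PNG compresses graphics well but photos poorly is its DEFLATE back end: flat, repetitive pixel data shrinks enormously, while high-entropy photo-like data barely shrinks at all. A quick stdlib demonstration of that (raw zlib on synthetic bytes, not an actual PNG file):

```python
import os
import zlib

flat = bytes(3) * 100_000      # 300 kB of identical bytes, like a solid-colour image
noisy = os.urandom(300_000)    # 300 kB of random bytes, like photographic noise

# DEFLATE collapses the repetitive data to almost nothing...
print(len(zlib.compress(flat)))
# ...but cannot meaningfully compress the high-entropy data.
print(len(zlib.compress(noisy)))
```

This is exactly why screenshots and logos stay small as PNG while photographs don't, and why lossy formats like JPEG exist for the latter.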
-
I think that Apple gets less hate because they're a more "opt-in" ecosystem / playground. That is, Windows is the default when you have no choice or don't know what to pick. So you'll use it and, in many cases, find things that irk you and some that you'll absolutely hate. But before going to Apple... you usually research it a bit before you choose to buy. That is, you already have some idea whether you'd like it, and there's a good chance you simply won't switch if there's a possibility of incompatibility, so to speak. Getting back to Windows being the default option: you're rarely forced to use Apple for, say, work.
So, bottom line of the above: when going Apple you usually know what you're getting into, which significantly reduces the number of people frustrated with using it. Some simply choose not to go Apple, having realized beforehand that what they're doing is simply incompatible (like most gaming). And the rest might've done some research and learned how to do the basic things.
Me personally, I hate Apple more than Microsoft. I don't deny that their engineers and designers usually do a very good job. Most people I know using Apple's products are happy; things work, well, for the things they're using them for. But Apple is so focused on control and on walling the garden as much as possible, so anti-consumer, that I do not care how good their products are.
Microsoft, to be fair, is not that far off. But, I guess, because of their current position, they have a much bigger garden, so closing it is much, much harder. Still, their push for requiring an online Microsoft account, and what they're doing with secure login and whatever comes after secure login (I forgot), that's also a no-no. I've used Windows since Windows 95 (I used 3.11 a bit too, but on old computers in some places) up to Windows 10, and I've been a happy Windows 10 user. I know I won't run Windows 11 by personal choice. I might have to, for work, but unless I REALLY have to for something specific, I won't install it on any of my personal systems. Even if their bullshit is bypassable.
-
I stand with Drew here. He might've been a bit nagging and snarky, but he was in the right. And the later responses from Go's side were quite lacking; I'd say much more impolite than Drew ever was in those threads.
First, like Brodie said, they initially said they could do a temporary solution and that they were working on a proper fix, or at least some improvements. Later, with no further notice, they acted like the temporary solution was the well-established-and-acknowledged one, with no more information on the improvement/solution.
Second, the responses like "because of boring details, we don't want to do this" (paraphrasing). But it's exactly the place and time to actually post those admittedly boring details, since at least those who are impacted can understand why it takes so long or why it's not practical to do, and the people in those threads can understand a lot about the subject. Not to mention that it would've also shed some light on the prioritisation: if it's something that is hard to do, it can be understood that there are more important things to do first. But no, they treated those impacted like kids who want too much and whose pains are not important.
Sigh
Edit: I forgot to add. When you have 3 requests from the same IP in the same second... I find it hard to believe that something can't be done about it, with reasonable effort. Example: 5:18
-
I can't help but nitpick that you say Stallman is asking things from users. Check 18:10 and especially what he says at 18:40. He's not asking users to do stuff; he's just doing things for himself the way he sees they should be done, and encouraging others to do the same.
Of course, as you say, it's very impractical now for most people to live with the same restrictions as Stallman. Hell, I'm posting this from a Windows laptop, because I'm that comfortable. However, the more people are aware of this and see the value of free software, the more people will get involved in it, and the more it will become a reality. No company or current government will do that. Ordinary people will have to work hard and unite in order to achieve it. So for now we have an OS and several tools which are free. Then some people make another tool or program free. Then some talented people, fed up with Intel's surveillance in their CPUs, team up and make a free CPU. And so on.
So, don't think too much of the good things you'd have to give up in order to be pure. Think of the ways you can help out (even by just spreading the idea and debating the topic) so that you and everyone else will have fewer and fewer things to give up in order to live a free life :)
-
I have some questions for AMD, though surely we'll never get answers, as their recent silence already answers many of them.
Anyway, here goes:
- Why was B550 so late ?
- Why was this support/compatibility announced so late ? Wasn't it known when Zen 2 launched ? If not, when was it known ? Even so, wasn't the lack of a guarantee known in advance ? Couldn't AMD have given some warnings going forward ?
- When making the decision to absolutely not support any 3xx or 4xx chipsets for Zen 3 CPUs, were any board partners consulted ?
- Wasn't AMD aware that many customers are buying B450 specifically to upgrade to a Zen 3 CPU ? Why wasn't there any communication ?
- Why is AMD still so silent about the matter ? How could a customer not think that AMD simply pulled an Intel out of greed and/or lack of care ? That is, simply abandoning a part of its customers and moving forward, because it's easier. How can an AMD fan give them the benefit of the doubt now ?
- Seeing customer and media perception (especially in light of MSI's promises) and not commenting on it, not attempting to address the issue as soon as possible (so there's as little damage as possible), isn't AMD concerned that the whole community will be less trustful of ANY marketing and promise going forward ? Isn't that a bigger price to pay than being honest and trying to work with the partners and the community ? Does anyone at AMD think it's OK to say now that "well, we only said Socket AM4 support, nothing about chipsets" ? How could the community at large appreciate the difficulty of providing this kind of support when no attempts at it were made and AMD is being so shady ?
Sigh
-
10:41: "We've got the input list just here... same elements as before, I think..." "Well, you wrote it, mate!"
=)) What's a Creel, playing both the teacher and the student for our amusement.
Question: for integers, if you know you're big-endian, isn't it easier to simply build the counter array with a power-of-two number of buckets (like 8 or 16) ? Then, instead of doing mod 10, simply read the relevant 3-4 bits, put the element in the relevant bucket, and so on. You also have to make sure to handle the sign bit in its own step. Also, would it help to construct all the count arrays at once, so you only traverse the array forward once ? It uses a bit more memory, but arrays of 16 integers... doesn't sound that bad. The worst case for normal integers would be comparing 64-bit signed integers, which would amount to 64/4 + 1 = 17 count arrays (the extra one is for the sign).
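To make the question concrete, here's a sketch of that idea in Python: an LSD radix sort on 4-bit digits (16 buckets), with the sign handled not as a separate pass but by the common trick of flipping the sign bit so signed values order correctly as unsigned keys. This is my own illustration, not code from the video.

```python
def radix_sort_16(nums, bits=32):
    """Sort signed integers that fit in `bits` bits, 4 bits (16 buckets) at a time."""
    # Flip the sign bit: negative numbers map below positive ones as unsigned keys.
    mask = (1 << bits) - 1
    pairs = [((n ^ (1 << (bits - 1))) & mask, n) for n in nums]
    for shift in range(0, bits, 4):          # bits/4 stable counting passes
        buckets = [[] for _ in range(16)]
        for key, n in pairs:
            buckets[(key >> shift) & 0xF].append((key, n))
        pairs = [p for b in buckets for p in b]
    return [n for _, n in pairs]

print(radix_sort_16([170, -45, 75, -90, 802, 24, 2, 66]))
```

For 32-bit values that's 8 passes instead of 10 with decimal digits, and extracting a digit is a shift-and-mask rather than a division, which is exactly the point of power-of-two buckets.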
Cheers!
-
@Marlow925 What you say about the car having the option to be the fastest when there are multiple stops is true.
However, there are two things which actually make this impractical, and which subway trains (a more-than-a-century-old idea) solve effortlessly.
1) Throughput: 4,400 passengers per hour is super low. Make another stop at a stadium and you'll have 50,000+ angry people who can't get to / out from the stadium. OK, not everybody will have to use this, but however you want to expand it, the very, very low capacity of single cars will immediately become very painful.
2) Cost: Having a lot of cars for 1-4 people is not efficient. Not only are the cars themselves quite expensive, but the operating cost will be quite high too. A car carrying 1-4 people while weighing well over a tonne is not efficient. Also, they'll have to recharge the battery (which will wear out) several times per day, which complicates things by quite a bit. A train can have powered lines, so there's no need for a battery that will inevitably become waste, and the weight-to-people ratio is much better, so it's effectively more efficient.
However you take it, if you have to scale it, the cars won't work, and the train will be the best option. Unless you want to keep it exclusive and expensive.
In the end, I really don't understand what people are so excited about. I mean, yeah, nice, a new route was made, some stuff is easier to reach. But the technology is absolutely nothing new. Some say the tunnelling was done much cheaper, but I really don't see that either. Maybe it's on the cheap side, but surely not 10 times cheaper or anything close.
-
OK, here's a hot take: I fully disagree with Drew. Well, most of his points are actually OK, and I agree with some (like decoupling GNU and the FSF, and the need for new licences). But I said "fully disagree" because I totally do not agree with the method of achieving said reforms.
There is this idea that the FSF is kind of tone deaf, that it is extreme in its philosophy. I think that is good. That it should stay that way (and, off topic, that Richard Stallman should stay in the FSF, including leading it). Why (to answer Brodie's question at the end) ? Because it is objectively pure. It is a gold standard. When the FSF endorses something, so far you can be sure that it actually, absolutely is free software: no "practical considerations", no "in a manner of speaking", no "for all intents and purposes" and so on. That is very valuable.
If someone like Drew wants to improve the situation and cannot do so with/within the FSF, for reasons like the FSF being very rigid, I don't understand this need to change the FSF when it has a clearly stated goal and philosophy. He should start another foundation and achieve those things there. A milder FSF, more in tune with the masses, would surely attract a lot of people who share the FSF's sentiment but are not willing to go to the lengths that Richard Stallman goes to (which is why I have huge respect for him). This doesn't have to be at the expense of the current FSF; it should exist alongside it.
Also, I cannot agree with the 5-year-old mentality that if red people are known to be good at something, then to make blue people good at it we should put blue people in charge. That's downright insulting for anybody with a 3-digit IQ. If the blue people want to weave, then they should start learning. The only cases worth addressing are those where they are not allowed to learn. Equality of opportunity, not equality of outcome. Leadership should be on merit. Assuming that blue people need to be put in charge automatically assumes that both red and blue people are tribalist, caveman-level people who cannot be impartial and cannot see value in people of the other color. How can I take this man seriously when he's so gigantically wrong about such a simple issue ?
Also, I have to agree that the "we're told that unfree JavaScript..." bit is stupid and cringe. That should be improved. By the FSF.
-
That's a good argument. Also, isn't having more complex instructions going to help with program size, which also helps with cache hits ? That is, using an overly simple example: we can make a 128 kB program with RISC, but with CISC that might be done in just 64 kB, and that 64 kB might fit exactly in the L1 cache, and lo and behold, speed improvements!
Not to mention that I think there should be cases where a single instruction is simply faster than multiple instructions... actually, with out-of-order and speculative execution, maybe not. The idea is that multiple instructions might need extra steps or registers, while with a single instruction the CPU could optimize further internally. Though I'm just theorizing here; no idea if it's actually the case.
-
In regards to "not forking Chromium" vs "using OS-based rendering": my take is that they simply use whatever rendering engine is installed.
The rendering engine (WebKit, for example) will be used by the browser like "hey, here's the HTML + CSS, give me pixels to show on the screen". Inherently, it cannot mess with privacy, since it cannot know where that HTML+CSS comes from; it has no access to things like cookies, it doesn't know about and cannot connect to anything external, and so on. I'm not so sure about the JavaScript side, but I'm pretty sure JS will be run separately, or sandboxed somehow.
So the browser, whatever DuckDuckGo is working on, will manage things like connecting to the server, fetching the files (what you can do with cURL), managing cookies and local storage, and so on, and it's at this level that you have privacy concerns.
So that's why I think they're going for this approach. No need to fork and maintain the rendering engine; they'll use whatever is available, since it cannot interfere with privacy. And using it as an external component is different from forking it, even if, I guess, that's more of a nuance.
-
I think the FSF's attitude is EXACTLY what is needed and what they should do. No cracks in the armor, as you say. It protects us from getting complacent and "slowly boiled". It protects against slippery slopes. It defines the gold standard, and it's very nice to see people catering to that despite the immense hurdles in doing so.
It really saddens me that many people regard the FSF as irrelevant or extremist just because they actually stand by their position and don't compromise on their ethics. They are important for showing what the gold standard is. It's up to you how much of it you want. In practical terms, for now, going 100% is very limiting. But that's the good thing: we know, we are aware of that! If you want to go full privacy and full freedom, you know what to do, you know how to get there, you know what you have to ditch. And I haven't heard of FSF-endorsed software that is actually non-free in any regard, so they are doing a good job there, as far as I know.
It also REALLY saddens me that some people think that endorsing the FSF somehow requires that you yourself, on all your computers, run 100% free software; they see the impracticality of it (like Richard Stallman going without a cellphone and running 15-year-old laptops) and promptly reject the idea in its entirety. When it should actually be taken as a sign that more work has to be done to make free software a decent alternative. You can run and use whatever you want; just try to help the idea (mind-share, testing, documentation and, of course, programming and more) move into a better place.
It's akin to someone seeing a poor, weak person, who is poor and weak through no fault of their own, and being disgusted and running away. That's not the right attitude; that person should be helped. Same with free software: it should be helped so it grows into a decent alternative.
-
I would argue that having processes that are tightly coupled is a direct violation of the UNIX philosophy. One of the main takeaways of "do ONE thing and do it WELL" is that a program can be used with other programs. That is, a "business" need can be met by simply running a specifically crafted chain of commands (programs). The "one thing" part helps ensure that, no matter the context, the program has no side/unwanted effects and doesn't waste resources on something you don't need. If you need something extra, it can be provided by another program or with an option.
However, if your program is tightly coupled with another, well, then you don't have one program that does one thing and does it well; you have 2 programs masked as one, so essentially one program that does two things.
Not being able to run a program with all its features standalone compromises its interoperability and its use for composing a bigger program. Which is exactly what the UNIX philosophy stands for: whatever bigger/more complex program you need, you can build it from small, dedicated programs. With systemd's dedicated programs you can reasonably build exactly one bigger program: systemd itself.
In conclusion, systemd is mostly a monolith. It gets away with it because it does its job pretty well; it does cover most of the needs people have. But that doesn't address the concerns about its future.
-
Awesome. Now that Theo switched to Zen, thanks to Firefox being so open and customizable, maybe we can have more contributions to Firefox, and somebody well-known and vocal who can trash the websites that DECIDE not to support Firefox for absolutely no good reason (which is easy to prove when you change the user agent to appear like Chrome and see that everything works perfectly).
Theo, here's the formal request to revisit Firefox (like, say, when version 135 arrives) and do a video on what it is missing, what is wrong, what is insufficient and so on.
For me it's still my browser, simply because it's fully open source and works well enough for me. On the developer side, the "Edit and Resend" feature in the network developer tools makes it better than Chromium-based browsers. Not to mention that on mobile, it's the only one that supports extensions.
-
@BrodieRobertson I get the "historic"(?) part of "tiling" = tiling window manager, but I don't think it should be a be-all end-all argument. Like Nicco said, while it's barebones, so to speak, or lacking, it's still tiling. Calling it that is the most descriptive way. I don't think a new term should be invented for this; I think the "in Linux, tiling means X" people should adjust. That's what makes the most sense to me, and what makes sense for the future (where, to a newcomer, all these terms should make sense). Eventually we could call this "manual tiling" versus "automatic tiling" and make sure to simply not use just "tiling".
-
@mrdot1126 Yeah, I concede, the price is pretty good. I'd still argue that it's a bit apples-to-oranges, since the tunnels presented here are really small.
On the car costs, I don't know... it does sound like the Teslas are better in every way, except that the subway takes, what, an order of magnitude less space ? And I don't know how much power it uses, but it surely isn't the same as 60 Model 3s or Ys, since the total weight is at least several times less. Oh, and it doesn't have heavy, polluting-to-make, fast-aging batteries.
No matter how you cut it, using cars only works if you have a very low to low volume of people to transport. In the case presented here, it works. But for a whole city ? Yeah, right. Nothing revolutionary here, sorry; this isn't the future of transportation.
-
I was thinking the same thing. Unfortunately, doing the right thing will be very expensive in terms of money, time, effort, etc.
I don't know how the justice systems everywhere have become more and more complicated, bloated to the extreme, such that only people with YEARS and YEARS of study can make sense of all the rules, and yet the result is that justice is a total joke. It does work in normal cases. But whenever it's a little entity vs a big, powerful entity, you have absolutely no guarantee of even a slightly fair and just trial, and what you do get is an overly long, time-consuming, MONEY-consuming one.
-
"Free to play games are complete exploitative garbage" I don't know, I wouldn't call NetHack that...
Jokes aside, yeah, all free things need to have a method to pay, a method to support. While I'm young I'm not able to contribute. Or those in poor countries. Now that I'm mid aged, have some disposable income and that I understand that things needs funds in order to have them stay nice, I can donate. Not to all of them, sadly, but, still, it can get sustainable in most situations.
-
@handlebar4520 The problem with your view/response/comment is twofold:
1. He used hormone levels AS A PLACEHOLDER, as a simple example; he didn't even remotely say "this is what we need to do". How can you miss that ?
2. What he's proposing totally isn't an Elo system. In an Elo system you gain points for winning (or placing high, in this case). It is based on the RESULTS of your matches/runs. Totally not what he said. He even gave the simplest example you can have: weight classes. In a weight-class system, someone at 100 lbs will NEVER compete vs a 200 lbs person. That's not how it works; there are clearly defined brackets/segments. With an Elo system, a 100 lbs person might get to compete vs a 200 lbs person and forever be jumping between, say, 180 and 200 lbs competitors. Which, as you say, IS wrong, but, again, what he said totally IS NOT that.
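The distinction between the two systems can be sketched in a few lines. The bracket limits and the K-factor below are illustrative numbers I picked, not anything from the actual proposal; the Elo update is the standard expected-score formula.

```python
def weight_bracket(weight_lbs):
    """Fixed classes: you only ever face people inside your own bracket."""
    for limit in (125, 155, 185, 205):   # illustrative class limits
        if weight_lbs <= limit:
            return limit
    return "heavyweight"

def elo_update(rating, opponent_rating, won, k=32):
    """Results-based: your rating drifts with wins and losses; no brackets exist."""
    expected = 1 / (1 + 10 ** ((opponent_rating - rating) / 400))
    return rating + k * ((1 if won else 0) - expected)

# A 100 lbs and a 200 lbs competitor land in different brackets and never meet:
print(weight_bracket(100), weight_bracket(200))
# Whereas under Elo, only your results move you around:
print(round(elo_update(1500, 1500, won=True)))   # 1516 after beating an equal
```

Brackets partition by a measured attribute up front; Elo only ever reacts to outcomes. That's the whole difference being argued about.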
I appreciate the percentages; you seem to know some details about the topic. I'm genuinely and unironically baffled at how you missed the point so... completely. I can't think of an easier way to explain it than he did (for something short and on the spot).
-
While I get the frustration, and the communication certainly could be better, I also have to point out that it's good that Wayland's developers demand very specific examples and refuse to simply implement something on a simple request. It's like a high barrier to entry (frustratingly so at times) that does, in its way, allow for proper thought on how things should be standardised and implemented. And with the requirements made clear, it also kind of documents why X or Y is the way it is, and why it exists.
So I kind of support their way of asking about the problem until Every Single Detail is laid out. It does help with building the proper future, so to speak. The "trust the users" approach, even when those users are hand-selected to be trustworthy, does not fly with me. They can be very thoughtful and well-motivated, but going into ALL the details upfront allows for improvements. After all, Wayland is not about simply recreating X, but about something better: good, efficient and maintainable. For that you need to know exactly what's needed, exclude what's not needed, and have good documentation.
In the end, they're the ones who give their sign of approval, so it only makes sense that they ask for all the details upfront. And the devil is in the details; never underestimate the ability of a small detail to ruin everything.
-
Stallman also doesn't use it, on principle. I've never understood, and it pains me, the people who saw Stallman using really old hardware, having a basically severely handicapped experience, and then thought to themselves "hmm, nope, I don't want to do that, no FOSS for me". Like, nobody is asking you to go to the same lengths as Stallman; he's just showcasing just how bad a situation we're in. If anything, it should be more of a reason to start contributing to FOSS one way or another, so that if people get into a situation where they HAVE to go full FOSS, they won't be so far behind in features.
-
Well, Thunderf00t did mention a bit about that.
So, you need roughly 1 MWh of energy for a full charge of the Semi.
And in decently good conditions, you can count on an average of about 10 hours of peak time for solar panels per day.
So to get 1 MWh of energy in a day, you need 100 kW worth of solar panels, which over 10 hours (well, over more than that, but on average) will gather 1 MWh.
Solar panels are usually 20 to 22-something percent efficient. That is, they capture about 20-something percent of the incoming solar energy, which is around 1 kW per square meter on a normal bright sunny day. It can go a bit above that, though not by much. And when it's very bright, you usually have the problem of cooling the panels; their efficiency drops when they overheat.
So, let's assume the panels are at 25% efficiency. That means 1 square meter can generate 250 W, so in one hour it will gather 250 Wh of energy.
Simply put, 4 square meters make 1 kW.
To get to the 100 kW we need, we simply multiply the above figure by 100. So a megacharger needs, *gasp*, 400 square meters of good solar panels, running in good conditions, which California does have, but the northern part of the USA and most of Europe (or the rest of the world, really) do not.
Now, a typical solar panel is bigger than a square meter but smaller than 2. So the number of typical panels would be between 200 and 400, with a strong bias toward around 300, I'd say. Actually, if the conditions aren't that great, or the efficiency is below the 25% I used... well, then 400 might actually be the more realistic number.
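The whole back-of-the-envelope chain above fits in a few lines, using the same assumed inputs (1 MWh per charge, 10 peak-sun hours, 1 kW/m² insolation, optimistic 25% efficiency):

```python
energy_needed_wh = 1_000_000                       # ~1 MWh per full Semi charge
peak_sun_hours = 10                                # generous daily average assumed above
array_power_w = energy_needed_wh / peak_sun_hours  # 100 kW of panels required

irradiance_w_m2 = 1_000                            # bright-sun insolation, W per m^2
efficiency = 0.25                                  # optimistic panel efficiency
w_per_m2 = irradiance_w_m2 * efficiency            # 250 W generated per m^2

area_m2 = array_power_w / w_per_m2
print(area_m2)                                     # 400.0 m^2 of panels per charger
```

Halve the peak-sun hours (a cloudier climate) or drop the efficiency to a realistic 20% and the area scales up proportionally, which is why 400 m² is really the floor, not the ceiling.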
-
I'll sound very disrespectful, but this kind of review (I know this is a refresh; it doesn't matter this time) is... not good for this type of CPU.
First thing: too many gaming benchmarks. It's a waste of time for everybody. Not even streamers should look at this CPU. Gaming benchmarks for this kind of CPU should number between none and at most 2 games, with several seconds of airtime.
The other thing: the productivity benchmarks... are too few. This kind of CPU is rarely for one person, and rendering jobs are not everything. This kind of CPU is mostly used in servers. Besides rendering, there are also databases of many kinds, applications, web servers and, above all... virtual machines. That Photoshop score was kind of meh. But how does it do with 6 Photoshop instances at once, each in a different virtual machine ? How about 7 ? Or 8 ? How do the Threadripper or the R9 3950X fare at this ? How many queries per second can it do ? Requests per second ? How much RAM can it take ? How well does it run a special algorithm ? Or another algorithm, but in 50 instances ? Or Docker container farms ? In this video, the W-3175X comfortably won the 7-Zip compression benchmark. How many other applications/workloads does it win in ? Probably not many, if any. But we don't know. And this video sheds way too little light.
If you factor in all the things said above, you start to realize that this kind of review misses the point for this kind of CPU. It spends too much time on benchmarks that are not relevant, misses a lot of benchmarks and workloads that are relevant, and, I guess, also speaks to the wrong audience. All in all, I think this is mostly a time-waster. The folks at the big corporations that buy these CPUs don't decide based on this review. And those of us who watch it never buy something like this.
-
Call me pedantic, but I think it would be best to have a fork with the current approach of including small, required proprietary blobs, just called something else, like Chadboot. And have Libreboot be... you know... libre ? Without any asterisks. Fully, 100% free, like it was before November 2022. And have them easy to tell apart, like "oh, Libreboot is just Chadboot without any proprietary code". It's basically like it is now already, but under two different names, for different purposes and less confusion.
Otherwise, I'm happy both that a) they added the proprietary stuff to make it usable on many more systems and b) that they still give you the option of running 100% free software.
-
It is glorified RDP hardware.
And 2 x 4K doesn't need THAT much. You can have good 1080p quality at 10 Mbit/s. 4K is 4 times the pixel count of 1080p, so AT MOST it would need 40 Mbit/s. I say at most because the video streaming will (of course) be compressed (probably AV1), and if the video is not very "busy" in the amount of fine detail, the extra pixels won't add much once compressed.
So 2 x 4K would be 80 Mbit/s. Let's round it to 100 Mbit/s and have them at 60 FPS. And no, doubling the framerate doesn't double the size, again because of compression.
Of course, more is better. But I wouldn't be surprised if it were actually totally fine at 100 Mbit/s. Normal applications have a lot of areas that are basic and simple. Like margins on a website or in a Word document: those compress very easily. Or the background color of Excel cells: easy to compress too, and they also don't move much, so it might look artifact-y only when you scroll but crystal clear when you're just typing and 95% of the screen is static. The OS taskbar and the menus at the top of the app(s) are also very static, so in reality it's as if you have fewer pixels.
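The estimate above is just a chain of multiplications, so here it is spelled out (the 10 Mbit/s baseline for good 1080p is the assumption everything else hangs on):

```python
mbps_1080p = 10                  # assumed bitrate for a good-looking 1080p stream
mbps_4k = mbps_1080p * 4         # 4x the pixels of 1080p, worst-case scaling
mbps_two_4k = mbps_4k * 2        # two 4K monitors

print(mbps_two_4k)               # 80 Mbit/s, rounded up to 100 for headroom
```

In practice the worst-case linear scaling overestimates: the codec spends bits on detail and motion, not on raw pixel count, so largely static desktop content comes in well under this.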
-
@MichaelZenkay That is a good argument, but the numbers still matter a lot. Using more energy for something does, in the end, mean more pollution, since the energy itself, even if solar or wind, pollutes (through manufacturing, repair and then replacement). But how much more pollution remains to be seen. Like mjc0961 said, the difference might be easily offset by simply reusing the bottle. If it breaks even at something like 2, 3, 5 or 10 reuses, then I'd say it's the better option. 10 to 20, I'd say it's questionable, and whether it's feasible would depend on context. If it needs even more reuses than that, then yes, just using plastic is actually better (and also more convenient).
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
7
-
6
-
6
-
Until snaps can have custom repositories, people complaining about them will never stop, even if Canonical stays spotless for, say, 10 years. And while I don't actively complain, I support the camp that does.
Really, having the backend closed source is simply a risk. A risk that at one point it will be gone, or will turn for the worse. It might not happen, but no one can guarantee it won't. There are many people who will happily use it, and if it goes to the doodoo, simply switch to another package format. But other people prefer stability. That is, they prefer to settle on the better (preferably best) format and then never have to deal with it again.
Canonical really is wrong with this approach. For snaps' sake, I hope that they either make the backend open source too, or someone creates a custom backend for a snap repository so people can bypass Canonical's nonsense.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
@Ayy Leeuz I think you confuse "tool" with "abstraction". When you said: "that is not the purpose of abstraction at all. abstraction is not removed from actuality so you don't need to understand how it works in actuality, nonsense. abstraction is a means of simplifying for those who understand"
That is totally wrong. That is what a tool (or, in broader computing terms, a library) does - it lets you apply your knowledge much more easily, without going into the details, but you have to understand how it works and what it does.
An abstraction just gives you the ability to use a concept without knowing its implementation details - including not knowing them at all. Of course, it should still let you know its limits and side effects, if there are any. And because all abstractions have limits, eventually you will learn how they work, when you need something better/different.
For example: can someone learning JavaScript get good at it, be productive, and produce good code without knowing what pointers are, or basic data structures like linked lists? Absolutely! Of course, at some point they should learn about them, or it would at least be good to. Say they start with frontend development, move into backend development, start using databases, and at some point need to optimize a query and so learn that there are hash-based indexes and btree-based indexes. Because the language (JS) abstracted away memory management, it lowered the barrier to entry, but that doesn't mean those who use it are useless, or that whatever they code (until they learn how everything works from the transistor level up) is bad or garbage. It just means they're limited in what they can do. And that "limited" is still quite sought-after and useful these days, when there are a lot of programs, apps, APIs and so on to be built. Blame it on the hardware, which allows a program literally 10000x slower than a decently optimized one to still be useful.
Overall I think you're wrong, and I agree with Grog. I hope you never used malloc until you properly learned what it does, otherwise you're a bad programmer too and should quit the industry. See how ridiculous that is? You're basically saying that everybody should know assembly before they can use any other language. Insane!
6
-
6
-
6
-
@alittlelooney5361 How can you not understand the hate? It's been 11 months of only the X570 chipset supporting Zen 2 and Zen 3, with everybody (very reasonably) believing that B450 boards would support Zen 3 too, especially since B550 was nowhere to be seen. If someone bought a Ryzen 5 3600 two months ago, planning to upgrade to a Ryzen 7 4700 with no need for PCI-E 4.0, tell me how they could know they DO need X570? Or why they would spend the extra money on an X570 for zero benefit?
AMD delayed B550 by one entire year, and they never said anything about Zen 3 only working on 500-series chipsets. If they had either a) announced last year that Zen 3 needs a 5xx chipset or b) released B550 when Zen 2 launched, this wouldn't be such an issue now. But, as it stands, AMD just s****ed on the plans and decisions of a lot of their customers, including people who actually paid attention to what they were buying. AMD deserves most of the hate they're getting now.
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
6
-
I used to like JavaScript (dynamic language, first-class functions, prototype-based OOP) but I didn't like all the frameworks that emerged from it. jQuery is truly the definition of good library design, as evidenced by its staying power with so few changes, while things like Angular have to keep saying "yeah, the old version was bad, but this new one is good". Anyway, bad comparison, framework vs library.
What I still don't understand, probably because I don't program much in JS nowadays, is why people didn't take advantage of the awesome prototypal inheritance but insisted on going back to a class system. Yeah, it can be done (because prototype-based OOP is awesome), but it's like "nah, these round wheels are too dangerous, we must get back to square wheels".
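For anyone who only ever saw the `class` syntax, this is what direct prototype delegation looks like (a minimal sketch, not tied to any framework; the object names are made up for illustration):

```typescript
// Prototype-based OOP: objects delegate directly to other objects
// at runtime - no class declaration involved.
const animal = {
  name: "generic animal",
  describe(): string {
    return `I am ${this.name}`;
  },
};

// dog's prototype IS the animal object itself, not a class.
const dog = Object.create(animal);
dog.name = "Rex";

console.log(dog.describe());                        // "I am Rex" - found via the prototype chain
console.log(Object.getPrototypeOf(dog) === animal); // true
```

The `class` keyword added later to JS is sugar over exactly this mechanism, which is part of the comment's point: the round wheels were already there.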
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I disagree with the ranking. Yeah, if you already know what to expect and what you have to do, it is easy.
But in some cases - take Gentoo - you do have to learn quite a few things for it to work. And as nice as the wiki is, there are still things you kind of need to know beforehand, otherwise you'll be in for a bad time. Like someone mentioned, when it breaks... well, it's not so easy anymore, now, is it? And just the sheer amount you need to learn - what packages do what, what an OS requires in general and so on - that's objectively difficult.
Of course, there is such a thing as something being difficult no matter how much you know. I'd say that barely applies here at all. Everything related to installing and maintaining an OS is knowledge-based. There's no real-time dexterity or attention/observation/perception contest where you lose if you don't react to an event within 1 second.
Back to the Gentoo example. Take people who a) haven't used Linux before or don't know much about it and b) are not developers. If you tasked them with installing Gentoo (as their first Linux install), I bet more than 99% won't be able to do it in less than 24 hours, even on a computer where compiling everything needed takes under 4 hours. They'll have A LOT to learn. Well, maybe some will manage faster, if they skip the documentation and all the code examples happen to work.
If you take out the "time to learn" parameter from the equation, most things in life become easy.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
@barongerhardt There are times where you normally use feet but have other things expressed as "half a mile" - which is 2640 feet. Oh yeah, I forgot about yards. Those I haven't used, or been forced to use, much.
Anyway, age is not exactly an argument. I was using it more as mockery: Neanderthal = uses feet, contemporary man = uses meters.
First, using "foot" as a measure does date from around the Bronze Age. Of course it wasn't the exact foot we have now, but the idea is the same, just that now we have tools and conventions so it's the same everywhere (well, almost everywhere - that's how the "international foot" came to be a thing).
The meter is still superior in that it was designed from the start not to be something as subjective as a human foot or cubit or anything else human-related. First it was based on Earth's size, and quite soon a reference bar was created.
But to get back to in, ft, yd, miles, pounds, ounces, gallons and the rest: the real benefit of the metric system is indeed that it's all in base 10. You might barely, if ever, need to care about mm in the same sentence as Mm (1000 km), but it still matters immensely, because small units used here interact with medium units elsewhere, which eventually matter for big units somewhere else. Not having headaches converting is very useful, both in time saved and in fewer chances of mistakes. Basically, context never matters, since it's so easy to convert from one size to another.
Lastly, indeed, our current languages aren't necessarily better just because they are newer. On that note, I do regret that many new things shed some old, useful things, just because people feel they won't need them or because they're expensive (like having a phone that can be used for 50 years, unlike what we have now with smartphones).
However, I do have to point out that using NAMES in Latin has nothing to do with the language being good. Speaking Latin is an ENTIRELY different thing. And no, basically nobody (including scientists) speaks Latin, which you could say is counter-productive. I understand your argument, but your example here is really not good.
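The conversion point is easy to demonstrate with the standard unit definitions (the figures below are those definitions, nothing from this thread):

```typescript
// Metric: any unit change is a power of ten - no table lookup needed.
const mPerKm = 1000;
const mmPerM = 1000;
const mmPerKm = mPerKm * mmPerM;                // 1,000,000

// Imperial: every step has its own memorized factor.
const inchesPerFoot = 12;
const feetPerYard = 3;
const yardsPerMile = 1760;
const feetPerMile = feetPerYard * yardsPerMile; // 5280
const halfMileInFeet = feetPerMile / 2;         // 2640, the figure quoted above
```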
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
@sammiller6631 Wrong. The direction we're talking about is something objective. When they say that something is free software, it means the code for the software is available, not just binaries. That's not faith, it's not zealotry, it's the objective truth.
Now, you can say they are inquisition-level zealous because they don't compromise - say, for practicality. If the only fully free-software laptop is one from 17 years ago, then so be it. I like and respect that.
Because (here's the big thing) I can always choose how much poison I take. That is, whether I go full free software or only partially, and how much. And this differs from person to person; everybody has different tolerances and needs, so what's practically free software for one might be too compromised and unsafe for others, and vice versa.
That's why having the FSF tied to objectively full free software is very valuable, even if it's impractical (for now). I also value that it shows us how far we are from that ideal standard - even when that distance is sooo big that they actually get blamed for it.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
I think that Linus is OK with the English way of pronouncing Linus because he's usually very pragmatic. It's close enough to be clear it's his name, and the way people pronounce things certainly can't be changed overnight - people will still say it the English way. It would be a pointless debate.
Out of respect, unless the person says otherwise, I always think pronouncing a name as in its native language is the polite way. Of course, in practice I don't always get to do that - sometimes I'm not even aware it's from a different language - but I do think that's the most polite, most respectful way. I would go out of my way to learn and use this "original" pronunciation for people like Linus (well, it's easy here anyway, no real effort required) or, to give another easy example, Michael Schumacher. While we're at it, it's also a sign of respect (some would even call it basic decency) to use diacritics where required.
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
5
-
4
-
4
-
4
-
It gets to me when we have specific words with specific meanings and those get ignored.
Case in point: having EXPERIMENTAL code. What more do you need to be told not to use this in production, and that if you do, you do it at your own risk? Experimental code exists so you know what's coming, so you can work with it and test it - not ship it to production.
To the folks saying "well, they shipped it, so it's their responsibility" - I disagree, at least in this particular case. Nginx is open source and you can compile it without HTTP/3 (omit --with-http_v3_module), so you can have it without the experimental code at all - removed (not just disabled) at the binary level. So you can VERY EASILY be shielded from "oh, someone just flipped a flag, by mistake or intent". Also, F5's point was that they know of people/projects/companies that USE it, not that they simply might have it in the binary.
I understand F5's position here and I can't really blame them that much, but I do think it's just bad practice, it incentivises bad practice in general, and it should be avoided. So I'd be happier if they hadn't gone the CVE route and had used other means of communication instead.
4
-
4
-
This depends a lot on the hardware you have. For desktops and laptops, Linux is still behind in hardware support (especially peripherals) because the companies making them didn't bother to write drivers for Linux, and OF COURSE they don't open-source the Windows ones or offer schematics or documentation of any sort.
Also, even on Windows, the stability of the system is again determined by hardware. And user behaviour. I'm writing this right now from my almost 7-year-old laptop that I've used DAILY for both work and personal use (aka A LOT). It still has the original Windows 10 install, except that at one point I upgraded to the Pro version to get rid of those F^#^&$#ING automatic updates (best $11 ever spent). In almost 7 years I've had 4 (four) blue screens of death. And after I configured it to update when I say so, not when it wants, I can get really high uptimes (I usually aim for between 1 and 2 months, but the longest I got was exactly 100 days. It still ran fine, but I went immediately to update it, so as not to risk being the idiot who got hit by ransomware)
Still, if you get a new laptop or premade desktop, you can check System76 or Tuxedo or Framework to get one that's guaranteed to work flawlessly with Linux. And even without that, Linux is evolving fast, so your experience from 1 year ago can be drastically improved. It's still not guaranteed. But in, say, at most 5 years, I think that more than 99% of people would be perfectly served by Linux. By then Wayland and HDR will be mainstream and mature, the GPU drivers will be perfect for all 3 vendors, the anti-cheat systems in games should no longer be a problem, all except the most obscure games should be playable, everything except Adobe software should also work with no hassle and drivers for over 99% of components and peripherals should be available.
4
-
4
-
4
-
4
-
@SisypheanRoller Damn it, if my net hadn't dropped at exactly the wrong time, I would've posted this hours ago and the many replies I see now would've been... better.
So, regarding the monolithic part - the number of binaries is indeed not relevant (though it's often easy to tell at a glance). The point is the coupling. If you have one giant binary, or one main binary and another 10 binaries with a hard dependency on them (or even just one of them), then you have a monolithic program.
In our case (unless it has changed recently, I haven't checked) journald is a prime example. It is a component of systemd that cannot be (safely) removed or replaced. It is a separate binary from systemd, but because of the hard coupling, it is effectively part of systemd.
To systemd's credit, the number of binaries that have stable APIs and can safely be swapped for 3rd-party binaries has increased over the years. One can hope that will eventually be the case for all of them, and that everyone will then be able to use as much of systemd as they need and replace anything they don't like.
Getting back to the UNIX philosophy of "do one thing and do it well": unfortunately many, many people don't understand it and spew bullsh!t about it being outdated, or other such nonsense.
The idea is that such programs (tools, or in a broader sense, infrastructure) should do one thing and do it well in order to enable effective interoperability - so the program can be easily and effectively used from scripts or by other programs.
Since you mentioned it: the "one thing" is not important. It can be anything, as long as a) it is complete enough that in most cases it can be used alone, and b) it is small enough that you don't have to disable a lot of it in normal cases, and that simply running it doesn't require significantly more CPU/memory than the typical use case actually needs.
This can be as simple as listing the contents of a directory (ls) or as involved as transcoding video and audio streams with many editing and export options (ffmpeg). Is ffmpeg massively complex? Yes! Do people complain that it violates the UNIX philosophy? Not to my knowledge. Why? Because you can use it effectively with the rest of the system; you can script around it. And it works well. OBS using it under the hood is a testament to that too.
Lastly, here's a practical example of why not following the UNIX philosophy is bad, which hopefully also responds to Great Cait's question of why the hate:
Search for CVE-2018-16865. It's a vulnerability found in journald several years ago and later fixed. The problem is that it's pretty high severity. And... you cannot simply disable or remove journald (or couldn't at the time). You can use rsyslog alongside journald, but because they made it so coupled, you literally cannot remove it and still have a working system. Imagine the stress levels of system administrators who found out they had a big security risk they couldn't disable/remove/replace - they just had to wait for an update.
That's the hate. Yeah, it works pretty well. But it's not perfect. And it's being shoved down our throats in a "take it all or leave it" manner, which is a slippery slope toward potentially big problems down the line - when everyone is using it and suddenly some massive vulnerability hits it, or Red Hat pushes something onto it that everybody hates, or things like that. And people will suddenly realize "oh, sheet, what do we do now, we have no alternative, we cannot change 70 programs overnight". And it's annoying, because we know how to do better. Hopefully it can change to be fully modular and non-monolithic, so something like what I wrote above cannot happen.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
While I support the idea, I don't think it relates to e-waste anymore. 32-bit systems are, by now, so old that they're basically unusable as a normal computer. If you value your time, that is. So I think 32-bit systems are mostly for niche segments now - enthusiasts and geeks keeping them for retro purposes. And, frankly, they could all simply keep using Debian 11. That's why I say e-waste is a non-argument in this case: these systems aren't being thrown away anyway, Debian 12 support or not.
And the machines that would be e-waste without proper software support are 64-bit by now.
Still, like I said, I really like the 32-bit support, and backwards support in general. Maybe you want to have a partition with a 32-bit OS+apps that you can run "natively", so to speak, while your main OS stays 64-bit.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I have two questions. Forgive me if they are extremely stupid.
1) Can you minimize a window in dwm? And in WMs in general (when using just them, without a DE)? I'm asking because I usually keep a lot of programs open (currently still on Windows): several terminals, Notepad++, 2-3 browsers (and Firefox and Chrome usually have 2+ windows open too), PHPStorm, Skype, Teams, file manager(s) and task manager. Some I don't REALLY need to keep open at all times, but I do like the instant switch.
2) Can you install and run something like Firefox without a DE and a WM? I assume that yes, you can, provided you have a... graphical system (??) like X? I might be confusing a lot of stuff here. I'm just curious what exactly is needed for what.
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
4
-
I don't think that the Notepad++ devs are hypocritical in the slightest. A few things to consider:
- are they owned by Microsoft? No
- do they have contracts with Microsoft? Pretty certain they don't
- Notepad++ is 17-year-old software that's much praised for its usefulness on Windows. If Microsoft NOW does something bad, what are the devs who worked on this project for 17 years supposed to do? Close it down? OK, it can be argued that this isn't the first shady thing Microsoft has done, but you get the idea
So, considering the above, I think the devs publicly distancing themselves from what Microsoft did is exactly what they should've done. Though I could also argue for not mixing products with ideology - I mean, raise your individual fork at Microsoft, but leave the Notepad++ project completely untouched. But getting back to it, I don't think they're hypocritical. You can certainly stay in a man's backyard and still fight him over things you don't approve of. You don't have to leave immediately. If the differences are many, clear and insurmountable, THEN you'll eventually have to leave.
4
-
4
-
4
-
4
-
@jaredsmith5826 Well, getting more serious now: it has actually improved a lot. Since PHP 7.0 (we're now at version 8.1), I'd say it's a decent language. Much more static typing, fewer bugs, fewer exotic behaviours and hidden errors kept around because 0.1% might want that feature, and modern features in general.
It has pretty regular releases, though it also followed the extremely annoying and stupid trend of supporting a version for only a couple of years; other than that, it's going well.
There's still some of that haystack-needle vs needle-haystack nonsense, because of backwards compatibility (ugly or not, there are millions of websites written in PHP, so backwards compatibility is very important).
I like it as a language because you can easily grow with it. You can start by simply having a really simple program with several functions and ifs and fors, fully procedural. And you can expand easily and at your own pace towards fully featured OOP style, with namespaces, interfaces and traits, with autoloading. You get the idea.
4
-
4
-
4
-
4
-
There is a video here on YouTube of a madman who, in 2018, installed the then-latest Gentoo on a 486.
And another video from this year, if I remember correctly, installing... Gentoo (of course it's Gentoo) on a 133 MHz Pentium.
And yeah, they boot really slowly - 5-10 minutes to boot, another 5 minutes just to shut down. Though you can actually connect to the internet and see or download stuff safely. But overall, there's little reason to run a modern kernel on them. Regarding the speed, though, the guy with the Pentium doesn't seem to have gone for full optimisation, so it might actually run much better.
I'm with Linus here too. Part of me hates seeing support like this dropped, but in reality it keeps the code more maintainable, without an everlasting list of things to check or keep compatible. And the devices that get dropped truly are obsolete and don't really benefit from having the latest kernel anyway.
Now please excuse me while I stash a nice whiskey bottle for 2032 when 586 support will be dropped.
4
-
3
-
Hey Louis, thanks and congrats for all the work you do, especially this thing: contributing for a common sense law to be passed, aka working for a better world.
Now, I feel the need to express some things:
1) Even though things like right to repair don't make much sense to consider on just a local scale, I do feel the senator is entitled to ask anyone where they're from, even if it sounds totally dumb. For all he knows, maybe the people of Nebraska don't actually want this law, yet somehow a lot of people from other states come to plead for it. Surely Nebraskans can be found to come and plead, so this is no longer an issue.
1.a) When the senator had nothing good to say about that satisfied customer from Nebraska... I see that as perfectly normal. He is not there to congratulate anybody. It's normal, given the time constraints, to ask/talk only about the things he doesn't like/know etc. I'd say, if all he had to raise was a stupid argument, then all the better, as it seems everything else was OK, and a stupid argument can be cleared up with ease.
2) In general, I think that for a law to be passed, or at least to pass further after this kind of hearing (I don't know exactly how it works in the US, I'm from Romania), the senators DO have to ensure all aspects are taken into account. Think of them as devil's advocates. However dumb a question might be, you guys should be prepared to answer it, so that the thing you're pleading for is, beyond any doubt, good/better for all people, especially law and politics people. Think of them as saying "OK, so you want this law that seems pretty common sense. But, you know, there are big companies (or anyone else for that matter) that might not want that, and we're not technical enough to call bullshit on their part. How will you tackle this?" I.e., it's your job to provide as much evidence as possible that this will have no secondary effects, unforeseen situations, abusable situations, effects on unrelated parties etc., and that the things affected are affected for a reason (the right to repair will lower Apple's income, but will give consumers their right of ownership over the bought part, or their human rights, dunno). It does sound a little like you'll have to do their work, but... such is life.
3) As AkolythArathok said in this comment section, there needs to be a more serious pose. Talking about the Repair Family is kind of distracting from the point as well. Or things like "hey, I have here a customer who is so happy, yay!". You actually did her job here by very clearly and briefly saying, on point: "we do data recovery that the customer has no option to get from the manufacturer, for any amount of money". That is how I think a point should be made.
All in all, it was kind of sad, but totally not surprising, to see this. And I have to congratulate you on your speech. It was very on point, with clear arguments and examples. Now all you have to do for next year is have everybody supporting this be as efficient and articulate as you :) And have everybody be able to totally demolish all the (dumb) counterarguments presented here. And, as you very well observed, have the lobbying happen prior to the talk. The talk is just a showcase.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
@spuriouseffect Sorry, you keep confusing government or state ownership with bureaucracy, and thinking that private = bureaucracy-free.
Which is totally wrong. They are very different things.
At a very basic level, having bureaucracy means having written records of rules and agreements. Any kind of contract, if it's written - BAM, there you go, bureaucracy. Does your fancy Sweden have nothing but verbal agreements between all the companies? I seriously doubt it.
Bureaucracy is often seen as negative when it's done inefficiently, which is very easy to do. The goal of bureaucracy is to have everything stated and recorded, so everything can be tracked and everybody can know and check the current or past rules and agreements, and their states at a given time. With a well-constructed bureaucracy, nobody can say "I don't know what I have to do, what my responsibilities are" or "I don't know if I'm allowed to do that" or "I don't know who made that or took that decision" and so on.
With that idea in mind, simple-minded people then think that everything has to be written down, and whenever something new comes along, one or more rules get created, without regard for the existing rules or the need to have the shortest, most efficient list of rules.
Cases where bureaucracy fosters corruption and nepotism are cases where people are using, or actively trying to create, a very heavy bureaucracy where it's impossible to track everything, so they can benefit from that. But it's not the idea of bureaucracy that's wrong; it's the fault of the people who created the system or let it get into that state.
It's like saying houses are useless - they break all the time, you need a lot of time and money for repairs, in winter you're cold, in summer you're sweating buckets, insects can fly inside, the walls get moldy etc. - so you conclude that houses suck and everybody should move into caves. Well, it's not the idea of having a house that's to blame there; it's how that particular house was built - it should have been built better.
3
-
3
-
3
-
On the Zen 5 rumor: another serial leaker (and while I don't always agree with what he says, his leaks are pretty solid) - the YT channel "Moore's Law is Dead" - specifically said that it's NOT 40% and that everybody is wrong on that. Per his sources, it's a 16-24% IPC increase, and clock speeds seem mostly similar. And I firmly believe him on this one.
A 40% increase is either a) very cherry-picked, b) totally wrong or c) a gigantic, almost unheard-of gen-to-gen improvement. The last time that happened was from Bulldozer to Zen 1, which was also 4 years, not just 2 - and that was starting from a rather bad architecture, while here the "previous gen" is already very good and advanced on all fronts. To put it shortly: 40% my @$$
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
It's been like that for many years already. He focuses and comments more on the future and the trends, not so much on particular stuff that's currently not working. He did that in the first editions, but it got boring and repetitive (for him) fast (mentioning every single year how bad games are on Linux, for example), so he shifted to more of a meta analysis.
While I wouldn't say no to an actual list/roundup of all the things still missing or broken in Linux, I appreciate this culture-and-trend analysis more, since the current state, and the near-to-medium-term risks, are less easy to observe and realize.
He is usually a bit more pessimistic in his views but, in a way, he's helping those things not get a chance to become reality, by outraging people who in turn do something about it before it's too late.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
Yeah, I'm in the same camp. I like (liked?) JS as a language, and realized I don't like any of the frameworks. Though, to be honest, I haven't used one in years, since I'm a backend dev. jQuery was/is the only well-built, well-designed piece of JS. And it's what it should be - just a library, not a framework.
I'm still baffled that some simple websites go to the trouble of having Angular or Vue or React. If you have A LOT of reactive (is this the word?) elements, then maybe. Something like a Facebook page, I guess, would be quite the work with only jQuery. But maan, there are a lot of websites that are over-engineered with these modern JS frameworks.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
I can't wait for a similar analysis on the Aptera car too!
Regarding the solar panels, IMO it's mostly a useless gimmick. OK, if you REALLY like traveling in remote areas (that somehow still have good roads), then it makes sense. But that's really, really niche. Otherwise - especially since we need so many solar panels globally to get rid of fossil fuels - any solar panel on a car is at least twice as productive sitting on a roof. Like a roof over a parking lot, charging a battery that some electric cars can recharge from while parked and the owner is busy. On a roof it can be in sunlight 100% of the time (clouds excepted) and it can be oriented directly towards the sun, or close to it if it's static. On a car, at least 1/3 of the panels are not well oriented - well, come to think of it, almost none are properly oriented, though usually most of them are almost decently so.
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
3
-
The honourable mention is Arch, buut
The esteemed, prestigious, renowned, distinguished, venerable mention is Gentoo. It still has 32-bit support. And you can trim it down more than any distro in DT's list, so even 30 (yes, thirty) year old computers can run it - as long as they have 32 MB of RAM and an Intel 486-compatible CPU. It won't be fast, though. A Pentium 3-tier CPU + about 2 GB of RAM is, I'd say, the minimum for a kind-of-modern way of using the computer for modern-day light activities. The downside is that Gentoo is totally not for Linux beginners. There's a lot to learn in order to customize and use it effectively.
While we're at it, Debian itself is a good option - since, you know, 80% of DT's list is based (directly or indirectly) on Debian. And, of course, Devuan too (maybe even better).
Last but not least, I'm pretty sure Slackware is also an excellent distro for this.
3
-
What does Servo add? Or, to ask differently, what is Firefox missing? Genuine question; I do use Firefox and I don't feel like it's slower or anything, at least not noticeably. And feature-wise I'd say it's about at parity with Chrome and Chromium, except for Google-specific things.
Though, come to think of it, I do remember WebRTC working worse in Firefox, while it was basically issue-free in Chrome. On the other hand, Firefox has better developer tools. Well, maybe not better overall, but it has 2 things that I do use that Chrome doesn't have: edit-and-resend for network requests (Chrome only has resend) and "skip this file" when debugging JavaScript.
3
-
Genuine question: is that "they can do that in a few lines in the server" actually a widespread, often-used thing that makes sense to live in the server FOR THE WHOLE ecosystem? Because, hypothetically, if the old architecture was stupid, with the server doing way too many things, growing unmaintainably huge and impossible to improve without breaking a lot of other things, then it makes sense for the replacement to have better separation and more thought put into what should go where.
I have no idea about your exact use case, but in general, "it used to be very easy to do and now it isn't" (from a development-effort point of view) is not enough of an argument in itself, if the new way is more maintainable and allows the ecosystem to evolve and be much better than the old way.
3
-
Lunar Lake is only half of next gen, meant for very efficient ultrabooks. The other half is Arrow Lake.
While I normally don't like soldered memory (or soldered anything) I'm very excited about Lunar Lake. It should be much more efficient, and it should FINALLY kill the gigantic horde of "bUt aRm iS sO mUcH mOrE eFfiCieNt, die x86 hurr durr".
Given proper controls, drivers and software (like Linux), it should finally be on par with Apple's M-powered laptops in its ability to be efficient (latest manufacturing node, memory on package, and no massively bloated OS + apps, like Windows). That is, it won't be dragged down by the non-ISA factors that make x86 seem less efficient than ARM. Well, I do think it is less efficient, but to a much smaller degree, like 5-10%, nothing massive and certainly not something to ditch x86 over. Hopefully with Lunar Lake we'll see the real difference.
3
-
What tylerdean said. It's a bit big, and it's hard to replace the parts of it you don't like. For example, check CVE-2018-16865. Basically, journald had a pretty big vulnerability. That's not the issue; it happens. The issue is that you could not (at that time, at least) disable or remove it. It was essential to the system, even though it shouldn't have been. Imagine being a system administrator, realizing there's a big vulnerability out there, and there's nothing you can do but wait for the patch to appear.
This monolithic design (and no, having 60-70 executables doesn't mean it's not monolithic) is the bad part about it. Or not following the UNIX philosophy, as some would say. Still, from what I remember, they started to modularise things, so most components could be used independently or swapped with something else. I haven't checked in almost a year, so I don't know if it's there yet. If it gets there, then the hate towards it should dissipate too. At least my gripe with it.
3
-
Holy crap! An entire 54 reply-long thread and all but one of the comments are totally missing the point!
@Torgo I'll answer you, since at least you honestly stated your opinion, argued for it and presented it in a non-flaming way.
So, the most important thing: free software DOES NOT mean the software costs $0. This is a problem with the label; I really wish Stallman would make an effort to name it otherwise, so there's less confusion about this.
What RMS is advocating for is the freedom to see/check, modify and (re)distribute the (modified or not) software (though redistribution of paid software is kind of weird). So, just like after you buy a table you are free to check how it was made, and you can modify it and even sell it, RMS wants to be able to do the same things with software. Theoretically you could check a program by disassembling it, but the time investment for that is massive. And in most cases it's also illegal.
As a side topic, the means of revenue from software in these cases is complicated. Nothing would prevent someone from buying a piece of software, putting it on Pirate Bay and everyone else getting it for free.
For games, I guess one way of doing it would be like John Carmack/id Software did: make it closed source and release it as open source several years later. But you still have the privacy concern.
Highly specialized software (websites included) is usually already made for a specific company, and the company (aka the customer) receives the source code too, so there is no problem there.
Getting back to Stallman, another thing is that (from what I've seen) he's not advocating that others should do like him and impose such enormous restrictions on themselves just to be free. He is just showing what lengths he (and anyone else) has to go to in order to be mostly free. Just so you can see how bad the situation is. So you shouldn't think about how impossible and impractical (paranoid, if you like) it would be to do the same, but about how good it would be for more and more people to work, collaborate and contribute to free software, so that being free is easier and more of a sane/easy choice.
2
-
+Ben Wagner Besides what Lucas Bons said already, I think I can give another idea of how it is.
Basically, every attack or roll/jump you make can't be stopped. So, if you attack an enemy and realize you'll miss... you can't cancel it or move away. You have to finish your move. Which might even take longer if you don't hit anything (with big, heavy weapons, like a halberd). In this time the enemies can have all the fun with you.
And vice versa: once they've started an attack, they can't interrupt it or move while attacking, giving you an opportunity to hit them if you dodged the attack. That's why it's said to be fair.
You also really have to learn how each enemy likes to attack/fight. Even if you're high level and you go to beat some monster/boss and just attack him straight away, you'll die without killing him (or without taking more than 10% of the boss's life) 10/10 times. While if you know what to do and how, you can beat them 10/10 times.
So, if you have the patience to learn, and the patience to attack when you should... it's not that hard really.
It also feels very rewarding to finally learn and beat some monster/boss, after you got mercilessly hacked into pieces dozens of times. Also finding the numerous secrets. This is one of the reasons it's a great game.
2
-
@Tiger White - Please, don't say that X has better build quality than W, Y and Z combined. If that were really true, W, Y and Z wouldn't exist anymore. Debatable, perhaps, but people are not THAT stupid.
I have a Dell Inspiron laptop from 2007 too, and it still works quite well. Though I can't really tell how well, since I don't use it anymore; it's more of a backup in case my main laptop dies suddenly. However, TWO friends bought the next Inspiron series, from 2008, and it was shit. Both had non-trivial problems. So I guess 2007 was a good year for them, while 2008 was a bad one. And I think this applies to all companies: they have an average build quality, but the REALLY GOOD ones are usually a specific series/model.
2
-
@DimitrisKanakis "It s still unclear where the power moving the system forward is generated"
I have some doubts too, but it's all from the wind. Let me see if I can make an argument:
Let's say we have a 10 m/s wind.
The thing is that the wind is so strong that it will push the cart no matter if it weighs 10 kg or 100 kg. In that sense, we can say that its force and power are uncapped for our purposes (since we don't know at which weight it would fail to push the cart to 10 m/s). But even if the cart weighs only 0.1 grams, the wind will only transfer enough power to get it to 10 m/s. In all cases it imposes a 10 m/s^2 acceleration, but the mass differs, so the force differs.
Now, because of that, the cart can normally only use (is this the correct term?) the power required for its own weight. Let's assume it has 10 kg, and that it's on perfect ice (no friction with the ground). Pushing 10 kg at 10 m/s with a 10 m/s^2 acceleration directed forward needs a power of m * a * v = 10 * 10 * 10 = 1000 kg * m^2 / s^3, which is 1000 W.
And that is the main thing. At the wheel level they have the same power, since they are forced to move with the cart at the 10 m/s velocity, and the torque is... I guess, the same force as the cart's. But the wheels are also connected to the propeller, so they transfer (for simplicity) all the 1000 W to it. But the propeller is a bigger "wheel", so while it has the same power and force, because it's bigger it will result in lower speed. It's the torque formula that determines the correlation between force, radius and power. Basically, at a given, constant power, the bigger the radius, the smaller the force, and vice versa. And likewise, at a given, constant force, the bigger the radius, the smaller the power, since a bigger radius means lower velocity. And because friction is a thing, at a low enough force the thing won't rotate.
In a way, I think the propeller acts as extra weight, extra friction resistance, because it's connected to the wheels, which "saps" more power from the wind, which in turn is used to propel the cart faster. So of the initial 1000 W that I mentioned, some will be used by the propeller, leaving less power for the wheels. But the wind will compensate with extra power, so the cart still has a 10 m/s^2 acceleration forward. So now (well, at the same time) the cart receives power both for moving its wheels and for the propeller. So you can say that the wind provides the cart with, say, 1500 W, of which 1000 W are used by the wheels to push the cart forward and 500 W by the propeller. Hope it makes sense.
There's also the issue that, if the cart is faster than the wind, then theoretically it doesn't receive any power from the wind. I'm still not sure how to explain that. In a way I can say that the propeller and the wind, combined, are pushing the cart. But then again, if the cart has been faster than the wind for some time, then it should run only on the propeller, and maybe inertia... dunno.
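The 1000 W figure above follows directly from P = F * v = m * a * v. A tiny sketch with the same idealized numbers from the comment:

```python
# Power needed to accelerate the cart, using the comment's own (idealized)
# numbers: a 10 kg cart, the assumed 10 m/s^2 acceleration, 10 m/s velocity.
mass = 10.0          # kg
acceleration = 10.0  # m/s^2 (the comment's simplifying assumption)
velocity = 10.0      # m/s (the wind speed)

force = mass * acceleration   # N
power = force * velocity      # W  (P = F * v = m * a * v)

print(f"force = {force:.0f} N, power = {power:.0f} W")  # force = 100 N, power = 1000 W
```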
2
-
Nice, thanks for the info! Some others pointed out that those barriers are actually heavier. I have no idea and I'm too lazy to investigate myself, but it's possible that the cargo is closer to 20 tons. On that note, if the battery itself adds at most 6 tons over a diesel engine (7 tons of battery, 1 ton less in engine + extras), why would it carry 10-14 tons less? It doesn't make much sense. So I don't think the drive and steer weigh 60,000+ lbs.
Also, please split your text a bit. It's a wall of text right now. Readable, but not comfortably so once you're past the 2nd row.
2
-
Yup. Same idea as living paycheck to paycheck. The one time you don't get it, or it's late, or it's smaller, or your needs increase (any kind of emergency), you might not be able to pay for other things, creating problems for yourself or for others. Having money sit unspent, depreciating because of inflation, is less efficient, but the problems you can avoid by having some extra savings can be worth hundreds of years of the inflation depreciation on a 1-3 paycheck buffer.
In the end, if you have a three-digit IQ and realize that problems ALWAYS arise (not "if" but "when"), the long-term benefits of having some savings (others might call it "insurance") far outweigh the benefits of running fully lean.
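A back-of-the-envelope version of that trade-off, with entirely made-up numbers for the paycheck, inflation rate and emergency cost:

```python
# Cost of holding a 3-paycheck buffer as idle cash vs. the cost of one
# uncovered emergency. All figures below are illustrative assumptions.
paycheck = 3000            # assumed monthly pay
buffer = 3 * paycheck      # the emergency fund
inflation = 0.02           # assumed yearly purchasing-power loss on idle cash

yearly_inflation_cost = buffer * inflation   # "cost" of keeping the buffer
emergency_cost = 20_000                      # assumed one-off emergency

years_to_break_even = emergency_cost / yearly_inflation_cost
print(f"one emergency ~= {years_to_break_even:.0f} years of inflation drag")
```

With these placeholder numbers, a single uncovered emergency costs on the order of a century's worth of the buffer's inflation losses, which is the comment's point.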
A bit off-topic, it's so nice to see that the individual (or bot) who keeps spamming about a certain person related to a certain religion is completely ignored. I feel like I'm in an elevated realm, where all people are smarter or have more wisdom. While I'm here, I hope everybody has a great day!
2
-
@davidereverberi5279 You didn't understand what I wrote. It's because Skylake (and earlier) doesn't have the hardware mitigations that Comet Lake has, which makes their presence kind of an IPC increase: the mitigations are a much smaller penalty to IPC on Comet Lake than they are on Skylake.
Next, you have to add that on top of Skylake's IPC improvement over Haswell. Here's a review from when the 6700K was fresh: https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation
You can see there, near the bottom, that it reports 5.7% IPC over Haswell for Skylake with DDR3, and a 6% IPC increase for Skylake on DDR4. That was quite low, buuut that was with DDR4 at 2133 MT/s, the lowest, most basic DDR4. Which was normal back then, as it was new. Now, with 3200-4400 MT/s being the norm, with nice timings, that should easily be worth at least another 5%.
All three combined should add up to about 20%. That is, from Haswell to Comet Lake.
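Since percentage gains compound multiplicatively rather than additively, here's a quick sketch. The 6% and 5% figures come from the comment above; the 7% mitigation advantage is purely my own placeholder assumption to illustrate the compounding:

```python
# Compound IPC-like gains multiply rather than add.
skylake_over_haswell = 1.06   # Skylake over Haswell on DDR4 (per the comment)
faster_ddr4 = 1.05            # assumed gain from 3200+ MT/s memory vs 2133 MT/s
mitigation_advantage = 1.07   # ASSUMED placeholder: Comet Lake's lighter mitigation hit

total = skylake_over_haswell * faster_ddr4 * mitigation_advantage
print(f"combined gain: {(total - 1) * 100:.1f}%")  # in the ballpark of the ~20% claimed
```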
2
-
Damn, I'm late to the party. This was a really nice showing, and I like Thomas and Sono.
However, I do have a critical point to raise: why put a solar panel on a car and not on a rooftop? For the short to mid term, until we reach some saturation of solar roofs, wouldn't that panel generate more on a roof, and thus help more with CO2 and climate change? Say, if it's put over a parking lot, providing shade to the cars below, it can have near-peak performance all year. On a car, especially one like in this video, it can generate 0 W for days, depending on where it's parked.
2
-
Here I place a formal request for Theo to STOP DOING MULTIPLE MULTIPLICATIONS AT ONCE. And, in general, to STOP RUSHING over delicate parts like that.
The rather large and significant math mistake at the end would've been TRIVIALLY avoided by not rushing so much and doing the math in multiple steps. Like first stating how many requests there are per minute, then per hour, per day, then per month. Or at least something like 3600 seconds per hour and 720 hours per month, to keep things simple. I really mean it that it was a rush, as what I requested above takes SECONDS extra to write. Don't you feel dumb for making a big mistake because you wanted to save, say, 1 minute out of a 74-minute video, kinda spoiling it at the end?
Also, Theo, please put the multiplication in the written part too. If you miss something, you AND WE can see it there much more easily and don't have to search the calculator app's history. Not to mention that you switch things so fast there, it's very hard to keep up and spot the mistake in real time (that is, without pausing); it almost looks like you WANT to hide something (which I know you don't, I'm just stating how bad this is).
After all, it was stated that serverless is more expensive and that the cost difference might not matter that much anyway, so the overall conclusion still kind of stands, but it's still so SOO frustrating to see (not for the first time) such glaring mistakes, which might also make someone less knowledgeable really question the video and its conclusion. For something so easily avoided. STOP RUSHIIIIIIING!
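The step-by-step approach argued for above looks like this in code. The 100 requests/second rate is a made-up example; the point is that every intermediate value is visible and can be sanity-checked on its own:

```python
# Rate conversion done one unit at a time, so each step can be eyeballed.
requests_per_second = 100                         # hypothetical example rate
requests_per_minute = requests_per_second * 60
requests_per_hour = requests_per_minute * 60      # 3600 seconds per hour
requests_per_day = requests_per_hour * 24
requests_per_month = requests_per_day * 30        # ~720 hours per month

print(f"{requests_per_month:,} requests/month")   # 259,200,000 requests/month
```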
2
-
I think this is perfect with a TWM. I mean, a normal keyboard has ONLY, what, 105 keys? Getting 17 more from the mouse has to be a godsend. With the 4 modifier keys (Ctrl, Alt, Shift and Super), that's basically 68 more key combinations. Sweet! Oh, and you can use more than one modifier key too!
On a more serious note, those keys can actually allow you to permanently keep one hand on the keyboard and one on the mouse, as long as you don't have much text to type. With the modifier keys, you really could have all the TWM administration keybindings controlled with a modifier key (or none) + a mouse key.
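The combinatorics above, spelled out: the 68 counts one modifier at a time, as the comment does, and allowing arbitrary modifier subsets (including "no modifier") is even more generous:

```python
# Counting chords for 17 extra mouse keys and 4 modifiers (Ctrl, Alt, Shift, Super).
mouse_keys = 17
modifiers = 4

single_modifier = mouse_keys * modifiers         # exactly one modifier held: 68
any_modifier_subset = mouse_keys * 2**modifiers  # any subset of modifiers, incl. none: 272

print(single_modifier, any_modifier_subset)      # 68 272
```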
2
-
One silver lining I can think of in all of this: if Apple spends so much effort on making these products such a closed garden, at some point that effort will simply eat too much into their revenue. That is, it will cost them a lot in the manpower needed to design those things. And the things themselves will make the product worse: cheaper-feeling, consuming more, weighing more, something like that. Well, so far it hasn't been enough, but with time, maybe the competition will rise and, by virtue of simply not having to invest that much of everything into closing the garden, will have the superior product.
It's like hardened products vs. normal consumer products. Hardened products are always more expensive while having less performance. Well, at least in normal conditions; in extreme weather, for example, a hardened product will work while a normal consumer one won't. In this case, Apple will have a hardened product, hardened not against the elements but against being repaired. And, I hope, at some point it will simply be inferior in performance or price/performance.
2
-
@LeslieBurke8 True, I won't hold my breath either.
Truth be told, in some respects Apple does have the superior product. Be it design, performance, or simply the ecosystem and the idea that things just work.
Hopefully, with the rise of Linux on the desktop, the software side of that superiority will fade as really good things start to be supported on Linux too.
And on the hardware side, both AMD and Intel have made a comeback and have pretty good CPUs and APUs. They still have to catch up on performance/watt, but given that performance needs in general haven't risen that much, what you can do at 15 W or less has risen regardless.
Apple will have another home run with the M3, as it will be the first 3nm processor in the laptop space, and again it will be significantly more powerful at the same very low power consumption. It's the same advantage they had with the M1, the first 5nm chip on the market. Now that AMD is coming with 5nm, they will actually be better than the M1 and M2, but that means nothing for the normal consumer when AMD/Intel ship the good product almost 2 years later. Hopefully the M3 and M4 will have better competition.
2
-
Good idea of a video! I have several points to make:
- the term "review" is a bit subjective. For me it always hinted at some sort of completeness. That is, painting an accurate picture of the thing reviewed, which implies going more in depth, in order to know what works and how, so as not to be fooled by something pretty that will break on the next update. After reading/watching a review, I expect anything not covered to be a detail of little importance. So in that regard, I too agree that what DT does is more of a "first look" at a Linux/BSD/GNU distro
- I think there is value in a more in-depth, longer-term look at a distro, including for people newer to Linux. A "first look" shows you mainly how a distro looks and the general idea of what it wants to be. However, it doesn't tell you much about how it actually functions, how it is to daily-drive. For people to switch to a(nother) distribution, they also need to know how hassle-free or hassle-light the new distro is to drive, not just to install and look at. Things like: how often should you update, and how does that work? Customizing something non-trivial, how is that done? When an update breaks, how easy is it to fix or revert? When something doesn't work, how easy is it to make it work? Upgrading from one version to another, is that OK (where applicable, like Fedora 36 to 37 or Ubuntu 20.04 to 22.04)? It's things like that that also help when choosing a distro, that give you some knowledge and peace of mind that you'll get along OK with it, since you have a somewhat deeper understanding of how it works and how to tackle it in times of need. Of course, some of this is maybe not distro-specific but specific to a package manager or to Linux in general.
If I can make a really bad analogy, it's like having a first date and the girl tells you that she cooks. Three months into the relationship you realize that she only cooks pasta and doesn't even want to try making a soup. Or, a month after you installed a distro, you realize that watching videos on YouTube turns your laptop into an airplane simulator, because the distro you chose doesn't ship a browser with hardware acceleration or doesn't have the proprietary NVidia drivers. Things like these are very valuable to know beforehand (man, do I like soup!)
- lastly, regarding reviews or first looks: can they be harmful? Well, they can, even if the reception from both viewers and the maintainer(s)/owner(s) is positive.
It's all about setting expectations. A distro might get a first look where it's shown to be nice and all. And a user (probably a novice) might try to install it, but hit all sorts of problems, either at installation or later, when something doesn't work. And that person might quit Linux altogether after a bad enough experience.
I think the best example of how to do it right is Gentoo. Everybody reviewing Gentoo mentions that it's source-based and time-consuming, unless you have a really fast computer. And that, even so, it's for advanced users; all the responsibility is shifted to you, and you basically only get several tools for automating stuff. And this is good: it gives the needed warnings, so somebody seeing it might like it, but realize that they won't have the time to actually maintain it.
2
-
I'm in the same boat. But because he's DT, I asked that, with the same disclaimer that I'm on the side of not believing this is actually useful for society. I asked on the first video on the other channel, but I think I'm shadow-banned; I have no replies and no likes, kind of like nobody sees it. I'm actually curious, because I'm hoping that DT wouldn't go into this unless he feels it's at least OK-ish, morally speaking. I can never know. But since I'm not that much into this, it's an opportunity to learn; maybe I'm the one who will change my mind.
So far he has explained what the options are, and overall I got the idea: it's a nice tool for lowering risks, by the looks of it. But on the "how is this useful for society" part I'm still clueless. I mean, for the companies and for actual investors (people who actually put money in to invest, not simply to profit by playing the trading game), for them I understand: it's a way for a company to receive funding fast, just by providing trust that it will do well (financially and/or for society). But those who are only there to play the trading game and profit from it, I feel like they're simply intermediaries, leeches that profit from the success of others, overall making the place worse (they can also amplify bad stuff like hype or other speculation, which can sometimes be fabricated). I really hope that I'm at least partially wrong and that these traders actually do provide some value to the system, but I highly doubt it. Sigh
2
-
The video went pretty well for the first 10 minutes. But I cannot disagree more with the stated reason Linux didn't hit mainstream. People not liking change absolutely isn't it (not to mention that claiming there's clearly no other reason is downright stupid). Yes, not liking change is a thing, and it provides resistance. People will postpone, won't understand, and so on. But there are plenty of people who tried Linux and... didn't switch (le gasp!).
The main reason is that, for many different reasons, the Linux desktop is NOT as easy to use as Windows or a Mac. Yeah, in some situations Linux works flawlessly out of the box while your Windows is stuck in a boot loop. But overall, Windows and Macs work in 90%+ of cases, while Linux is somewhere around 50%. Because of NVidia (maan, f**k NVidia), because of missing wireless drivers, because some stupid update broke GRUB, because you lack Adobe Photoshop, because it crashes when it wakes from sleep, AND SO MANY MORE things that make Linux harder to use than Windows or Macs, overall.
2
-
I'm with you on this one. While I don't know exactly how a large project/website would look without OOP, working with Magento 2 frustrates me. It's sooo over-engineered. Just tracing a normal, non-trivial but also not rocket-science-level flow, I have to go through 60 files. WTaF is wrong with OOP nowadays? The "single responsibility principle" of SOLID, those idiotic "have functions of at most 30 lines, preferably 4" rules, and other garbage like that have made things so unusable.
Also, one more thing on this: there is currently a rush to convert everything from inheritance to composition. People realized that even the OOP of 10-15 years ago was utter BS, but hey, THIS TIME they got it right, right?
Ok, rant off, sorry for the spam.
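For anyone unfamiliar with the inheritance-to-composition shift mentioned above, here's a deliberately tiny, made-up sketch of the two styles (all class and function names are hypothetical, not from any real codebase):

```python
# Inheritance: behavior is baked into the class hierarchy.
class CsvReport:
    def render(self):
        return "a,b,c"

class CompressedCsvReport(CsvReport):  # every new combination needs a new subclass
    def render(self):
        return f"gzip({super().render()})"

# Composition: the same behavior assembled from small parts at runtime.
class Report:
    def __init__(self, formatter, compressor=None):
        self.formatter = formatter
        self.compressor = compressor

    def render(self):
        out = self.formatter()
        return self.compressor(out) if self.compressor else out

# Same output as the subclass, but without a subclass per combination.
report = Report(lambda: "a,b,c", compressor=lambda s: f"gzip({s})")
print(report.render())  # gzip(a,b,c)
```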
2
-
I agree with your first part, but fully disagree with the last paragraph, the "if you're a tech reporter, report on the tech" bit. I mean, you can certainly skip the stuff that's uninteresting to you, just like I sometimes skip political stories and almost always skip sports stories. But if something is in tech, or affects tech, how and why shouldn't it be reported?
You mean that, say, hypothetically, a news item like "Company X in tech is shutting down half of its servers in preparation for the incoming hurricane and floods that will hit tomorrow" should NOT be reported because some people "don't want a weather report"?? Should the fact that the organization that funds and steers Linux development is deciding to spend less and less on that, and increasingly more on totally unrelated stuff, NOT be reported? Should something where, if people know about it in time, they can rally and push back against some stupid political change and save the project, NOT be covered, because it's a news item that's political in nature? I'm just baffled beyond words that you can say that. It's beyond obvious to me that this SHOULD... no, sorry, this MUST be reported on. You can simply filter it out if you're not interested. But the reporting of something that affects tech, by tech journalists, absolutely must be done. That's journalism 101. Jeez Louise!
2
-
Sorry to break this to you, but many artists are and were as$h0les. Or at least very, very difficult to work with, and hard to be close to. That is, very focused on their project and their passion, neglecting everything else. If you check all the people behind all the bands, movies, paintings, sculptures, etc. in detail, you'll be put off by 80% of them. I'd say separate the two, the artist and the art, and only make sure that you don't promote a-hole artists in a way that enables them to be more a-hole-y. At least with the dead ones, that's very easy.
With Linux Mint, maan, that's so disappointing to hear. I hope the community can steer it away from that, though.
2
-
@sulai9689 Here is a list:
3:15 There was encapsulation before OOP, including in C
5:28 "before OO there were no maps or lists or sets" - this is 100% wrong from every possible angle. First, no OO is required for that; check the article "Dataless Programming" by R. M. Balzer of RAND Corp, from 1967. Second, as a very trivial example, linked lists have existed since 1956. LISP, one of the oldest high-level programming languages, uses lists extensively. Since 1958. And in general, there have always been abstractions in programming, and the more complex languages become, the more abstractions they acquire; this has nothing to do with OOP at all
6:05 polymorphism is not an OOP idea either. Ad-hoc polymorphism, if I'm not mistaken, first appeared in ALGOL 68. Very much not OOP. What he refers to more specifically is subtype polymorphism, I assume specifically via interfaces. What he says about printers is extremely well shown to work without OOP by the open, read and write syscalls: you have no idea, and don't have to bother knowing, what exactly you are writing to. Having an API is what matters for polymorphism. Getting back to interfaces, the ML language has many of these things (polymorphism, encapsulation, modularity), also without OOP, since 1973. You could say that this specific kind of polymorphism - interface subtyping - was created and popularized by OOP, but overall he still presented it in a very misleading manner.
8:45 the overwhelming majority of drivers are written in C. And not just in Linux.
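The open/read/write point above can be illustrated without any classes: POSIX file descriptors are polymorphic by API, not by inheritance. A small sketch (Python used only to drive the raw syscalls; it runs on POSIX systems):

```python
import os
import tempfile

# os.write() doesn't care what the descriptor refers to -- a regular file,
# a pipe, a socket... The "interface" is just the fd plus the write syscall.
def emit(fd, payload: bytes):
    os.write(fd, payload)  # the exact same call, for any kind of destination

# Destination 1: a regular file.
tmp_fd, tmp_path = tempfile.mkstemp()
emit(tmp_fd, b"hello file")
os.close(tmp_fd)
with open(tmp_path, "rb") as f:
    file_msg = f.read()
os.remove(tmp_path)

# Destination 2: a pipe.
read_end, write_end = os.pipe()
emit(write_end, b"hello pipe")
os.close(write_end)
pipe_msg = os.read(read_end, 1024)
os.close(read_end)

print(file_msg, pipe_msg)  # b'hello file' b'hello pipe'
```

`emit` never branches on the destination type; the kernel dispatches the write for it, which is exactly the "don't care what you're writing to" behavior the comment describes.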
2
-
@terrydaktyllus1320 Regarding emojis: I remember seeing a video some time ago where the presenter argued that they're basically an evolution of punctuation. If you think about it, with punctuation we signal the tone, giving hints about how the text should be read. Knowing the tone, you get a feel for the mood (like being sarcastic, or throwing out an idea as a joke, not seriously) and you'll be able to more closely convey or understand what is being communicated, including non-verbal hints.
Edit: forgot to add: this is especially helpful when talking with people you don't know. With your friends, just from their style, you should be able to infer the mood from the text alone. But with a stranger it's much harder to know what they're thinking, and the possibility of misinterpretation is much higher.
Of course, the fact that they're overused and mindlessly used is another topic.
2
-
Besides having well structured, well done etc documentation like other said, there's one more thing: context.
Just saying is "having documentation" is too general. Does it mean a man page ? --help option ? A long web page ? A wiki ? A series of tutorials ? These all are good resources, but for different contexts.
So, for command line programs, a man page is required. Unless I'm mistaken "man" comes from "manual". A manual has to be detailed and, as best as possible, complete. So I'd say that a massive 100-page long man page for find is ok.
But, staying on the command line programs, you also, many times, only need to check some flags, get some examples and things like that. That's were something like --help or maybe even more flags (maybe --examples?) come to mind/help. You should have all the details in the man page, starting with how to get these quicker, simpler bits of help, so you can type your command in less than 1 minute and go on continue what you're working on.
For complex programs (like Blender, or even ffmpeg) I think that a wiki and a series of tutorials is also needed. Especially if using images or video or audio would simplify the teaching/education (those cases where an image is worth 1000 words).
For source code, yeah documentation is good. Since I'm a programmer myself, the problem is usually that very few people bother writing documentation (because it's hard, tedious and excessively boring. Few, few people have the drive or talent or passion to write documentation).
However, a bit of suckless mentality does good here. The source code should be as easy to understand as possible (of course, without compromising quality) and only then, when needed (like explaining WHY a function or some line is needed, or what a flag or something like that might mean in an external context) the documentation should be written, usually trying to be as concise as possible.
Back to contexts: installing an operating system is not like simply running a command to find all your PHP files. You do need to understand what is happening. So, while reading most of the stuff in the Arch or Gentoo wiki will take days, you will be better off for it, and have far fewer (potential) headaches from the mistakes that you didn't make, which could also have spread to those you'd have asked for help.
Since I mentioned it, maybe everywhere there's documentation, its scope should be mentioned too. A man page could state that it's a long read, and that if you just need a quick nudge to get on with your work, you should check these help/examples instead. If you know you have a complex case, then you know you'll have to spend more time reading the man page. A Blender wiki could state that learning it will probably take months and that you should start with understanding some terminology and flows and do some tutorials. Documentation like the Arch or Gentoo install guide could (I don't remember if it does) also mention the important parts that you should know, and that you'll probably need several days of reading + trying the install in a virtual machine before achieving the install, if you've never done it before. Overall, the documentation should set expectations about how much information it provides and how quickly it can be comprehended. Edit: and also, try to provide shorter/quicker bits of information, where it's safe to.
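The --help vs --examples split suggested above can be sketched with Python's argparse standing in for any CLI tool ("mytool" and the --examples flag are made-up names, not a real program):

```python
import argparse

# Hypothetical quick-help text: short copy-paste recipes,
# much faster to scan than a full man page.
EXAMPLES = """\
Examples:
  find . -name '*.php'        # all PHP files under the current dir
  find . -mtime -1 -type f    # files modified in the last 24 hours
"""

def build_parser():
    # --help is generated by argparse for free; --examples is the
    # extra, quicker bit of help proposed above (a made-up flag).
    parser = argparse.ArgumentParser(
        prog="mytool",
        description="Short summary; the man page holds the full manual.")
    parser.add_argument("--examples", action="store_true",
                        help="print usage examples and exit")
    return parser
```

A tool built this way would answer `mytool --examples` in seconds, while the man page stays the complete reference.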
2
-
@dm8579 1) Yes, it CAN help. If both the USA and the EU had been acting faster on the helping side (some of the deadlines they themselves set), maybe Ukraine would've recovered the territory by now
2) Ukraine didn't ask just for money, it requested other things too, like the ability to use rockets in Russia
3) Don't act as if the US has absolutely nothing to gain. A weakened Russia means more gas exports (which have already increased) for the USA, and a lot of weapons sales to all ex-Soviet countries that want to get rid of Russia, like the Baltic states, Ukraine, Georgia, not to mention the increase in military spending for most of NATO, a good portion of which comes back to the USA as weapons sold
4) In the end it's still up to USA how it uses its own money, you can't fault Zelensky for asking
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Java is so verbose that even when I was more of a text-editor fanboy, I would've used an IDE for Java without a second thought. Making so many files and writing so much boilerplate code by hand would've pushed me to jump off a cliff or something.
Now, I mainly code in PHP. When I was on simpler projects years ago, things like Wordpress, using a text editor was actually ok. I don't like autocomplete, so using an IDE wouldn't've added much. However, nowadays, when I work on projects that are OOP and there's literally tens of thousands of files (of course, from the framework & other 3rd party modules), I would feel just as overwhelmed as I said for Java. Because now, when I follow some logic, I have to open up to 50 (yes, fifty) files. It would be insane to do that in a text editor, unless it would work exactly the same as in an IDE.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Completely agree. Given how many bad examples exist, I think this Intel generation going with 10000 is, by far, the least offensive. It is annoying, but it has certain key qualities: it's consistent and predictable with the established pattern. Also, it's (again) just an iteration. Absolutely no need for a new name here. They could work on the K, X, XE, F, S OMFG letters at the back though.
Oh, and in their mobile lineup, those names are pretty bad. 10710G7? What? (don't know if I remembered the name correctly, but you get the idea).
10700K ? The i7 model following the 9700K ? Yeah, perfectly fine.
2
-
2
-
2
-
2
-
@HentaiWarhol What the hell is wrong with you people? Why do so many people get SO triggered over somebody pointing out grammar mistakes (in a polite fashion, mind you)? It's actually doing them a favor, showing them a mistake so they're aware of it and can correct it. How do you know if the original poster simply made a typo and doesn't care, or genuinely didn't know the correct spelling and is now grateful to have learnt it?
Critique is good and healthy (politely and in moderation, of course). It allows us to improve ourselves. The only people you don't critique are those you truly don't care about in the slightest, and/or those you never want to engage with.
And before you say it, yes, grammar is important. In written text that has the potential to be read by hundreds of people or more, grammar is very important. If left unchecked, people will gradually write more and more incorrectly, and the chance of not understanding what they write or, even worse, incorrectly assessing what was written increases. No, not everybody who tells you about a grammar mistake is a pedantic grammar nazi with nothing better to do.
If it's only a private conversation between you and your buddy, then, yeah, write in whatever style you want.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@Firestar-rm8df Ok, I've been a bit of an ass, true.
On the multi vs single threaded part, Redis can handle quite a lot of requests, though of course it does have a limit. And for using something like a Threadripper, you can run multiple instances. It might not be very easy to do if you already have an app or have very specific data requirements, but in most cases this can be used, so you can have both blazingly fast™ response times and lots of users/data.
Still, memcached clearly has its place. If I'm not mistaken, it can easily scale to be used on multiple servers at once, as a big pool of terabytes of RAM if you need it (and not in the sense of individual servers that are copies of a main server, nor the manual sharding I suggested with Redis)
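The "multiple instances" idea above is client-side sharding: each key is hashed to one instance. A minimal sketch, with plain dicts standing in for single-threaded Redis instances (a real setup would use a client library like redis-py instead):

```python
import hashlib

class ShardedCache:
    """Route each key to one of several cache instances (client-side sharding)."""

    def __init__(self, instances):
        self.instances = instances  # stand-ins for per-core Redis instances

    def _pick(self, key):
        # Stable hash so the same key always lands on the same instance
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.instances[int(digest, 16) % len(self.instances)]

    def set(self, key, value):
        self._pick(key)[key] = value

    def get(self, key):
        return self._pick(key).get(key)

# Four dicts standing in for four single-threaded Redis instances
cache = ShardedCache([{} for _ in range(4)])
cache.set("user:1", "alice")
```

The catch mentioned above still applies: multi-key operations and transactions get harder once keys live on different instances.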
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I'm on the side of people freaking out.
The thing is, Ubuntu/Canonical have done bad things on this "topic" before, so them "teasing" r/linux is in truly poor taste. That is, I could accept this sort of joke from someone with a spotless background/history on the matter. If you don't have a spotless history on the matter, then joking about it is totally inappropriate, you don't know how to read the room, and you deserve all the backlash so you learn to behave. When you've done something stupid, you do not remind people of it!!!
So, even if it was an acceptable joke, there's still the problem that there's no place for a joke there. I'm human and I do have a sense of humor. I can accept a joke here, in VERY VERY rare occasions for EXCEPTIONALLY good jokes. Which is totally not the case here.
The thing is, these people putting this joke, they think they're funny, but they don't think of the impact. Several weeks or months down the line, when I'll upgrade my sister's computer, seeing the joke for the 34th time is not only not funny, it wastes space on my terminal, wastes energy to my eyes to go past it, wastes brain cell cycles for me to understand that it's there and that I have to skip it. It's pollution.
I think the problem is the goldfish attention span that seems more and more pervasive in current society. We are not able to focus on one thing anymore. Like getting into the mindset that you have something to do, and for the next 5 minutes, 1 hour, 8 hours or whatever, you think about, interact with and do exclusively that, and nothing else, so you're as efficient and productive as you can be. Sure, some people or areas (especially creative/art) can or want all sorts of extras. But that shouldn't become the universal only way to do/have things. It should be the individual adding the extras, not the provider coming with them.
It's like now an action movie can't simply be an action movie. No, it has to have a comedic relief character and the main character must also have a love interest. It's not something bad if a movie has all 3, but it should be the exception, not the norm. There are places for jokes and comedy, I'll go there when I want jokes and comedy, stop polluting all other areas with unneeded (and rarely good) funny, that's not the reason I'm here.
In conclusion, this particular act is certainly of a very small degree and by itself shouldn't cause much rage. But it shows a fundamental lack of understanding from those at Canonical, and as such, everybody expects them to continue on this stupid path unless someone tells them not to. So, that's why the rage is justified and actually needed right now, so they learn that it's not ok and they stop, BEFORE doing something truly stupid and disruptive.
2
-
2
-
It's the idea of having sense, in general. If you see people doing wrong stuff, you should be bothered, to a point, especially if it impacts you (more) directly.
In this case, the main point is that installing into a VM and spending mere hours on a distro is not a review. And I'm totally down with that; it should be called out so people doing these "first impressions" don't label them as reviews. Having proper terms, that is, terms that are not ambiguous and/or that people generally agree upon, makes for better, more efficient communication.
For example, I might've heard that Fedora is a really good Linux distro.
Now, the nuance is that if I'm perfectly happy with what I have right now, I might only want a quick look at it, to know what it's about, to see why people call it great. Unless it blows my mind, I won't switch to it, so I don't need many details, including whether it works just as well on real hardware or how it is after a month, since I'm not into distro hopping right now.
However, if I'm unhappy with what I have now and I'm thinking "hmm, this is not good enough, I should try something better, what would that be?", then in this case I would like a review. Something that will give me extra details and make me aware of things I should know in order to make an informed, educated decision. I don't want to see a first look, install it, and after 1 month realize that this isn't working; as nice as it looks, I need to hop again. Here a review (long term, "proper" or "full" review, however you want to call it) would probably give me the information in 20-40 minutes, so I can skip that month and go install directly what I actually need.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Since input was asked for, please, let me give mine (I also posted this on Luke's video):
Gemini and Gopher are a total waste of time and effort (and quite stupid because of that), since they offer absolutely nothing that HTTP doesn't already have.
Site owners who care about that can simply make their websites in clean HTML. Yeah, HTML is a bit more verbose, but it's not that atrocious. An HTML equivalent of what DT showed in Gemini would be less than half of what he showed as the HTML source for distro.tube.
Surfers who care about that can have JavaScript (and other things) disabled and only enable them as needed (corrected in the edit). I will concede that browsers nowadays are doing it wrong by having everything enabled by default. If anything, all the effort should go into making a nice, easy-to-use interface for modern browsers to have everything disabled by default, with easy toggles, like a checkbox in the status bar for enabling JS, external JS, cookies and so on, if you deem that you'll benefit from it.
Change my mind!
2
-
2
-
2
-
2
-
2
-
2
-
Quality content my ass. Maybe in other videos, because this one certainly does not have quality content, just blowing Musk up into stratospheric hype.
The Loop in LA didn't cost $10 million but $53 million, was late, is WAAY short of the 4400 passengers/hour for at least 13 hours/day, and it doesn't have fully autonomous pods, but normal cars with drivers (which, btw, means significant extra running cost, as 62 drivers is nothing to sneeze at). Yeah, it was cheaper than that NY subway tunnel, but we're totally not comparing apples to apples here. The LA tunnel is the smallest it can be and has absolutely 0 (ZERO) safety features. No ventilation, no space for pedestrians, no exits, no fire suppression, no bypass lanes/tunnels. I can't wait for a simple car breakdown to happen and to see all the Musk fans coming up with excuses that he's still a genius, even though his solution a) totally does not solve traffic and b) is inferior to subways.
2
-
2
-
"To determine once and for all ..." Yeah, lol, no, not even close. The results are so flawed, I'm baffled that they were presented in the first place. That Zig solution had a lower number of passes than the Rust solution because it ran on half the threads (1 per physical core, instead of 2 per core like the rest) and got divided by 16 instead of 32. And I don't see the issue of SIMD being enabled for Rust and Zig but not for C and C++ being addressed.
There's one thing this shows very clearly: getting an objective list of the fastest programming languages is impossible. Different languages might be better in different scenarios. Also, like it was said, especially if you know the target system and if you REALLY need performance, assembly is the way to go. Zig, Rust, C and C++ are all very good choices as well. Java too (I was not that surprised to see it there), though to some (me) it's ugly AF; I'd rather struggle with assembly than Java. But for corporate projects Java is very welcome.
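The thread-count mixup above is just arithmetic: divide total passes by the wrong thread count and the per-thread score halves. A sketch with made-up pass counts (these are illustrative numbers, not the benchmark's real results):

```python
def passes_per_thread(total_passes, threads_used):
    # Per-thread throughput; the comparison is only fair when
    # 'threads_used' matches how the benchmark actually ran.
    return total_passes / threads_used

# Hypothetical run: 16 threads (1 per physical core). Dividing the
# same total by 32, as if it had used SMT, halves the reported score.
fair = passes_per_thread(32000, 16)    # divided by what actually ran
unfair = passes_per_thread(32000, 32)  # same run, wrongly divided by 32
```

So a run penalized this way would look half as fast without the language being any slower.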
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@BrodieRobertson There's this ... let's say feeling, since I'm not so sure exactly how factual it is, but the idea is that Lunduke is apparently about the only one digging and reporting on all sorts of these issues. These foundations have seemingly become more and more corrupt and woke, trying to censor what they don't like. And he also is apparently banned from a lot of them, and even banned from being mentioned.
The upside is that if you do find that his investigations are good, you could mention that he also covered the topic. In these clown-world times, this is needed. And it would also show that you're not under some control. Then again, people and foundations having a problem with Lunduke might start having a problem with you if you give him even a modicum of publicity.
Speaking of which, if you feel bold and crazy, I would really enjoy a clip / take on this whole Lunduke situation. Its history and current status, how you think this whole situation is, how split the whole bigger Linux and FOSS community is about him. I personally started watching him recently and he seems genuine, but it's still early to be sure about that. And the things he's reporting on... not gonna lie, they kinda scare me. The Linux Foundation having a total of 2% of its budget reserved for Linux and programming, and 98% on totally unrelated stuff; that can't be good long term. It seems like all of these foundations, being legally based in the USA, have a systemic problem of being infiltrated by people who do not care about the product(s) the foundation was originally based on. If these aren't course-corrected, or others free from all this drama don't arise, I truly fear for the future of Linux and FOSS in general.
2
-
2
-
2
-
2
-
Agree, now it's the norm to include lots of stuff and use only a fraction of what it offers.
And what's worse (and why it's not easy for developers to fix this) is that if you want NOT to use an off-the-shelf solution, everybody jumps on you that you "want to reinvent the wheel", then buries you in insults and sends you to trainings so you learn what code reuse is and how much more efficient and maintainable it is.
I know, not reinventing the wheel is perfectly fine advice. Somehow some creatures that have more buzzwords than common sense decided that it's not advice or a guideline, but gospel, and that it will be the better solution in all aspects 100% of the time. God forbid you actually use your brain a bit, exercise some barebones JS and write 2-10 lines of code, instead of getting YetAnotherPlugin. Same for code reuse. If you have 2 lines of code that are the same in your code base, you must stop working on actually useful stuff and refactor your application (adding 10-100 lines of code, maybe a couple of files) so that one line is not duplicated anymore. Just great /s
Not to mention how soulless, disgusting and mind-numbing it is to simply search, install, configure and maybe tweak a bit (mostly so they fit together) modules only. That's not programming. And, while more economical in many situations, it creates people who have programmed too little and are not proper seniors. Sigh
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
So many people focus on the resolution when it comes to VR. The resolution is just a part of what VR needs to be massively successful, and I think it's the closest to being where it should be. Another thing it needs is a better refresh rate. That will help people who get nauseous when playing, and there's still a good bunch of them. Totally out of my ass, but I think it needs to be something like 300Hz to be really effective.
Like ca1ib0s said, it also needs 140º FOV, variable focus, and eye tracking with foveated rendering. I'd say especially eye tracking. And what Luke said about more tracking in general, so it's not just your eyes that are in the game. Though what we have now is enough tracking-wise for cockpit games.
Ultimately VR is an immersion tool. It does enhance immersion, but right now it also breaks it, in several ways/points. When these are fixed, and the second generation of hardware & software for that comes, it will be good & cheap enough to really get big. And it's A LOT of work. HUGE amounts of work, both in hardware and software, to have what I've said above. Not just a graphics card able to run 4K - 2x16K at 120-300Hz, but also gaming technology that takes all the tracking into account. In mere milliseconds. I'd say we're still like 10 years away from that.
On the bright side I have about 10 years in which I'll still be able to be productive & helpful to this world. Thank God there still isn't 16K VR porn.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@crocodile2006 Ok, ok, thanks for that.
Sooo, let me quote "Recover up to 70 percent of range in 30 minutes using Tesla’s Semi Chargers". UP TO :))
Also, what numbers add up? If the battery is 1000 kWh, then 70% of that is ... 700 kWh. Charging for 30 minutes from a 1000 kW station will give you ... 500 kWh.
You might say, well, the battery is not actually 1000 kWh; it's less, because it needs less than 2 kWh per mile. In the video they said 1.7 or 1.8 kWh per mile. Using 1.7 kWh per mile (and we don't know the conditions for that, only that it was at full weight), in order to do 500 miles we'll need 1.7 * 500 = 850 kWh. 70% of that is 595 kWh. Which is 20% more than the 500 kWh that 30 minutes at 1 MW gives you. These "numbers match" to you???
Really, I think the 70% figure is for the 300 mile variant. Or for a "best case" scenario, where you use much less than 2 or 1.7 kWh per mile, but that's not the average (or anything you can rely upon)
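Spelling out the arithmetic above (all figures are the comment's assumptions, not verified Tesla specs):

```python
# Charger side: a 1 MW charger running for 30 minutes.
charger_kw = 1000
minutes = 30
energy_delivered_kwh = charger_kw * minutes / 60  # 500 kWh

# Battery side: claimed 1.7 kWh/mile at full weight, 500 mile range.
kwh_per_mile = 1.7
range_miles = 500
battery_kwh = kwh_per_mile * range_miles          # 850 kWh pack
seventy_percent_kwh = 0.70 * battery_kwh          # 595 kWh
```

595 kWh needed vs 500 kWh delivered: the "70% in 30 minutes" claim doesn't close with these numbers, which is exactly the point above.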
2
-
2
-
2
-
2
-
2
-
2
-
@protocetid Heat and energy efficiency go hand in hand. If one varies differently from the other, then it's not an x86 vs ARM thing, but that particular implementation, the manufacturing node used, the IHS, things like that.
The software freedoms... well, that's still not something that's exclusive to x86 or against ARM, but something more about Android, iOS and Google services. Pinephones exist, and you can install everything there. Another (non-phone) ARM device that is very free is the Raspberry Pi. So, it's not ARM's fault that nowadays smartphones are so incredibly closed and full of spyware. It's Google + Apple + Samsung mainly + the rest of the manufacturers.
And, yeah, it's true, nowadays the performance is so good that normal apps, especially without so much spyware on the device, can easily be handled by most CPUs, even very low-end ones.
2
-
2
-
@protocetid Actually, TDP is not how much a chip uses, though it's not that far off either. When you don't have anything else, and if you don't need/expect good precision, it can be used instead.
The thing is that TDP is how much HEAT a cooler for that chip should be able to dissipate. If you think about it, that and how much the chip consumes should never be the same, unless the chip is literally a resistor. As an example, AMD's top desktop Zen 4 chips have a 170W TDP, but they can consume (without overclocking) 230W in sustained load. Fortunately, this is one of the bigger deviations; usually the TDP is closer to the actual consumption. Well, PEAK sustained consumption! The chip, if not fully loaded, will always consume (much) less. And in short bursts, it can consume (much) more.
Getting back: the Steam Deck has a 15W TDP, but it's optimized to run at more like 3-9W. In games like Dead Cells (a 2D indie game, pretty lightweight, but still far from idle-like power draw) the OLED version of the Steam Deck can run for over 8 hours. With a 50 Wh battery, that means that, on average, the chip + screen + wifi + audio (I think without bluetooth) consume about 6W. Which makes me think that the chip itself is consuming like 3-4W.
Still, given that the Steam Deck has only 4c/8t, it's not exactly high end. Current phone flagships are certainly both more performant and more efficient. Not sure how it competes on GPU performance. A typical phone battery nowadays has 5000 mAh, which, given that Li-ion batteries usually hover at 4V (between 3.7 and 4.2), makes for a battery capacity of approx. 20 Wh. Less than half of what the Steam Deck has.
So the Steam Deck's APU (which I still consider the closest thing in x86 space to what a phone or a very efficient tablet/ultrabook would need) is not as efficient as current smartphone chips. Though it is also built on 6nm, while the most recent chips are on 3nm, almost 2 generations newer, which is a pretty big difference.
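The two battery estimates above, worked through (the runtime and capacity figures are the comment's rough assumptions, not measurements):

```python
# Steam Deck OLED: 50 Wh battery lasting 8+ hours in a light game
# puts the whole-device average draw around 6 W.
deck_battery_wh = 50
runtime_hours = 8
avg_system_draw_w = deck_battery_wh / runtime_hours  # 6.25 W

# Typical phone: 5000 mAh at ~4 V nominal Li-ion voltage.
phone_mah = 5000
cell_voltage = 4.0  # between 3.7 and 4.2 V in practice
phone_battery_wh = phone_mah / 1000 * cell_voltage   # 20 Wh
```

So the phone runs a full day on less than half the Deck's battery, which is the efficiency gap the comparison above is pointing at.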
So, overall, I think that on the hardware side, while it will most likely be a setback in terms of performance or (maybe even and) efficiency, I think that if they wanted, both Intel and AMD could come up with a chip for a smartphone that still has decent efficiency and performance, just not flagship level.
Now, on the software side, the advantage with Linux ... that is, GNU/Linux phones (Android, technically, is also Linux) is also the control that you get. And, I guess, a bit of compatibility with software made for the desktop. I wouldn't say there's big demand, unfortunately. Most likely just techies like us, and maybe privacy nerds.
Still, it is nice to see how far the Pinephone got, even though it seems like what they have is a bit too low end. The chip itself can be very efficient; they don't have a lot of cores, nor did they overclock them or anything. It seems the firmware and drivers they use, or something, are still not up to the task. Or maybe everything is rendered with the CPU instead of the GPU, dunno. But it's a pretty common ARM chip with 4 A53 cores; those can totally be efficient.
Oh, and good point about Waydroid. Haven't checked it, but from what I remember, you can already run a lot of apps through it. So you can get the best of both worlds with it.
2
-
2
-
2
-
@jayjohn9680 Uhm, no. Everybody is talking about the peak / best scenario, or comparisons with similar effort. What you're describing is not that. It's like comparing a mediocre new car to a good 30-year-old car that's been badly maintained. Of course the new car will be better, but only because the old car was brought into a bad condition.
This "loop" can very well get into the same situation, where some cars broke down and didn't get replaced, some are off charging, some are off because the drivers had their nature calls, and lo and behold, instead of 62 cars you have 15, and you also have to wait 10 minutes to get one, time in which you could've simply walked to the destination without using this public beta experiment on humans that this "loop" is.
2
-
2
-
2
-
@rogerfroud300 I don't want to sound as insane as that guy, but having a bigger battery means you can charge fewer times, so when you do charge, you can take a bigger break while the car charges. Like having a full lunch.
Also, if you keep the same volume and weight, you'll have a bigger battery, which might recoup the charging speed by having more "cells". What I mean is that if this has, say, half the charge rate, but you can fit 40% extra cells / energy, then you'll have 140% of the original battery charging at half the rate = 70% of the speed of the original one. Still significantly slower, but maybe not a deal breaker.
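That trade-off is a one-liner (the rates and the 40% figure are hypothetical, as above):

```python
# Hypothetical battery chemistry: half the charge rate per unit of
# capacity, but 40% more capacity in the same volume/weight.
base_rate = 1.0              # original battery's charge rate, normalized
new_rate = 0.5 * base_rate   # half the rate
capacity_factor = 1.4        # 40% more energy on board

# Effective charging speed relative to the original battery:
effective_speed = new_rate * capacity_factor  # 0.7x the original
```

So the net result is 70% of the original charging speed, not 50%, which is the point of the comment.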
2
-
2
-
Soooo, a couple of questions:
- if I have a SSD and a HDD, both of 1 TB and non-NVME, how do I differentiate between them ?
- if I install Linux on sdb and later I remove the drive that is now sda, when I boot back into Linux, would it still be sdb, or would it be sda? How does consistency between drive names work when swapping drives is a thing? Edit: ok, finished viewing the video, this is done using UUIDs.
2
-
2
-
2
-
2
-
#HeyDT, why do you have to be so cringe at times ?
1) At 1:16: "According to statcounter a full 71.29% of personal computers around the world are running Windows 10". With 15.44% for Windows 11, 9.6% for Windows 7 and 2.5% for Windows 8, that's a total of 98.83%. Add 0.4% from Windows XP and Windows has over 99% of personal computers around the world??? Then at 2:36 you basically give the correct numbers: Windows in total has 76%, and of that 76%, 71.3% is Windows 10. Meaning that a total of 0.76 * 0.7129 = 54.18% of the personal computers around the world are running Windows 10
2) Comparing Windows 11's market share to diseases is just very petty and cringe. You're releasing your hate on it, the same way people get toxic on social media. It's ok to have this talk with some friends over a beer, but not here, in the wider public. Like I said, it's petty. Making fun of its growth in a really weird way. Someone coming from Windows and seeing this will certainly not be convinced to switch; they'll think all Linux guys are lunatics living in their bubble, making fun of "only 15%" while they have 2%.
Other than that, like others said, Windows has had this cycle of "bad release followed by good release" several times now, so most likely Windows 11 numbers will stay low until 2024, when we'll get Windows 12, which will be a refinement of Windows 11, aka it will work very well out of the box. Still with ads and telemetry through the roof, but it won't be so much in your way. And it will be better than Windows 11 in absolutely all aspects, and people will then upgrade.
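The share-of-a-share mistake in point 1 is easy to check (figures as quoted from statcounter in the comment):

```python
# 71.29% is Windows 10's share OF WINDOWS installs, not of all PCs.
windows_total_share = 0.76      # Windows among all personal computers
win10_within_windows = 0.7129   # Windows 10 among Windows installs

win10_of_all_pcs = windows_total_share * win10_within_windows  # ~54.18%

# Sanity check: the per-version shares quoted should sum to ~98.83%.
version_shares = [71.29, 15.44, 9.6, 2.5]  # Win 10, 11, 7, 8
```

Quoting the 71.29% as a share of all PCs is what makes the on-screen claim internally inconsistent.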
2
-
2
-
2
-
2
-
This certainly looks nice for those looking to jump ship, or for you to install for your (grand)parents, without much worry that they'll struggle :)
I have to ask (every OS video has somebody like this, right?): have you checked SerenityOS? It's not ready for daily driving, but it is very interesting and, dare I say it, appealing.
Also, I know it's much more "hardcore", which might be out of scope for this channel, but maybe a Gentoo video some day? Maybe just to showcase a minimalist setup and very customized apps, with a more streamlined installation (as compared to manually compiling everything). Even if few would ever use it, knowing that it exists and what's possible would be good education.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I think this only addressed part of the questions. The bigger question for young folk nowadays who didn't get any computer classes at school is why you would use these in the first place. To them, using the terminal for anything other than debugging some system problem is going back to the '80s, like they saw in the movies. I remember an article of sorts with a teacher saying that some students (who had only ever used smartphones and maybe tablets) didn't know what files and folders are. You can imagine the wide gap from the big, fully GUI-based everything they ran to a terminal-only environment with ... keyboard shortcuts!
Yeah, so as I was saying, why anyone would want to use something like dwm, vim or emacs should also be stated. Someone in 2023 who is not nostalgic, who is not old, who is not even a programmer: what would the benefits be? To be frank, I'd struggle to recommend vim or emacs to a non-programmer. It does feel like overkill. Unless maybe they write a lot and could use things like groff and hoff and poff and whatever those are called, where you can have detailed formatting too.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
I don't understand that part at the start about the cost of delete vs write. In CPU registers, in RAM and on disk (be it solid state or HDD), a delete IS a write. Leaving aside the special case of SSDs, which have multiple bits per cell and where writing can mean rewriting all 4 bits, the delete vs write distinction doesn't make sense to me. At least, not in the current compute landscape.
And the "store the input so later you can simply switch, instead of delete" sounds like basic caching to me.
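That "store the input so you can switch instead of recompute" idea is plain memoization; a minimal sketch using only the standard library:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive(x):
    # Pretend this is costly. The cache keeps past results around,
    # so a repeated input "switches" to the stored answer instead
    # of being recomputed from scratch.
    return x * x

expensive(12)  # computed once
expensive(12)  # served from the cache
```

Which is why the idea reads as basic caching: the trade is memory for repeated compute, nothing exotic.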
For consumer hardware, we are both getting more efficient and not. If you look at smartphones and laptops, it is inarguable that we're getting much more efficient, and in general staying in the same power envelope, though the high-end desktop-replacement laptops are indeed more power hungry than what we had 10 years ago.
On the desktop side... if we ignore a couple of generations from Intel (13th and 14th), I'd say the CPUs are getting more efficient while also staying at a reasonable power draw, so the same power envelope. Same for RAM and disks. It's the GPUs that are also more efficient, but have expanded the power envelope, by quite a lot, at the mid and high end. But I would say that the raw hardware power is impressive.
On the datacenter side, 30,000 tons of coal seems quite little; I expected something like 1 billion tons. Funnily enough, a lot of electricity nowadays is consumed by AI. Feels like creating the problem in order to create the solution, to me. The desperation to get the AI upper hand is quite a clown show to me. I'm expecting more and more regulations on AI, as the data used is still highway robbery in most cases, and the energy used is just ludicrous, at least for the current and short-term results. In a context where we have to use less energy, so we can stop putting carbon into the air.
Lastly, on the prebuilt power limits or something similar: I don't know of such a law, neither in the EU nor in Romania where I live. However I do know that there is one for TVs (and other household electronic appliances, if I'm not mistaken) which actually limits high-end TVs quite a lot. Which, frankly, is quite stupid to me. If I get an 85" TV, you expect it to consume the same as a 40" one? Not to mention that maybe I'm fully powered by my own solar panels. Who are you to decide that I can't use 200 more watts for my TV? In that theoretical setup, it would generate literally 0 extra carbon. And what's worse, because of this st00pid law, now people are incentivised to buy from abroad, which is worse for energy (shipping from the other side of the world instead of buying local) and worse for the economy (EU manufacturers cannot compete as well as those in other countries). Anyway, rant off.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
@Maxშემიწყალე I don't think it's as simple as using a calculator vs doing multiplications and divisions by hand (either mentally or on paper). As you use less of stuff X, you only get the basic idea of it, especially as a junior just learning the field. Then you start learning only the gist of stuff Y, which uses stuff X behind the scenes. Then stuff Z, which uses stuff Y behind the scenes. When you only learn about stuff Z in general terms and merely hear that stuff Y exists, you won't get to know that stuff X exists at all. That's what I fear will happen; especially among juniors, very few will still be curious, while many others will be normal humans, do the bare minimum, and let AI do the rest. And when the AI does something wrong, well, there might be a problem.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
Whoa, I mostly agree with the message of the video, but you REALLY need to be more careful on getting the information right.
First, Go being a better C++ is... questionable. But there have been other comments on that and it's not exactly my area, so I'll just leave it at that.
But later, "Java and Javascript are 3-4 times slower". Jeez. Holy crap! Java is like an order of magnitude faster than Javascript. You absolutely cannot put them in the same category. Javascript is interpreted, hence why it's slower. It's on the same page as Python and PHP. You don't create executables in Javascript; you need to run it through a runtime. Not to mention that, like PHP (I don't know about Python), it's basically single-threaded. Java, meanwhile, is up there with the most performant languages, if you know what you're doing and if the workload isn't prone to suffer from garbage collection. I remember seeing that on ARM, some highly optimized program was 4% faster in Java than in C (which also had a highly optimized version). I forget the name of the benchmarks site though.
Really, this is such a basic and fundamental difference that I cannot trust anything else you say now about Java or Javascript. It's almost like comparing C++ with Python. No, just no.
2
-
@NoBoilerplate But Java has had trillions of dollars of optimisations too! That's why I was shocked. If I'm not mistaken, Java has about the most advanced (and complex) garbage collection algorithms around. And I know the JDK is quite a beast. And I say that as someone who doesn't like Java (and I like Javascript, though I kind of hate just about all of its frameworks).
Of course, there's no ceiling on optimisations, unless you don't have enough data. And Javascript (unlike Typescript) lacks strong typing everywhere, for example. That by itself adds some runtime overhead. I guess it's a matter of things like Python having those ML libraries that are implemented in C: calling them from Python gives you basically the same speed (for those specific functions). Likewise, the Phalcon framework in PHP is basically a collection of C functions exposed in PHP; if you stick to them, you're close to C speed. But in both cases you're restricted to a set of functions, and the language itself, being dynamic, has an overhead of its own.
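A minimal, stdlib-only illustration of the "C under the hood" point (timings vary by machine; the shape of the result is what matters): Python's built-in `sum()` is implemented in C, so it does the same work as a hand-written interpreted loop, usually noticeably faster.

```python
import timeit

N = 100_000

def manual_sum(n):
    """Sum 0..n-1 with an interpreted Python loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def builtin_sum(n):
    """Same computation, but the loop runs inside C code."""
    return sum(range(n))

# Both produce the same result; the C-backed version is typically faster.
assert manual_sum(N) == builtin_sum(N) == N * (N - 1) // 2

t_manual = timeit.timeit(lambda: manual_sum(N), number=20)
t_builtin = timeit.timeit(lambda: builtin_sum(N), number=20)
print(f"manual: {t_manual:.3f}s, builtin: {t_builtin:.3f}s")
```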
I think that the benchmarks in which JS runs well are just that - cases where the engine already has an optimized solution implemented. Though if these cases are numerous enough, especially in the domains where JS is used, that's good, and I think they can be taken as representative of the language. But in general I'd go by the worst case.
I guess I'll just have to up my game on current language speeds. I did a quick search now and I can't say I'm happy with the first page of Google's results. To be fair, I did find some instances where JS is faster or on the same level (so to speak) as Java. But I'm still not convinced. To be frank, I really don't see something like Elasticsearch being implemented and running as well in Javascript.
While on this topic, do you have any good benchmark sites?
2
-
2
-
2
-
2
-
2
-
@11Survivor The context I put it in is about the culture: the people's ways, traditions, values, beliefs. In that regard, Napoleon was born and raised effectively as a Corsican (Italian).
Him being French is basically the same as someone being born in Germany, emigrating to France, getting citizenship and then saying that he's French.
Yeah, he has the documents, but the region he was born and raised in hadn't been converted to French culture yet. If France conquered Corsica today, that wouldn't mean that suddenly tomorrow everybody in Corsica speaks and understands French and they all raise their pinky finger when drinking from a teacup. These things take at least one generation to change meaningfully.
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
2
-
1
-
1
-
About the IGP removed from Kaby Lake-X: that's a thing that was requested by consumers (I don't know by how many, though). The idea being that instead of die space and thermals used needlessly by the IGP, Intel would do better to just make the CPU itself better and leave graphics to the actual GPU. I for one am glad to see it, though the specs I find show the 7740X having just 100 MHz more base clock and the same 4.5 GHz boost clock. Lame.
For everything else... yeah, WTF Intel. This shows me that they have NO confidence in their own products. I see no other reason to rush things this much and come up with such an extremely lame release. They really are afraid that if they waited 2-3 more months to come out with an actually coherent line-up, they would somehow lose too much market share or something. But I think this poor launch actually helped AMD more than if Intel hadn't said or released anything at Computex.
Also, I'm not (...yet...) anti-Intel. I'm not on the bandwagon that they are just ripping off and milking consumers etc etc, because I don't know for certain whether that is true or whether that money is actually well spent.
Anyway, at this X299 launch, they announced that they would allow bootable RAID 1 and RAID 10 for NVMe only if you buy a $99 physical key (or something like that). Sooo, physical keys? In 2017? That's so anti-consumer I can't even... Fuck Intel for this thing alone. I hope their value halves and those in charge now get fired and end up in poverty.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
AileDiablo They have a plan, my ass. It's been 40 years and the lira has been run into the ground, with millions of people struggling a lot because of it. A good plan would've shown its fruits by now; it wouldn't STILL be in the early stages, at the expense of millions of people's well-being.
That's like saying that you'll improve the family's wealth, but your kids will be overworked and starving for their entire lives. But, hey, it will be better after that. That's bullshit. It shouldn't take more than 5 years to see a tangible benefit from a good plan that people worked properly for. Yeah, Turkey's economy did grow from the '80s to now, but it's not enough. And it's not like everybody else didn't grow their economies too, without spitting on their people (with some exceptions, of course).
1
-
Wow, so many people over here who barely use food delivery. I feel quite special now; I'm kind of the opposite, I actually use food delivery a lot.
First thing, I don't think it's THAT bad here in Romania. Of the apps mentioned in the video, only UberEats is even vaguely popular around here. Instead we have Bolt Food (the Estonian Uber, which is twice as good both at ride-sharing and at food delivery), Glovo and Tazz. I've basically only used Bolt Food for several years now.
Unlike the other apps, I can put the exact pin on the map so the delivery guys (some of whom really aren't that smart) don't end up on the other side of the boulevard.
Also, prices have indeed gone up substantially. As someone who used food delivery a lot before the pandemic too, prices are on average a bit more than double (and the inflation here hasn't been THAT bad).
One thing mentioned in the video is the difference between pizza chains, which have somewhat working delivery, and the food delivery apps. It painted the apps as the new guy who doesn't have experience and doesn't know how to do things effectively. No idea how it is in the USA, but I can confidently say that here, with Bolt Food especially, that is NOT the case. Just from the accuracy of the estimated time of arrival you realize that the infrastructure is quite advanced and mature. Not perfect, but clearly good. Also, we already have the range limit that the video claims only pizza chains have. Here in Bucharest, which is a very large city for Romania but probably only as big as one small district of New York, the range is about 1/4 of the city - 5-10 km, something like that, I don't know exactly.
It is very expensive for me, and I was always aware of that, but I still prefer it over spending time shopping and cooking and cleaning and the associated extras. Plus, there are days when I literally spend about 1 minute ordering, since I know what I want, and about 1 minute answering the delivery guy and getting the bag, and I can eat at my laptop while paying attention to a meeting - effectively I spend no time at all the whole day on "cooking" and eating. I love it. To achieve the same time efficiency and comfort I'd have to either a) consume mostly instant foods, which are significantly cheaper but much less healthy and with much less variety, or b) have a personal chef, which would be ideal, but I don't have THAT much money.
Out of several hundred orders in the last 5 years, I've had literally only one order that wasn't delivered because no courier was found, on a Sunday at 9-10 PM (don't ask, it was not at my place nor on a normal day). And I've had several bad deliveries where it was mostly the restaurant's fault for not wrapping or packing the food well. And several deliveries with missing/wrong items, which were solved (kinda) through the app's support system. Overall, percentage-wise, not that many - something like 3-5% of orders. Though there is one thing I have to mention which I think helps my good record: I rarely order at lunch time, when there's the most chaos. I usually eat significantly later.
Of course, for the end user it will always be expensive. Before the apps, when you had to call and there were fewer places you could get delivery from, it was worth it if multiple people ordered, like 4+. The delivery fee was a bit steeper, but split between multiple people it got quite cheap. And some places had free delivery above a certain threshold. Now with the apps there's still free delivery above a threshold (though not always), and the fee seems to take distance into account, but the efficiency of multiple people (big order) vs a single person (small order) has been much diminished.
All in all, considering the above points, I'd say that it can become mature enough to be sustainable. And I think that I am experiencing exactly that.
1
-
There's something that doesn't sit well with me:
- the law assumes that all cores have the same performance characteristics. The Macs have different kinds of cores, so the estimate cannot be exact. It also isn't mentioned whether the single-core result comes from a performance core (which I assume) or an efficiency core
- why is the 12-core improvement estimated at 418%, but later a 10-core improvement also estimated at 418%?
- why is process creation 1900% better? Theoretically it shouldn't be possible to surpass 1100% (11 extra cores). Is it just because there's less context switching?
Lastly, I just have to talk about a thing that I see many do not mention. Amdahl's Law applies to a single program, more specifically a single algorithm. If you actually have multiple programs - multiple things that have to be computed - those should be basically 99% parallelizable between themselves. Say, playing a game while recording (encoding) a video of it and also compiling something in the background. These are 3 main tasks, and going from one CPU core doing all of them to, say, 3 cores (one per program), I'd expect nearly ideal scaling (assuming no bottlenecks at, say, the HDD/SSD level). None of the programs needs to know what the others are doing, so in theory they parallelize 100% (in practice it varies - a bit more if more cores alleviate bottlenecks, and less with scheduling overhead and the limits of other hardware like memory and disk bandwidth).
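The estimates being questioned above can be sanity-checked with Amdahl's formula, speedup = 1 / ((1 - p) + p / n) for parallel fraction p and n cores. A sketch; the p ≈ 0.88 below is my own back-solved guess for the 418% figure, not a number from the video:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Fully parallel work scales linearly with cores.
assert amdahl_speedup(1.0, 12) == 12.0

# A half-parallel workload on 4 cores: 1 / (0.5 + 0.5/4) = 1.6x.
assert abs(amdahl_speedup(0.5, 4) - 1.6) < 1e-9

# A ~418% improvement (5.18x) on 12 cores corresponds to roughly p ≈ 0.88:
print(f"{amdahl_speedup(0.88, 12):.2f}x")
```

It also shows why no parallel fraction can explain the 1900% process-creation figure on 12 cores: the formula is capped at n, i.e. 12x.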
In the current day and age, we're not running things like in DOS times, one program at a time. Usually there's a single main program, like a browser or a game, but there are plenty of occasions where you run multiple things, like I said above. Having a browser with many tabs can alone benefit from more cores, even if each tab only has serial tasks in it (i.e. 0% achievable parallelism). If you also do some coding alongside, there go more cores. And, of course, on something like MS Windows today, you can assume a core is permanently dedicated to Windows' background tasks - indexing disk data, checking for updates, collecting and sending telemetry, all kinds of checks and syncs like NTP/Windows Time, scheduled tasks and so on.
In practice, 8 cores is basically plenty for casual workflows (browsing, gaming and office); there is indeed little gain from more cores. In that sense I agree with the final thoughts.
But I fully disagree with the final thoughts on the server comparison. Virtualisation is not for performance - quite the opposite. If you need top performance, especially the lowest latency, you have to go bare metal. Virtualization does have great benefits, though. Sandboxing: you don't have conflicts with anything else running on that server, so you can have 10 versions of something with no problem, it's easy to control how many resources each can use, and much more. It also gives you an (almost) identical development environment immediately, reducing devops time and especially stupid bugs caused by some dev running PHP on Windows where it behaves differently than the same PHP version on Ubuntu. And thinking in this paradigm of small virtual computers makes your application easy to scale (just run more containers). But an application running in a virtual machine or a container will NEVER be faster than the same app configured the same way on bare metal. The nice thing is that nowadays, in most cases, virtualizing has a negligible impact on performance while the other benefits are massive. That's why everybody uses it now.
1
-
1
-
1
-
1
-
1
-
I just want to add that the attitude of "Valve doesn't care about snaps, or your package manager. They don't want to support it. Not their job, yadda yadda yadda" is... not that good.
Yeah, they might not like it, but unless it's fundamentally flawed - as in, making support impossible or incredibly difficult - then Valve should consider supporting and working with distributions and package managers so Steam can be nicely integrated. That is, the way I see it, the desired outcome for an app that wants a large reach, to be used by a large mass of people.
You could say it's the same as making FOSS for a proprietary OS like Windows or iOS. You can hate the inferior OS (in the case of Windows) and the hurdles you have to go through for compatibility, but if you do want high reach, it's something you have to do. While on this: thank you, GIMP and LibreOffice.
So, getting back to the topic, I think everybody would gain - including Valve, through fewer headaches and issues - if Valve worked with the distros and package managers to make Steam work directly from the package manager, so you don't need to go and download it from Steam's website. That's what cavemen using Windows Neanderthal Technology (NT for short) do. Ok, snaps might still be a headache, though I guess more because of Canonical than the snap system itself. If that's the only system not supported, it would still be better than now. And I suspect a lot of this work would be front-heavy - you work hard to integrate it once, then it's easy to maintain afterwards.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@alomac8976 Those slow startup times had largely been addressed, as of the last video I watched about it. Also, if waiting literally 2 extra seconds when you open a program is that much of a concern... then I feel sorry. The running performance was already on par with flatpaks, IIRC.
The other side, which critics only briefly mention, is that you can have anything as a snap, including daemons, system tools and, I think, even libraries. That is, some things can be snaps but not flatpaks. From what I remember, a good example of that is Nextcloud, the cloud server software, which can be installed trivially as a snap.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
@formdoggie5 I don't know which CPU this X58 system has, but since you said it's 12 years old, I think it's not from Intel's Core lineup.
In any case, I can bet that a latest-gen processor, be it Intel or AMD, will beat your X58 @ 5.3 GHz in single-threaded performance while running at 4.0 GHz, and while consuming at most half the power.
Thing is, increasing the frequency gets exponentially harder, and it's kind of a hard cap now.
However, processors kept improving by making the transistors smaller and packing in more of them. And by having more of them, they now have more cache, more instructions per cycle (literally, not just as a result of better cache), better branch prediction and other small improvements, including non-performance things like better reporting and automatic over- and underclocking to save power or to increase performance without hitting a thermal throttle.
All in all, there have been real improvements in CPU hardware. They're just not so evident. Compared to a CPU from 12 years ago, I'd say the new ones are about 50% faster/better at the same clock speed and core count, while having other significant benefits (mostly power consumption). The increases seen in the '90s - going from 33 MHz to 1000 MHz in about 10 years (I guess, too lazy to check exactly) - were much easier to see. But if we factor multi-core into the performance benefit, the increase has still been quite good.
1
-
1
-
That a big outlet a) can't pay a small fee to another outlet, like they're not in the same boat (sure, you can argue that it wasn't one of the bigger ones, so up until now they had no need to pay it - but then how did they find out?). Also, they should have some form of budget for things like this that doesn't need pre-approval. After all, these sites are several bucks a month; even if you subscribed to all of them it would be, what, several thousand dollars? Hardly something to be concerned about at a big company.
And b) can't fuU^%^##$ng do a simple translation and has to use things like Google Translate. From German to English, of all languages. I swear, they should be laughed out of the room. They don't deserve to be in the industry, let alone be a big or medium outlet in it. But it's not like journalists in general have high standards... sigh
1
-
1
-
1
-
1
-
1
-
@terrydaktyllus1320 Everybody reading what you write - and you too (because you wrote it) - would make much more productive use of their time if you stopped spewing bullshit about things you have only very surface-level knowledge of.
In your fantasy cuckoo land there are these "good programmers" who somehow never make any mistakes, whose software doesn't ever have any bugs.
In the real world, everybody makes mistakes. I invite you to name one, just one "good programmer" who doesn't ever write software with bugs. If it's you who's that person, show me the non-trivial software you wrote that has no bugs.
And if you're going to bring up the "I didn't say that good programmers don't make mistakes or don't write bugs" argument, then I'm sorry to inform you that Rust, and more evolved languages in general, were created exactly for that. Programmers, good AND bad, especially on a deadline, need all the help they can get. That's why IDEs exist. That's why ALL compilers check for errors. A language that does more checks, like Rust, but still gives you the freedom to do everything you want, like C, is very helpful. Unlike your stupid elitist posts that "languages don't matter".
The bug presented in this video is a very classic example of something that would not happen in Rust.
With people like you, we wouldn't even have had C; we'd still be on assembler. Whenever there's something about programming languages, don't say anything, just get out of the room and don't come back until the topic changes. Hopefully to one that you actually know something about.
1
-
1
-
1
-
1
-
1
-
1
-
@theo949 Dude, you got triggered for no reason. The video is not making a statement that EVs are not there yet; it's an observation that they're not, and it tries to explain why.
You do have some points, and I think that Wendover missed some other factors that influence the tipping point: people's perception of cost, range and charging needs with EVs.
As more people learn how they work, their needs in the EV context will also change, and surely the adoption rate will increase, eventually going over the tipping point.
In any case, I'd say that production and infrastructure are already going well and expanding very nicely. What we'll really need soon is a way for electrical grids to sustain the massive increase in power demand as people start converting to EVs.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
While I agree in general with the video's ideas, I don't fully agree with the argumentation, and I want to challenge, or maybe add to, some of the things presented here.
I mean, on "getting stuck" vs "learning new stuff, like distros, DEs etc" - at which point is learning another thing, which is mostly the same as what you already know and brings barely anything (if anything) new to the table, actually worthwhile?
What I mean is, having the growth mindset is important, but only at the personal "global" level. Meaning, a person should always have a growth and learning mindset, but the focus and topics can (and in most cases should) change over time. Which means that there are many legitimate cases and reasons for simply wanting something you learnt to stay the same, so you can still be proficient in it and still rely on it, so you can then focus on SOMETHING ELSE - using what you learnt purely as a tool, with 0 further investment in learning about it or about very similar alternatives.
Distros and DEs and so on are a perfect example of this. After you've tried several and know what each is about and what you like most, not wanting to hear about any other distro or DE (unless something truly revolutionary appears) is a perfectly legitimate mindset, when you want to focus on learning something totally different.
"Hey, did you see that new KDE hammer? It has a nice glossy look. Do you want to learn how to use it and maintain it so the gloss doesn't wear off?"
"No, I like my Gnome hammer just fine, I don't care about that stupid glossy look"
"Duuude, wth, why are you so stuck in the past? You need to have a growth mindset, otherwise your mind will rot"
It's like being in the 8th grade, having learnt quadratic equations, and then hunting down every quadratic equation you can find and challenging yourself to solve them. Yeah, it's good to do that for a while, so you know them by heart, but if you keep focusing just on them, you'll never learn what integrals are, or group theory, and so on. I'd say that checking out new distros and DEs (again, after you've tried a good bunch of them and know what they're about) is analogous to this.
After a while, simply trying new stuff that is fundamentally the same as what you did before is not so different from jerking off. You get a sense of accomplishment, but you haven't really advanced.
There's also this: in my case I do have some sort of change-and-update anxiety. I know, maybe I'm too far into mind-rotten-cannot-accept-change-does-not-challenge-himself territory. Thing is, I like perfecting things. I like optimising things. Using a program, seeing it do things faster, changing it so I have one less click in a workflow I use rather often. So, for this, I do like the peace of mind of surveying the "market", selecting what I think is the best (be it a distro, desktop environment or window manager, shell interpreter, text editor, video editor, audio editor, video player, audio player, browser etc), sticking with it, and starting to optimise it. Usually through configs, then plugins, then maybe even patches or actual code changes done by me.
And if everybody starts using another program because a new one looks nicer and is new and has 2 more features, then I get very... sad and... I don't know, anxious or nervous? I mean, if you look at it, the old program could rather simply be upgraded to have those 2 extra features, and its looks can be configured to be almost as good... but people don't care. Then some big company or group decides that what they do will only work with this new program, because they like it, so you can't use the old program, at least not without massive pain. So at that point you're like... "great, basically I've now wasted a lot of hours and I have to switch to another program" - one which, bar that new feature, is strictly inferior, because it's slower and you need more clicks to do your thing. Of course you can learn this new program too and change it to your heart's delight. And when you reach the same level of optimisation you had on the old one, BAM, another new program appears to do the same thing, only (ever so slightly) differently. And history repeats.
There's the UNIX philosophy that each program should do one thing only and do it well. It exists exactly for this reason: so you only learn it once. When you don't have to relearn the same things over and over because they got changed, you can start building bigger things. I don't regularly check out and learn new letters, digits, screwdrivers, hammers, pencils and so on.
So, yes, I would like to stay on Gentoo for 20 years - preferably 100, if I live that long and nothing better appears. Same with other apps. Not because I gave up on the growth mindset, but because I want to learn about other stuff now, like driver programming, electrical engineering, plant farming, the French language and so on. No, I couldn't care less right now about the new Arch spinoff. And I think that's totally fine.
Lastly, @DT, I have a genuine challenge. Since you said recently that you actually haven't used Windows at all in more than a decade, here it is: actually try Windows and MacOS and note at least 5 good things about each (and not silly things like "it's good because it's popular" or "the rounded corners are really nice" - actual good stuff). This isn't a "ha, got you with your own words" kind of thing. After this long a time, you might not be aware of what the others did and might not know exactly where Linux stands. Trying Windows and MacOS should give you good insights into things Linux could improve upon. And it might make you like Linux more. Errr, GNU/Linux, my apologies.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
OhMyGaaawd, I can't believe you missed the point of that C vs Go comparison at the end. It is a VERY GOOD comparison.
Why? For the very reason you mentioned: that you wouldn't use C and Go in the same places. Initially there was only C (ok, a few others too, doesn't matter). But because it is only good at performance and low-level control, and sucks at everything else (well, arguably), Go was invented as an alternative. Now Go occupies its own niche because C couldn't properly cover that part. Compared to C, Go is decent at everything; it's just not perfect or the best at any one thing. And since in some contexts running speed is not the most important thing, but ease of development is, people prefer Go's "decent" over C's "horrible" in those areas.
In regards to his SQL complaints... I'm still not sure what he wants/needs. Apparently statically verifying that a query works is one of them. Ok, that can be done with SQL.
Maybe he wants something like an ORM-like library for a language directly, bypassing SQL? Like, you only call functions and everything runs directly, or is sent from one server to another directly as binary data? I guess that would be something. I remember that at the beginning there was a screenshot where I think he wanted to highlight that parsing the query took significant time. Which, with something more direct, could be bypassed. Can't say I dislike the idea of having the option to send the structures and data exactly as the DB works with them, so no more parsing is required.
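The "skip the repeated parsing" idea is partly available today via prepared/parameterized statements. A minimal sketch with Python's stdlib sqlite3 module, which caches compiled statements internally so constant query text is parsed once and reused (the `users` schema here is made up for illustration, not from the video):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The SQL text stays constant; only the bound parameters change, so the
# statement can be compiled once and served from sqlite3's statement cache.
insert = "INSERT INTO users (name) VALUES (?)"
for name in ("Ana", "Bob", "Cora"):
    conn.execute(insert, (name,))

rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
print(rows)  # [('Ana',), ('Bob',), ('Cora',)]
```

What this does not do is skip the result serialization/deserialization between client and server, which is the part a fully binary, SQL-free protocol would additionally remove.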
1
-
1
-
On the complaining about the company part - I strongly disagree.
First, just because you chose a [whatever] owned by a company (when you had the choice of a [whatever] made by a community), that DOES NOT mean you must agree with everything they did or do, ESPECIALLY future decisions, and ESPECIALLY those different in nature from the past ones. The company released that [whatever] to the public for a reason - it wants users to use it - so it's normal to provide feedback on it. Which, when it works below expectations, comes in the form of critiques and complaints.
In the end, the company does what it wants; then, if it does something stupid, people complain, and if the company doesn't fix it, it shouldn't be surprised when people stop using its product(s) and/or stop caring about the company. It's normal stuff. People complaining online is normal behaviour and a way for Canonical to find out that they're not doing the best they can. And if they're slow, they'll also see it in the number of new and existing installs. Of course, some people take the complaints to toxic levels. That is a problem. But complaining in general is totally fine, if it's reasoned (and if the user tried to do something about it, made some minimum effort of researching and troubleshooting before starting to complain). It's basically saying "Hey, it's your product, but if you want to keep me as a user/customer, then you should stop doing X". How is that wrong?
I have to say, on the take that people shouldn't complain about a company because it's within its rights to take decisions and make changes... well, complaining is not the same as killing/cancelling/dismantling/disowning/forcing the company. That would, OF COURSE, not be ok (and would be illegal). But complaining is 100% within the users' rights, just as the company has the right to do whatever it wants with its product.
Also, I find the popularity argument weak. Popularity means that something was good YESTERDAY. It's not instant, so it cannot reflect what [whatever] is today. If something has 80% popularity today and the owner makes a stupid decision and tomorrow it drops to 70%, well, guess what: it's still the most popular, but it's clearly not the best anymore. Just that some people realize it on the spot, while others bring up the popularity argument. By that logic Windows is excellent, because it's crushingly popular on desktops and laptops. But we all know it's not that stellar, even though it had good moments and is still a good operating system, despite its annoyances. And if history and popularity didn't matter, and everybody started from 0% today? I doubt Windows would pass 40% market share.
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
1
-
I almost hate Microsoft with a passion, but even I couldn't spew such gigantic BS as claiming Win 11 uses almost 8 GB of RAM while idling. Maaaybe if you have 128 GB of RAM and also a lot of extra bloatware, then maaaaybe you can reach that.
But a clean Win 11, even with 128 GB (it matters how much RAM you have in total, as Windows will try to prefetch and cache things if there's spare RAM, which is GOOD), doesn't reach that much, I think.
My Win 10 sits at about 3.6 GB of RAM on startup, and I have 64 GB in total. And it's not a clean installation; this Win 10 was never reinstalled, it's still the same one (upgraded to Pro at one point) that came with the laptop almost 8 (yes, eight) years ago.
-
Holy crap, I really hate to do this, but the calculations at 9:00 are really off.
Let me put it another way. Let's actually compute the energy needed for a 36 t vehicle going at 60 km/h for 901 km (560 miles for the cavemen out there).
I'm using the computation method shown in Engineering Explained's video named "Why Teslas Are Bad At Towing (Today)" (search it here on YouTube) from 4th December 2019, with the following assumptions: a drag coefficient of 0.6 (what I found is typical for a truck, after a quick search), the same 3.7 sqm frontal area as in EE's video (40 sqft for the neanderthals out there that somehow know how to read), which is almost 2 by 2 meters, and a rolling resistance coefficient of the same 0.015, which actually seems pretty conservative; I've found lower estimates after a quick search on the web. Also, these 901 km are on completely flat land.
Formulas used: in short, the total force needed is Fa (force to overcome aero drag resistance) + Fr (force to overcome rolling resistance).
Fa = 1/2 * p (density of air, in kg/m^3) * v^2 (speed of the vehicle, in m/s, squared) * Cd (coefficient of drag resistance, unitless) * A (frontal area, in m^2)
Fr = G (vehicle's weight, which is mass * g, in N) * Crr (coefficient of rolling resistance, depending on tires and the surface, like asphalt, unitless)
Then the force is multiplied by the distance to find the energy in Joules, which I then converted to kWh, since that seems to be used more in these kinds of discussions.
So, after doing the calculations, the energy needed is 94.94 kWh to overcome aero drag and 1325 kWh to overcome rolling resistance. That's a total of 1420 kWh to move a 36 000 kg vehicle on flat asphalt for 901 km. And taking the commonly used estimate of 0.25 kWh per kg for the energy density of Li-ion batteries, we end up with a battery that weighs 5683 kg, so 5.7 tonnes.
I wouldn't say I used optimistic numbers. And even for this truly unneeded long range of 900 km, the battery is far lighter than the "8-16 tonnes" nonsense.
If the truck went at 100 km/h, it would need 1589 kWh of energy, or 6358 kg of batteries.
If the truck went at 60 km/h but just for 400 km, it would need a mere 630 kWh of energy, or 2523 kg of batteries.
As if that wasn't enough, there's one more thing that makes this even better, and it can be seen in the Honda Accord vs Tesla Model 3 comparison: the car's weight without the batteries. The Tesla is about 300 kg lighter than the Honda. That's because the electric motor is smaller and there's other stuff that's simply not needed. I expect the same in a truck, with something like 1000 kg shed from a normal diesel truck when making it electric. So the extra weight will be about 3-5 tonnes, and probably the range a bit smaller. But that still leaves around 15 tonnes of payload, or 25% less, which might be outweighed by the other economies of going electric. In other words, I can totally see this working.
Please Thunderf00t stop these cringe calculations and most importantly, stop giving musktards fuel for refuting you or your points and videos. You represent the science community and this 8-16 tonnes of batteries bullshit is just... sad.
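The numbers above can be reproduced in a few lines; here's a sketch using the same assumed inputs (Cd 0.6, frontal area 3.7 m^2, Crr 0.015, air density ~1.225 kg/m^3, 0.25 kWh/kg battery energy density):

```python
# Back-of-the-envelope energy & battery mass for an electric truck,
# using the assumptions stated in the comment above.
RHO = 1.225   # air density, kg/m^3
CD = 0.6      # drag coefficient (assumed typical for a truck)
A = 3.7       # frontal area, m^2
CRR = 0.015   # rolling resistance coefficient
G = 9.81      # gravitational acceleration, m/s^2

def truck_energy(mass_kg, speed_kmh, dist_km, kwh_per_kg=0.25):
    """Return (energy in kWh, battery mass in kg) for flat ground at constant speed."""
    v = speed_kmh / 3.6                         # convert to m/s
    f_aero = 0.5 * RHO * v**2 * CD * A          # aero drag force, N
    f_roll = mass_kg * G * CRR                  # rolling resistance force, N
    energy_kwh = (f_aero + f_roll) * dist_km * 1000 / 3.6e6  # J -> kWh
    return energy_kwh, energy_kwh / kwh_per_kg

print(truck_energy(36000, 60, 901))   # ~1420 kWh, ~5680 kg of batteries
print(truck_energy(36000, 100, 901))  # ~1588 kWh, ~6350 kg
print(truck_energy(36000, 60, 400))   # ~630 kWh, ~2520 kg
```

Nowhere near 8-16 tonnes, even for the 901 km range at highway speed.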
-
#AskGN: for specific games, would disabling Hyper-Threading AND some cores on a 10900K, coupled with overclocking the cores, cache and memory (like knowing that game X only uses 8 cores, so disable HT on all of them while also disabling 2 cores), significantly increase the performance and/or lower the power usage & temps of that specific CPU?
If I got your attention, how do you think a 10900K in a 4c/8t configuration would compare to a 6700K? Or in a 4c/4t configuration vs a 6600K? I'm asking because I'm curious about the overall advancements within the Skylake generation of CPUs. Just from the cores & threads configuration, the 10900K should perform much better, from the power budget alone. But I'm also curious whether the frequencies would be the same if the power targets are matched. Or vice-versa, whether the power usage is the same if they're locked at the same frequencies. In this last scenario I'm curious about the performance too, since I think the power usage will be lower for the 10900K, effectively meaning that it has all the same limits, so the IPC can be compared (if there's any improvement).
I have to say, I'm very excited about the overclocking improvements that Intel added to this 10th gen (...."gen"....). Too bad they're still a bit too expensive and behind in IPC and power draw. Intel really had a lot of bad luck in not being able to deliver 10nm manufacturing even now, 3 years later (or is it 5 years later than initially announced?). The laptop CPUs looked interesting, with a pretty good IPC increase, only held back by limited cores and frequencies. I was hoping they would have something by now, but I think that on desktop 10nm will be skipped entirely.
Also #AskGN, what would Intel's 14nm, TSMC's 7nm and the other 7nm variants be called under the new metrics proposed by... that consortium whose name I forgot and I'm too lazy to check? I even forgot the metrics they proposed to use. I only remember that there were about 3 main ones, and that one of them was transistor density. Anyway, seeing them in your videos and hearing them would get all of us used to them sooner rather than later :)
If you read this far, cheers!
-
However boring it may be when you see basically the same install for the 25th time (not an exaggeration on DT's videos), it is important for bringing new users to Linux. We've simply grown out of this phase, I'd say. DT still teaches 6th grade maths, and we're all here because we like DT, even though we're 9th graders now.
So, getting back to the point, whenever an important distro is released, it's good to have coverage of it. It doesn't matter if other people are doing it; you, as a content creator, should do your thing, your spin on it. If everybody covers it, YT picks it up as an important topic and it can reach those outside of the bubble. And when a new-to-Linux user is curious, he should be able to trivially find beginner-friendly videos for exactly the latest version of whatever he's searching for. And that's exactly what DT is doing. A new user will not know that there are other distros, or that the version of the same distro from 5 years ago installs the same way. If you only find old videos on a topic, that gives the impression the topic/thing is abandoned.
-
@ishashka I was thinking of trying it out on my 4.5-year-old laptop too, with a 6700HQ. Thing is, laptop CPUs are actually a bit better on efficiency than normal desktop CPUs. Sure, lower performance, but much less power draw too. In the end, after seeing what somebody computed in another comment here, and knowing that I don't have a new, efficient (7nm or 10nm) CPU, I realized I'd better not waste time trying it.
Bottomline, you might have the efficiency, but if you consume $4 of electricity and make $5 of Monero... why bother ? :)
-
Hey, nice pointers! I have a question regarding points 1 & 2, as I was already thinking of using ramfs to handle compiling so I don't wear out the SSD. And I was wondering if it's possible to do the compiling on one or a series of Raspberry Pis. That is, instead of using my beefy laptop for that, delegate it to one/a few R Pis to 1) still be able to fully use the laptop with no interruptions or slowdowns and 2) use (much) less power.
But I worry that R Pis might be too weak or have too little RAM (max 8 GB per Pi) for that, especially if I set them to use RAM instead of the internal storage. From your experience, is this feasible? Or are the Pis too weak / too low on RAM to make it work? I'm ok if one Pi only does a package at a time, even on a single core, and if it takes 20 hours for one of the bigger ones, as overall I think I'll still be able to stay up-to-date.
-
You can't call something the "greatest OS of all time" if it doesn't work in the majority of devices (phones in this case) on the current landscape. Fight me!
For real though, I want it, but I don't want to buy a Pixel phone, neither new nor second-hand. And saying it's supported only on Pixels because they have extra hardware security features - bullshit! I mean, I don't challenge that they do; I want to point out that it's a very weak argument. Maybe in alpha and pre-alpha stages it would be ok. But for the full release, it's MUCH MUCH more important to get people on board, even with just 90% of the privacy and 50% of the security (still much better than what the original phone has), than to have only one line of phones, limiting the exposure to 10% of the potential users. That's bad prioritisation at this point, at least if they care more about raising global security and privacy.
It's not an easy decision to make, and I can't fault them; I'm just saying what I think would be more important (maybe in a bit of a harsh manner, but, meh). I do think a lot of people would rather actually be able to use Graphene on their current phone, without the full security suite, than have the GrapheneOS team develop 3 extra security features but keep the Pixel limitation (which means people have to wait longer until they can try it, or switch to a Pixel phone - and maybe lose some features, like Louis does now).
Other than that, I agree, it's awesome! Won't try it on a Pixel, sorry. Can't wait for it to become more popular and branch into other models, hopefully Fairphone and Pinephone.
-
That's so blatantly false and wrong (that these kinds of pushes are necessary) that I'm doubting the ability to reason. I'm not referring only to the OP's comment, but to many who defend it too.
First, there is a GIGANTIC difference between
a) forcing users to try something new and giving the option to use the old, which is known to work
and
b) forcing users to try something new and if they're missing something ... well tough luck ? How is that not OBVIOUSLY irresponsible ? What are they supposed to do, stay on the old one ? Go to a different distro or a different spin (which might be more different than another distro but with KDE) ? Well then, don't be surprised if they don't come back.
Second, the reason that "if they don't do that, people would not try or switch to and it will not evolve" is also blatantly false. Wayland now is progressing very nicely and fast. Yet NOBODY forces Wayland as the only option. Proof that removing options and functionality from users is not needed (DUUH). Doing that will only alienate the users and feed the Wayland (or whatever is pushed) haters. It's a lose-lose situation by infatuated people who care more about being/feeling bleeding edge than providing and caring for their users. It adds, I would argue, nothing, while raising all kinds of concern and stress and conflict, like this very thread. While waiting until Wayland is truly ready and then doing the switch, nobody would bat an eye.
You can see they're searching for excuses rather than actually caring from that statement that they'd rather do the switch on a major version change. Because that makes sense, it's something to be expected. But they didn't think (far enough ahead) that removing it now causes 10 times the distress of removing it in, say, KDE Plasma 6.4.
-
@elmariachi5133 "But I don't care what it brings for developers, when I am talking from a user's perspective"
Well, Rust will add absolutely nothing to the users, directly. But what you're saying is that you don't want the tree cutters to get chainsaws as an upgrade from simple axes, because they won't bring you better wood for your stove. Yes, it's THAT silly.
You could say that lumberjacks with chainsaws will bring more wood, or deliver it faster and probably, on average, cheaper. Well, that's the same with Linux's kernel too. With Rust instead of C, it allows developers to write & get to a secure & stable state faster. But it can't be quantified how much faster, so no promises can be made. For some things it might really be the same time. Maybe even worse, who knows. But on average, it should be faster, as more complex things will be able to have statically-determined memory safety, allowing the developer to not spend many hours checking and testing that (well, where it can, some things in the kernel have to be made in unsafe mode). Or it might allow the developer to release something that is secure, instead of releasing something in the same timeframe that will have bugs.
-
It's one of the most customizable and most optimizable (if I can say that) OS.
With those USE flags you can compile the programs you use to a) target your specific processor's architecture (and also use things like O3) and b) only have what you want & use. For point b), there are big portions of some programs that you can effectively cut out and simply not have. In some cases that also makes some apps use less RAM, but that's usually a very minor concern.
Also, since it's source based, it's very free-software friendly.
Lastly, you don't really NEED to be on the cutting edge with all apps. Some of them you can do with not updating/recompiling for months, if you don't need the updates.
Lastly lastly, to answer your question... it is suitable for desktop use. If what you mean is whether it's also usable as a main-computer daily driver: yes, that too, but it will take you some time until you can do it effectively. Not recommended until you're familiar and comfortable with Gentoo, so you know exactly what to install, when and what to update, and how to solve problems and conflicts when they arise.
-
Wait, the math section at the beginning sounds/looks pretty wrong. First, the theoretical maximum given is 1.36 kW/m^2, which, if I'm not mistaken, is the solar power that hits the top of the atmosphere. On the ground it's more like 1000-1120 W/m^2, and that's in very good scenarios. And the computation of how much a solar panel would produce doesn't include the panel's efficiency. So, multiplying that by 25% efficiency, it's something like 300 W/m^2.
Next, in the more realistic scenario, 340 W/m^2 is THEN multiplied by 55% irradiance (where is that figure from? I guess it's the average for the whole day, since the sun can't be at zenith all day. That's why the average daily energy at sea level is given as 6200 Wh, which would be only 6 hours of full sunlight; it's actually more hours, but most at reduced output), THEN multiplied by the panel efficiency? From what I know, the usual figure is around 340 W per panel, and panels are usually 1.7 square meters, so that's about 20% efficiency, assuming 1000 W/m^2. I mean, that's what most people have: 300-400 W per panel, at around 200 W/m^2 of output, not that measly 37.4 W/m^2. And if we're taking averages across the day, then it shouldn't be something expressed in hours.
All in all, seems it's comparing seeds to apples.
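The panel sanity check above fits in a few lines; a sketch, assuming ~1000 W/m^2 of ground-level irradiance in good conditions and the typical panel figures mentioned (340 W rating, 1.7 m^2 area):

```python
# Sanity check: real-world solar panel output per square meter.
GROUND_IRRADIANCE = 1000.0  # W/m^2 reaching the ground, clear sky, sun high

def output_w_per_m2(panel_efficiency):
    """Panel output per square meter at the assumed irradiance."""
    return GROUND_IRRADIANCE * panel_efficiency

# A ~20% efficient panel gives ~200 W/m^2...
print(output_w_per_m2(0.20))  # 200.0

# ...which matches a typical 340 W panel of about 1.7 m^2:
print(340 / 1.7)              # ~200 W/m^2
```

Either way you slice it, real panels land around 200 W/m^2 in good sun, nowhere near 37.4 W/m^2.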
-
That is an excellent video topic.
In case there isn't a video soon, or at all here are some tips for switching to Linux:
- some things continue to be a hard NO on Linux. To my knowledge the biggest are: Adobe Photoshop and I think their whole creation suite and games with anticheat (some work, but most don't). If you have to use/play these, just stick with Windows, at least for the foreseeable (1 year+) future. Things evolve pretty fast nowadays, so if Linux isn't viable today, trying again in one year you might find that a lot of issues are solved (not guaranteed, of course).
- be prepared that Linux is different and still has some things lacking, for many different reasons: some hardware simply doesn't have drivers made for Linux, as the manufacturers didn't put in the effort (justifiable or not), and might work subpar or not at all. Or if something just launched, the Linux drivers might arrive several months later. In general stuff works, the only important potential blocker being some Wi-Fi cards; the rest is more peripheral or niche products. Same with software, as I said above: some things have to be done differently. In general, if you search online or on YT, you can find what works and what doesn't, and if something doesn't work for you, how to do it. But try to use recent sources when you can.
- if the above aren't a big concern (or not at all), the first thing to try would be to make a bootable USB stick (easy to do) and reboot your computer into that Linux and see how it is. Check if all the hardware works, especially Wi-Fi if you're on a laptop. Graphic card drivers might be a nuisance depending on the distro, so if that doesn't work that well, don't give up yet. Then try to install the browser(s) you usually use and then the applications / programs / games you need and use often
- that being said, here are some Linux distros worth mentioning. Before that: a Linux distro (short for distribution) is a fully working operating system (OS) that uses the Linux kernel (yes, technically Linux is just the OS kernel). A distro comes with many things on top: how it looks and how it can be customized, what package manager it uses (a package manager is basically the App Store / Google Play you have on a smartphone; you can use it directly to install all/most of the software you use and to keep it updated easily), and how it generally works and feels. So, here they are:
- Linux Mint - stable (doesn't change often, focused on things working and not crashing, slow/delayed to adopt new things), friendly to users, beginners and non-technical people. My recommendation to try first
- Ubuntu - the OG beginner-friendly distro, now fallen a bit out of favor in the community. Still very capable and compatible with lots of hardware and software; as it was the de-facto most used Linux distro for many years, most hardware and software devs made sure their stuff works on Ubuntu first (and probably only there). Still, it has some caveats; if something doesn't work, do try other distros. Also pretty stable
- Fedora - focused on being quite cutting edge, sleek and aimed at work/workstation usage. Not a personal fan, but it's also a very popular distro that should work very well in most cases, so it's also worth a try
- Pop!_OS and TuxedoOS - made by Linux computer manufacturers. Pretty ok/decent, I'd say similar to Linux Mint. Also worth a try
- Manjaro, EndeavourOS - more cutting edge, somewhat less stable, but that doesn't automatically mean your system will crash daily or something. Pretty good as beginner distros
- Nobara, CachyOS - gaming-optimized distros, also fairly beginner friendly
- ZorinOS - focused on having a very Windows-like look and feel. Solid, but I didn't put it as the first option as I feel it's a bit behind now
- Debian - very stable, which also means it sometimes is way behind the latest versions of software and drivers. Not my choice for a beginner, though it's used in many situations, good to know about
- Arch - advanced distro, skip it for now (also here would be Gentoo if you hear about it)
- NixOS - can't say if it's beginner friendly or not
- the rest I either forgot or don't recommend for a beginner. If you're technical you can try the more advanced (aka, any distro after all) ones, preferably in a virtual machine first.
-
Not that much... Tesla has some top-of-the-line things in their cars. None of which were invented by Musk, though I give him credit: him being there, pushing everybody to slave-rate levels of labor and attracting funds certainly made Tesla what it is today. Without him, it would either be a small niche firm, or it would've gone bankrupt.
SpaceX... I don't know, the claims that rockets will get so reusable, so cheap to launch and so fast to reuse that you could get a launch every day... and what we have now... quite the difference. Getting to Mars was supposed to be... 2024? Or even sooner? Now, if I'm not mistaken, it's supposed to be 2029. I would bet it won't happen even in 2030. First because it's stupid to send people there so early; they'll have nothing to do that a robot like Perseverance can't do already. But maybe we'll at least see SpaceX send a rocket with a robot to Mars by then, though I'm very skeptical they can do that either by 2030.
But there are a lot more things that are pure con-man, snake-oil-salesman pitches. FSD? It keeps coming "next year" since... 2015? The name should also be illegal; it's so far from full self driving, it's not even funny. Robotaxis? Yeah, right. Cybertruck? Nope. Tesla Semi? Clearly not in 2019. LA hyperloop? Teslas in tunnels (with drivers) - absolutely not profound. The tunnels themselves were also, in short, very small in diameter, dug at normal prices in not-uncommon time. Absolutely nothing revolutionary. From this LA thing alone, Musk should be laughed out of the room when he even starts mentioning revolutionizing transport. Tesla bot? Get outta here.
Splitting so there's not a giant wall of text. Let me continue. The Tesla solar roof acquisition? Quite scummy. Starlink? Interesting idea, but extremely polluting and impractical if you think seriously about it. It will never fully roll out because of its inherent problems.
In the end, the only companies that border on revolutionary technology are SpaceX and, briefly, Tesla. Yawn!
Just how much and how many times must someone be proven wrong and/or lying for people to realize he's a conman? Just because he got involved in some companies that actually have products (even at massive losses, like SpaceX) doesn't mean he's not a vaporware salesman. He has promised a lot of things that range from somewhat impractical (but who knows, maybe it works; a genius could certainly make it work) to downright bullshit. He straight up lied multiple times about when things would get done/released (and a good bunch of them will never be practical). How is that not a vaporware salesman?
-
@mynameismatt2010 Correct, if you're only getting a date wrong, it's not vaporware. But if the product is still not released, there's still no guarantee that it ever will be, so it still has the status of vaporware.
Also, in this context: if a company announces something that sounds revolutionary, says it's ready next year or in 2 years, and finally releases it in 10 years, at the same time the other companies release the same thing - in other words, nothing revolutionary - then while it's technically not vaporware (the product), can we agree that it's also a lie? And that the promise of delivering something revolutionary in the near future is... vapor? That is, it doesn't materialize as advertised?
Prime candidates for what I said above are the Tesla Semi and FSD. Unless they go bankrupt or something, these will surely come to market some day. But if they arrive at about the same time as or even later than the competition, and also years and years later than promised, then I'll still label them a con job, even though it's not on the same level as the things that never get released.
I'd say that having a manned mission to Mars is also here, but it doesn't impact the normal people. Gullible investors... I don't care if they don't do their research.
-
@ Possible. But it's the only way I'd come back, given how easy it would be for them to kick me out on a whim. If not, well, I wish them happy hunting.
Of course, it's easy for me to say this; I'm not in that position. I'm not even in the US. Here in Europe, we actually have some laws that protect against this type of abuse, and largely, they work.
Also, if an individual has high debt or is really struggling financially for any reason, I guess that, yeah, it would be hard (or too risky) to negotiate like that. I hope they're not in that situation though.
-
YES! So happy to see this, though I'm on a laptop now, with few chances of building a PC anytime soon. But I sooo wanted to see something like this.
If the passive cooler is made properly, with good interfacing (so to speak), so it's not constrained to a particular board/chipset/CPU, then you can literally use it for 50 years! Now that's ECO, in my view. Rather high initial cost, but 0 running cost (power draw), 0 maintenance and 0 landfill for the rest of its life makes it the best option for the long term. It might need a bit of work and resources for new CPU sizes/platforms, but it should still be uncontested.
Of course, it's only feasible for moderate-power CPUs. But an R5 5600X is already at 65W and is quite versatile, so it's not that low-end.
-
For GN: if you have the time and curiosity... since you can disable HT per individual core, and, if I'm not mistaken, each individual core can also be disabled completely, could you test a handful of games to see if using these could noticeably increase performance (in a handful of games; theoretically each will differ from the others by A LOT) or decrease power & temps? For example, in GTA 5, the CPU could be set to only have 8 cores, all without HT. I realize that this game is a pretty bad example, since it has that stupid 187 FPS limit, but you get the idea. Disabling HT should help performance because of less resource contention, while disabling a core completely should help with power draw & temps and maybe latency?
Related to this, I'm personally very curious how the Skylake architecture/platform/manufacturing improved over all these years. Could you take a 10900K and set it to have the same a) cores & threads, b) power targets, c) clock frequencies and d) all combinations of the above, against a 6600K and 6700K?
So, for example, a 10900K configured to match the 4/4 and 4/8 cores/threads of a 6600K and 6700K, also at the same frequencies as those CPUs: how much less power does it consume? Does it perform the same? In another test, with the same cores/threads and the same power limits, what frequencies and performance does it achieve? Are those also the same?
As a reminder, the 10900K should have some security fixes directly in hardware. I'm not certain whether, right now, a 6700K would have the security fixes in software or not at all.
-
Interesting thing about the interest rates. But I have to wonder... the interest rates are low in order to stimulate the economy, right? And all those imports are contributing to economic growth, right? Is there a possibility that Turkey's economy grows enough that it jump-starts (well, jump-recovers) the... uhm... situation? As in, at some point it allows the interest rate to rise while still benefiting from high economic growth, say 5-10% per year, while inflation gets down to 2%, so that in several years the effective power of the lira is restored, while the country is economically/industrially much stronger?
I guess I'm simply asking whether all this deficit is actual investment money or not.
-
@Gamers Nexus In the comparison to the RTX 2060, wasn't the initial RTX 2060 priced at $349 MSRP, effectively higher than the launch MSRP of the RTX 3060? (the 2 years of inflation is negligible). The RTX 2060 TU104-150-KC (what EVGA calls the RTX 2060 KO) relaunch (if I can call it that), which was in January 2020 at $300, is... I would say different. It's one year later, which is one year ago, basically with adjusted prices... Also, the RTX 3060 has more VRAM, which will become significant in the following years if games continue on the same trend.
And I'm not sure if it consumes less power than the RTX 2060, as that was missing from your power charts. It certainly consumes much less than a GTX 1080 Ti :))
Overall I agree on it being just nice, not wow, but I don't fully agree with the stagnation part. It's still better in price/perf and perf/watt, even if by too little. And simply the fact that it was compared to that Jan 2020 release of failed 2080s is... just weird... I feel like it's cherry-picked.
-
I guess I do have a bit of an uptime fetish. Several things to mention:
- in Windows 10, with quite some hassle and at least the Pro version, you can actually control the updates. As I write this I have 99 days of uptime. I didn't actually want to reach this high, since I'm effectively behind on updates; I cannot have (most of) the important ones without a restart. But Windows, as I configured it, pesters me neither to do the updates nor to restart. And usually every 1 to 3 months I do the updates and restart.
- having a high uptime is a sign of a properly configured system. It shows that you don't have weird leaks or simply bad software. Same with having a system that you don't have to reinstall (or even reformat the drive for). I've never understood the people who came to the conclusion that you have to format + reinstall Windows once per year. I've always reinstalled because the old one was too old (like from Windows 98 to XP, from XP to 7, from W7 to W10), not because it wasn't running ok. Windows is bad, but not THAT bad. Anyway, I'm going off topic.
- for me at least, the point of not restarting isn't the time to reboot; that is below a minute. It's also reopening ALL the apps, in the same exact state that I left them. Still something that should take below 5 minutes, but I simply like not having to. Some apps don't have a "continue exactly where you left off" feature when restarting them. For this reason, I usually hibernate the laptop instead of shutting it down most of the times I'm carrying it around (which has been less than once a week since the pandemic started). I do acknowledge that it's mostly convenience on my part, not actual need.
- having the computer on 24/7, if on low power (and low heat), will not damage the components much, if at all. One power cycle might actually do more damage than 50 hours of uptime (as I said, if the uptime is in a non-stressful manner, with no overclocking and no large amounts of heat). As to why you would do that: some have things open, like torrents, Folding@home, or mining. In my case, when I leave it running while I'm sleeping or away, and I'm only keeping it on for torrents, I put it in a custom power mode, which is low power but with everything still turned on except the display. This way it consumes quite little, despite still being "on".
-
This was an interesting watch. Informative, but also flawed, on multiple points.
Before going point by point, I'd like to say that I am a Linux user and, while I don't go out of my way to bash Electron apps, I'm not a fan of them either, and I prefer not to use them whenever I can. At this moment I think I only use one: Postman, which annoys the hell out of me when it hangs for like 30 seconds, and which I will change soon, once I have more time to check the alternatives and settle on one. Still, I think Postman is trash because it's made that way, not because of Electron. And it annoys me for other reasons too. But regardless, it very much looks and feels like a much heavier application than it should be. I do have an 8-year-old laptop, which was powerful in 2017, not so much now, but it's not trash either.
One thing I must point out - apparently I use a lot of software that has an Electron app. But I don't use the app; I use what is sane to me - a simple tab in my browser. I was doing this before Electron was a thing, so I kind of happened to be like that before deciding to avoid Electron where I can.
Why not use Electron? Some time ago MS Teams had a bug that allowed remote code execution within it, because of bad sandboxing. That's when I knew I'd never install it, as running it in the browser I can be much more sure it doesn't have access to my system. Especially after finding out that Discord for YEARS used an extremely outdated version of Electron. Imagine how secure that was! And the performance penalty is there; I don't want it. By having those only in the browser (seriously, why are people not doing this?) I have almost 0 disk space used, much less security stress, much less RAM used. So I feel quite immune to the "either Electron or nothing" threat, though I do understand it. For me it didn't enable me in any way to use Linux. As an IDE I use JetBrains.
On the performance side... that example with SwiftUI and the conclusion that Electron might be faster than native... I am soooo NOT buying that. I call 100% skill issue, or just something related to SwiftUI at that point in time. Even the first update swapped one component, from SwiftUI Text to SwiftUI TextEditor. And apparently it did the rendering using the CPU ? Disable hardware-accelerated GPU rendering in a browser and watch the same pain there too. You can't really say that native in general might be slower just from one example like that. Chrome, the only way to efficiently render text (without being John Carmack), really ? Talk about blowing things way out of proportion.
In the end, I don't like Electron for the same reason I don't like Flatpaks and snaps. Having everything bundled. In case of flatpaks and snaps, I'd rather have a statically-linked executable. In case of Electron I'd rather simply use my browser. Having one embedded will always be a problem to me. Maybe with servo it will get to be less of an issue. Still, I'm not thrilled on having JS either, though for simple applications it is fine. I would still like something that can be more efficient. Not just in CPU usage, but also in RAM usage. A waste is still a waste.
Even with the above, I don't hate Electron for existing. I do kind of hate it becoming the norm. It's like we AGAIN forget all the years of lessons we learned before, throwing away most of what we've learnt and built, for something that's easier in the short term.
Maybe PWAs will become a thing in this lifetime, and we can stop shipping an almost full browser with every app.
-
Interesting to see what others said. I have 2 things to add:
1) in regards to a language being high-level: if it has the ability to abstract implementation (in any way), then it can be considered a high level language. It's not the only defining factor of a high-level language, but it's a very important one.
Having abstractions allows you to make the code easier to write, by being more human-readable and more compact, which are clearly (some of) the things that high level languages are wanted for (the reason they were created in the first place). C does have that, in functions. So, for me, it's clearly a high level language. In the grand scheme of languages, yeah, it probably is the lowest high-level language. But because you can abstract things away, it's still a high level language. Hell, if you want it, you could program a runtime for it and make it behave like a higher level language (of course, by limiting yourself to only use what was implemented for the runtime).
2) SQL is not a programming language. The high vs low only applies to programming languages. Ok, scripting languages too, though, by convention they're always high level (no need to create a scripting language to be low level). SQL, HTML, YAML etc do not qualify.
-
I see this as simply not-great examples of what can be done. Like, on multiple occasions, the idea was to show that you can use CSS :has(), but it got filled with lots of grid stuff and other things, which are not mandatory; things can be done differently. It is, I guess, an insight that in some (many?) situations this can lead to gnarly CSS. I would've done these examples by starting with the base HTML & CSS already done, and only adding the interactive parts. The toggle and testimonials, for example, could've been done with easier-to-see/understand HTML and CSS, actually focusing on just the :has() and/or :checked parts (I think :active could've worked too, if you don't want to use form elements).
-
A minor correction - what Louis described as "valuing a broken Macbook" is actually "valuing FIXING broken Macbooks". I get the dramatisation, and in the end it doesn't change the discourse THAT MUCH, but there is a non-trivial distinction between a simple object that is trivial to replace (and thus insane to be obsessed about in this comparison) and an occupation, hobby, habit, maybe even an ideal (hard to describe exactly).
And I do think it matters to underline the distinction. I don't think (I hope) that many people would value a commodity object over something like their own health. (Well, some might, either from a lot of emotional attachment or if the object is very expensive, but then the object is no longer a commodity IMO). However, there are a lot of people who do identify themselves with their work, what they do, what they provide, and who might prioritise that over their personal health.
The main idea to take from this is that if you really care about that work, about what you do, what you (can) provide, for the longer term, then you have to take care of your health, otherwise you might "fall" too early and not be able to do what you love most.
-
@BrodieRobertson Of course the companies are interested in not being regulated. But it's not for them to decide that. That's why the regulations exist in the first place, because otherwise they wouldn't be respected.
And the idea that's proposed here is exactly the difference between "regulated" and "regulated into the ground". At least perception-wise.
I'm sure that any company that does a good job can do so and survive without (very) targeted ads.
Still, the proposed solution (which I think is also the only actual solution proposed) allows them to have decently targeted ads. Making ad spending for them somewhat effective. Otherwise it will either be something illegal (bypassing privacy rights or regulations, if they appear) or it will be random targeting, making ads much more ineffective. Which might make them panic.
It's normal to first create the means to behave decently, and then to enforce behaving decently, because you have the means to. Without something like this proposed solution, "behave decently" basically means not having targeted ads at all. Which is basically not an option for these companies. Maybe that would not be a bad thing, but realistically, it will not happen anytime soon; nobody will come down that hard on them.
-
I'm kind of sad that games targeting Proton make the most sense right now, and that's definitely what the majority of developers have to do. Hopefully, with SteamOS gaining popularity in the gaming community, it might make developing native Linux games more worthwhile (both in the market-share department and in the ease/usefulness of developing), you know, to be able to unlock all the resources that the hardware has.
And if SteamOS does get big, that means that hardware manufacturers will finally (have to) think about drivers/support for Linux, which will make switching to Linux easier, with less/no concerns about hardware compatibility, which will further increase market share... ok, I'll stop babbling, but I do feel like this will be a big positive loop and I can't wait to see it unfolding.
-
@nlight8769 Oh, wow, things got very complicated too fast.
The problem is actually much simpler. It's the word "performance". For some people it's not immediately obvious that it's about "something (a task) done in an amount of time". Well, where time is involved. That's the thing, it doesn't explicitly say the metric used. And if the metric is not explicitly said or obvious from the context, people make assumptions and that's how we got into this topic :D
But performance is very analogous to speed.
In our case the compile time is similar to lap time.
And speed is measured in km/h (some use miles per hour, but we've grown out of the bronze age). In our case it would be the not-so-intuitive compiles per hour. One could say that instructions run per second could also be a metric, but it has 2 problems: a) nobody knows how many instructions are run/needed for a particular compile, though I guess it can be found out, and b) not all instructions are equal, and they NEED to be equal in order to give predictable estimations. For speed, all the seconds are equal and all the meters are also equal.
Here's another tip - degradation implies that it's worse and that the something degraded is *reduced*. If someone tells you something degraded by 80%, you KNOW that it's now at 20% of what it was (and not 180%). And something degraded by 100% would mean it's reduced by 100%, aka there's nothing left.
Lastly, correlating to the above - When the performance of anything degraded "fully", so to speak, we say it's 0 performance. Not that it takes infinity time.
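The speed analogy above can be put into numbers; a tiny sketch (the 90-second compile time is a made-up figure):

```python
# Hypothetical workload: a compile that takes 90 seconds (the "lap time").
lap_seconds = 90.0

# "Speed" is tasks per unit of time, the reciprocal of time per task:
compiles_per_hour = 3600.0 / lap_seconds
print(compiles_per_hour)  # 40.0

# "Degraded by 80%" means only 20% of the original throughput remains:
degraded = compiles_per_hour * 0.20
print(degraded)  # 8.0 compiles per hour, not 180% of anything
```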
-
I really hate how the tabs look now. I was on a very classic title bar + menu bar + tab bar, with lots of tabs. Before, they were gray, with nice borders, and clearly delimited. Now they're bigger, whiter, much uglier, shorter horizontally, and things like the playing-audio tab indicator are now a stupid text which doesn't even fit, while before it was an instantly recognizable icon.
So in the end the whole bar is much uglier, I have less information and it's harder to find stuff in it. Like actually seeing which tab is playing audio when another tab is opened and the whole tab shifts and I have to go back to the tab with the video. I have 0 (read: ZERO) benefits from this change and only headaches. Not huge ones, in the end I can live with it, but the fact that it's all for absolutely nothing makes it 10 times worse.
Instead of useless changes, how about separate stop and reload buttons in the address bar that we had before Firefox ?? How about allowing to customize the address bar so things like "bookmark page" and "Reader view" can be taken out and maybe back, forward, stop, reload and home can be added in ? I know, I might be able to change some with themes. I didn't bother.
-
Yup. I know that Sweden is testing that, adding an overhead cable on some highway, on the first lane, so they can have electric-driven trucks. Heard of that about 2 years ago, I think, but haven't heard more yet. To be frank, I haven't searched for it either, so for all I know, it might be already in place and used by actual trucks.
Also, in these videos, I think they go a bit too hard on the idea. Yeah, it's overhyped, because Elon is Elon. But the Semis themselves, I think, can make actual sense in a few scenarios. Like hauling toilet paper or similarly high-volume, low-weight cargo. And for short to medium trips (up to 800 km / 500 miles). One metric that Musk showed was that 80% of hauls are below 250 miles. I haven't seen anyone dispute that. So, there are certainly places where a cheaper-fuel, more eco-friendly, less noisy, less stinky and less locally-polluting truck does make sense, including economically so.
As for what Musk said, that it's better than diesel in all regards... yeah, no, not even close. I think it might get there in 20 years, where diesel will be required only in quite niche situations (hauling in winter somewhere very high, for example, or actual 2000 mile trips in the middle of nowhere). I do expect that in 20 years the batteries, at worst, will be a bit better, a bit cheaper, and there will be enough charging stations or, for these trucks, maybe even battery swap stations, so they don't have to be fast charged, which does degrade the battery a bit faster. Unless the actual battery, as compared to the usable battery, is much bigger in capacity, which I kind of doubt. Anyway, I expect, at minimum, in 20 years, to have a 500 mile fully loaded truck with, say, only 5 tons of battery and a cargo hold 3-4 tons less than a diesel truck.
-
@tandemcharge5114 Thunderf00t is not perfect, but his track record is still much better than Elon's. For me, his worst video is the one he titled as Tesla's batteries sucking, in which he then sets out to prove that Li-Ion batteries have inherent limitations and are not the one-size-fits-all solution that some think they are. The video itself wasn't that bad, but the title was 100% bad and clickbait. Other than that, he's quite on point, even though his style might not be so pleasing for everybody.
Also, regarding your initial comments. The Hyperloop is one extremely good example of the MASSIVE bullshit some people like Elon Musk and other billionaires knee-deep in money laundering like to spout. The hyperloop, or maglevs-in-near-vacuum-tubes, is extremely expensive and impractical. If you cannot fathom that, it's about the same as every single person on Earth, all 8 billion of us, having a personal airplane and using it daily. That kind of ludicrous. When actual good solutions exist, but are just not implemented.
Maglevs without vacuum tubes are already extremely expensive and only really economically viable in extremely high density population areas. Vacuum tubes increase the cost and complexity by an order of magnitude. Why is it so freaking hard to understand this, when it's so easy and in plain sight ????
-
@tandemcharge5114 Hey, thanks for the reply. Sorry if I look like a thread necromancer, but for me 3 months is not that much, and the topic at hand is still relevant.
On the "everybody gets to have an airplane" - I wasn't explicit enough. It's something like: it would make more sense, economically, for every single person to have a plane, than having, say, 10 000 km of hyperloop (which, of course, wouldn't benefit all the people in the world). I guess it's a silly example, but I'm just trying to illustrate just how enormous the costs are and just how impractical it is. Also, if it would be practical in, say, 100 years, it should be marketed as such. It totally isn't. The claims are off-the-charts bullshit.
On the Tf00t vs SpaceX I'm not very sure. I saw one video of Tf00t which critiqued SpaceX and their claims. I had some doubts about it. Then he made a response, and it made sense to me. But I'm still not so convinced (like I am on Hyperloop), so I won't comment there. Cheers!
-
@L0gicalPsych0 Yup, you are correct. Sadly, convenience often ... who am I kidding, ALWAYS trumps correctness, when we're talking about the mainstream public. That's why I'm saying it's a losing battle.
To be fair, when the idea is to get across that you're using an OS which is neither Windows, nor MacOS, nor Android, but uses the Linux kernel... simply calling it Linux ... it DOES the job. The contexts where you would say "I'm running Debian" and the other person is like "Oh, Linux. Nice" and you then go "WRONG! It's GNU/Hurd" are really, really niche contexts.
-
I do agree that what DoJ proposes is an overreach. And I think it's why it will not pass/continue.
However, I totally disagree that it will kill the web or anything remotely close to that. There's plenty of innovation done outside Google. If Chrome disappears today, sites and products won't break, as Chromium can serve them with absolutely no issues. If Chrome disappears today, we won't be stuck in the current version of the web, it will continue to be improved. Maybe slower, but we'll be absolutely fine.
I hate this type of argument so much. That if we don't sell our soul to the devil (figuratively) then we won't be able to do anything. Ok, maybe 10 or 15 years ago it might've been closer to the truth, but even then I don't agree that we wouldn't have had the benefits of the modern day browsers without Chrome. It would've taken longer, most likely. But nobody would've died because of that. And it's even less of a concern now, with thousands of non-Google engineers contributing to web standards and browser code.
I would also not discount Firefox so quickly. It's true that most of Mozilla's revenue comes from Google, but a lot of the revenue is wasted on useless projects (usually DEI stuff) and on the woke staff itself that managed to get the leadership. Firefox developers don't get a lot of cash. Not getting the money from Google might actually be better for Mozilla and Firefox, as it might be freed from the woke management and steered back into competence and relevancy by engineers and ACTUAL free speech activists. And if they make a dedicated Firefox-devs-only donation category, I'm sure that there will be people chiming in (myself included), enough to at least keep the current funding the developers get.
-
It's easy to see how he got there - a 90ms mean response time means you can do about 11 requests in a second. 10 if you round down. So, in order to have 0 extra latency, he just said that you can do 10 requests per second per core.
It's true that in reality, that 90ms response time is more like 40ms of CPU time and 50ms of waiting for stuff to be loaded, like a database query response, or the SSD returning file data, and so on. So a core could handle more than one request at a time, but just how many depends on the proportion of CPU time vs iowait time. And after a point, it can do multiple at the same time, but there might be a bit of latency on all of them.
Still, it's just easier to compute the way he did it, and also has some headroom, just in case.
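The arithmetic above, sketched out (the 90 ms total and the 40/50 ms CPU/iowait split are the figures from the comment, not measurements):

```python
# ~90 ms mean response time per request.
mean_response_ms = 90.0

# If the core were busy the entire 90 ms, it could serve about:
requests_per_sec = 1000.0 / mean_response_ms  # ~11.1, rounded down to ~10

# If only ~40 ms of that is CPU time and ~50 ms is iowait, the core can
# overlap requests while waiting on I/O, so the CPU-bound ceiling is higher:
cpu_ms = 40.0
overlapped_ceiling = 1000.0 / cpu_ms  # 25 requests/sec per core
print(requests_per_sec, overlapped_ceiling)
```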
-
The way I see it, the main advantages of this "do one thing and do it well" are easy composition and less/no code duplication. That is, it's not important to follow it to the letter; it's a nice, short way of conveying the goal of achieving several benefits, which I'll try to list below:
That program or library has to be small enough so it can be used in a chain of commands or inside a bigger app, with a minimal footprint.
If everything follows this philosophy, then it's also easy to replace things without having dependency hell, coupling issues and performance struggles. This "small enough" constraint pushes the developer to stay as narrowly focused as possible on the thing the program is doing, and when it does need to do more things, to first see if it can use another program or library.
This also allows projects to have few developers, since they can focus only on their specific program and domain. To give an example (I don't know if the reality is anywhere close to what I'll present, but it seems like a nice example), the Lynx browser. Its devs can simply use curl internally to fetch the resources and only deal with building the DOM and rendering it. Internally, curl might also use an SSL library and a TCP library to handle the lower level networking and only focus on HTTP & related standards. In this example, if HTTP 3 gets released (woohoo) it might get implemented into Lynx with minimal effort, by just updating the curl library (well, usually minimal, there might be breaking changes or new stuff to take care of). Do Lynx developers have to care about HTTP 3? Nope. Do they have to care about available encryption hashes used for HTTPS connections ? Nope. Do they have to care about opening sockets and their timeouts and buffer sizes ? Nope. They can focus on their specific thing. And that means they can also know very little of the underlying features, meaning less experienced developers can start to contribute; the project has a lower barrier to entry.
Having a smaller project/library also allows having manageable configurations. I mean, it can be made to be very configurable (including being modular) without getting overwhelming, because it's in the context of a rather small program/library.
Another interesting example is ffmpeg. As a program and cli command, it's actually pretty big. But it's still made so it's easy to be used with other tools and programs.
Of course, in the real world, the separation cannot be made perfectly. For some developer the big thing A would be split into b, c and d. Another developer would see A split into b, c, d, e and f, and each also split into 2-3 smaller programs, with one of them being used in 2 places (say, program t is used by both b and e). As you can see, technically the second split is better from the "do one thing and do it well" perspective, but it's also much more complex. This cannot go ad infinitum. Theoretically, it would be nice if we had only system functions and calls and we'd only run a composition of them. But in real life it's never going to happen. Also in real life, in the example above, a third developer might see the split of program A into B, C, D and E, with B being, say, 80% of what b does in the vision of the first developer + 50% of what c does in the vision of the first developer. And so on. And there would be arguments for all the approaches that make sense.
Lastly, doing one thing and well allows for easier optimisation. Especially in the context of a program or library to be used in bigger projects or commands, having it well optimized is important. And because the program/library is rather small and focused on one thing, that is, it's into a single domain usually, it's easier for the developer to go deep into optimisation. Of course in the extreme cases, having one big monolithic program can allow for better overall optimisation, but you'd also have to code everything yourself.
Regarding the Linux kernel, I'd say that it achieves the goals of "do one thing and do it well" perfectly because it's modular (and each module does one thing) and all of them play nice with each other and with the userspace.
The problem that I see with systemd is that its binaries, while neatly split, are basically talking their own language. They cannot be augmented or replaced by the normal tools we already have (well, sometimes they can be augmented). Somebody would have to create a program from scratch just to replace, say, journald. And this replacement program would be just for that. It's this "we're special and we need special tools" thing that is annoying. Ten years from now, if one of the binaries is found to have a massive flaw, well... good luck replacing it. Oh, it's critical and you cannot run systemd without it, so you have to replace ALL the system management tools ? Oh well, warnings were given, those who cared listened...
-
@temp50 Well, not everything will be available from the get-go. If I'm not mistaken, there are still some peripherals that are basically unusable on Linux because they don't have drivers and nobody has the time or resources to reverse engineer one.
So initially it will be just rather basic stuff - CPU, GPU, mouse, keyboard, wired and wireless network, I guess bluetooth too. The rest... well, if what I wrote above will happen, the rest will come too, later, as the need for them will increase. But that's like more than 5 years into the future, I'm afraid.
-
Brodie, I agree that this being opt-out is bad. However, with some other things I disagree.
Especially the points that the CTO discussed. I fully disagree with your take at 12:05 "This system does not do anything about stopping those economic incentives". And at 14:44 "The way you get this fixed is by talking with the regulators clamping on the advertisers [...] and THEN you can implement the system that gives them very minimal data."
With the above, you are suggesting that, for an unspecified amount of time, businesses spend money on completely random ads instead of targeted ones - basically throwing money in the air and lighting a flamethrower on it - and then in some mythical future they can get some data so they can be back on targeted advertising. And that somehow they won't be strongly incentivised to find and use ways around these regulations (which often get more and more terrible). Also you're saying that providing the service beforehand, so businesses can switch to it within a specified window of time until the regulators come raining down on them, is somehow bad or useless. That somehow they'll have the same exact incentive to spend money to find or make ways around this. WTactualF. Please try having a business first; maybe it will be more apparent that what the CTO did and said with this approach makes the most sense.
To put it more simply, you're asking people that don't have a garage to park their car in to first sell their cars, be carless for some time, and then buy them back when the authorities have built some parking lots. Nobody will do that. And there will be a massive backlash. Learn how things work in a society. Learn to think how it is for the other side.
And they ARE doing something about the dystopian state of the web today. So far I haven't heard any other actual solution, something that is actually feasible to be useful, to work, and also to be implemented.
Another thing, at 10:16: "If you're unable to explain to the user in a short form why a system like this is beneficial to them, why they would want a system like this running on their computer, you shouldn't be doing it". I agree that they should explain it to the user. But I disagree on the "shouldn't be doing it" part. Many things are somewhat complicated, and many people wouldn't understand because they're not that interested. Frankly, whether many things count as "explained" is simply very subjective. It can certainly be summarized quite shortly, but some would argue it's not explained enough. And a more proper explanation would then be too long for some people. From "hard to explain" to "don't implement it" is a LOOONG road, and "hard to explain" shouldn't solely be the reason for not implementing something. People receive drugs and medication, or even things like surgeries, with very little explanation too. You can argue that maybe it shouldn't be like that, but compared to our case, this is orders of magnitude less damaging in any sense of the word, so in the grand scheme of things it can be explained very shortly, and whoever truly wants to understand it can find that in the code or somewhere on the web in a blog post or a video or something.
-
Not quite, since PHP 8 or 8.1. You can name the parameters when you call a function, so you can pass them in any order.
Also, frankly, any editor worth its salt should have a way to instantly tell you the parameters of a core function. Can't believe this is somehow still something to complain about to this day when it's at most 0.1% of the hassle you'd have working in PHP.
Yeah, it's not great, but it's really not such a giant thing either. Annoying probably the first or second time you encounter it, then you easily get used to it and go past it. If you can't get used to something so benign, then you either have extreme OCD or should simply quit programming, as you'll have much worse problems down the line, no matter what language or framework you choose.
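For illustration, Python's keyword arguments behave the same way as PHP 8's named arguments; the function below is a hypothetical stand-in mirroring the signature of PHP's str_replace($search, $replace, $subject):

```python
# Hypothetical mirror of PHP's str_replace($search, $replace, $subject).
def str_replace(search, replace, subject):
    return subject.replace(search, replace)

# Positional: you have to remember the parameter order...
print(str_replace("cat", "dog", "the cat sat"))  # the dog sat

# ...named: any order works, and the call site documents itself:
print(str_replace(subject="the cat sat", search="cat", replace="dog"))  # the dog sat
```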
-
@BitsOfTruth I think there's a saving grace here - FOSS projects are done basically exclusively online. With text. In places where you can have a pseudonym.
In this context, having a custom pronoun is (to me, at least) the same as having a pseudonym. My name is not really Winnetou17, but people call me that if they want to talk with me. In a way, you can say they are forced to do so. But nobody forced them to talk (well, write) to me.
Of course, there's a grey area that DT didn't talk about: what do you do when people take troll names. If I call myself "God Almighty Himself" in the hopes that I'll see someone commenting in an issue "God Almighty Himself said that we should use the integer function", just for a cheap giggle... well, it's blasphemy for some people, and a known one. In these cases I'd say that the owner of such a name/title/pronoun should provide an ID or proof that that's their actual name/title/pronoun and, if not, change it to one where such an action would not be necessary.
-
@Gandalf721 a) that's what they say - that they don't focus on other phones because they lack security features. And b) many others said that if I want it on my phone, I can take the source code and make it work.
From the two above it's obvious: the devs of Graphene OS decided to not support other devices, not even in a lesser way - say, just a minimal install that works - though there we're getting into the realm of whether it's even worth the time.
Also, in their FAQ they mention that
"Broader device support can only happen after the community (companies, organizations and individuals) steps up to make substantial, ongoing contributions to making the existing device support sustainable." So it's not just "other cell phone companies" (a thing you should've known already, when you replied to me).
The only valid reason I see so far is that supporting more phones would be too difficult/impossible to do at the moment. But it should have been mentioned in the first place, not the "because it lacks security features".
Also, stop assuming somebody is a complete idiot just because they don't want to buy a specific line of phones.
-
The % that is preallocated - well, I don't think that matters in this discussion.
Because, if you need 5% of 1000 GB, that's 50 GB. If you have 2 partitions, say, 100 GB for root and 900 GB for the rest, well, then you'll have to sacrifice 5 + 45 GB = the same 50 GB for the filesystem's overhead.
The thing that might matter is that you also need to leave some free space available. If it's a percentage, then it doesn't matter. But if the partitions get small enough, then the actual absolute value might matter. Like, you need, say, 10 GB free on any HDD-based partition for doing defragmentation. If you have one partition, you can safely make sure you always have 10-15 GB free. If you go to 3 partitions... well, now you'll need 30+ GB free.
Also, like other people noted, if using multiple partitions, the owner should know pretty well in advance how much space it needs for each of them, so it doesn't become a headache that one of them got full and you need more space there.
Of course, it can also be a blessing: maybe something went haywire and some app is logging like crazy. If that folder is on its own separate partition, it won't fill your root partition, which would bring a lot more problems.
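The space arithmetic above, as a quick sketch (the 5% reserve and 10 GB headroom are the example figures from the comment, not real filesystem defaults):

```python
# A percentage-based filesystem reserve costs the same total space
# regardless of how the disk is partitioned:
disk_gb = 1000
reserve = 0.05
one_partition = disk_gb * reserve      # 50 GB reserved
split = 100 * reserve + 900 * reserve  # 5 + 45 = the same 50 GB

# But a fixed per-partition headroom (e.g. free space kept for
# defragmentation) multiplies with the number of partitions:
headroom_gb = 10
print(one_partition, split, 1 * headroom_gb, 3 * headroom_gb)
```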
-
@joeyvdm1 Oh wow, you really like this stuff :))
Well, maybe I heard/understood/remembered incorrectly.
From what I remember (I also watch AdornedTV and CoreTeks?CoreTecks?) there was something that because you only have one CCX, the cache is now available for every core directly, without the need for accessing the other CCX or duplicating the data in the local cache.
The thing that might make our communication a bit harder here is that IPC is too broad a term, because it can be influenced by many things. Having a lower latency for... anything really, I see that as an IPC increase. I don't know why companies wouldn't market it as such. Instead of saying "We got 5% lower latency, 5% better cache efficiency and 5% IPC", they will simply say "We got an average of 16% IPC increase". Because that's a bigger number, and very easy to understand: on average, everything will be 16% faster.
It's nerds like us who want to know what that 16% is made of, to better assess which workloads will be impacted more.
So, I still stand by my claim that IPC increase will encompass all the improvements, since that's the idea of IPC metric. Not "pure workload, no memory fetching or saving, no shared memory, no multicore communication" metric.
And, even if I'm wrong, I want to not be too hyped up. Seeing in some places that Zen 3 will have "only 10-15%" IPC increase, as if that's not great, only makes me sad.
Even with 10% IPC increase and absolutely nothing else improved, my quick mind maths tell me that this will be enough for Ryzen CPUs to match Intel in gaming (on average, on some games it will be better), while obliterating them in all other aspects. That's still something that I can't wait to see.
Cheers!
-
This was quite interesting. I feel like this can be great for single-purpose computers where there are no "other applications". And if you think that you need things like apt, ls etc., those can simply be made available by rebooting into a traditional kernel. I kind of want to build something like this with Gentoo. Have multiple installs for single purposes, like some game. And the init will simply launch the game: no bash, no login, no desktop environment, not even a window manager. Updating the game and other things (like opening up a browser) would have to be done by rebooting. Might seem like overkill, but if you know you don't need the other stuff, and you need the performance ... ain't that neat ? Also, I'm thinking it might be a good way to set up the computer so the kids can play game A and B, but have it locked from doing other stuff.
However the dynamic downloading confused the hell out of me. What does that have to do with exokernels ? Can't that dynamic downloading happen very happily on traditional OS/kernels too ?
1
-
Sooo, if it loses 1% the first year and has "minimal degradation" (forgot the exact words) after... I'm not THAT optimistic, so let's say it loses 1% of its current value each year.
So after 25 years, it will be at 0.99^25 = 0.778, aka 77.8% of initial output. From a base of 25% efficiency, applying that degradation, we get to a net efficiency of 25 * 0.778 = 19.45%.
The "normal" high-end solar panel, if it has 22% efficiency and the total degradation is 8%, will end up at 22 * 0.92 = 20.24% efficiency.
In other words, what the perovskite wins in the first years, the normal solar panel makes up in the later years. Ok, maybe not to the full extent, but to a good one. If we add that the perovskite is more expensive (though I don't know, maybe it won't be) and/or requires more maintenance, then... it feels like kind of a moot point. I hope I'm proven wrong, though.
But it feels like it will be at least 5 years until perovskites are out, proven, mature and at a good price. Those who can/want to buy solar panels now shouldn't wait for perovskites.
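A quick sketch of the compounding math above (the 1%/year loss and the 8% total degradation are my assumed inputs from the comment, not manufacturer specs):

```python
# Perovskite: 25% base efficiency, losing 1% of its current value each year.
perovskite = 25.0 * 0.99 ** 25    # net efficiency after 25 years
# Conventional: 22% base efficiency with ~8% total degradation over its life.
conventional = 22.0 * 0.92
print(round(perovskite, 2), round(conventional, 2))  # 19.45 20.24
```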
1
-
In case it helps, I prefer wired over wireless. In general. Nothing to do with that keyboard.
A wired peripheral I know will always work, and it has the least latency. And no security issues, which Bluetooth is (was?) full of. Additionally, I never have to worry about a battery. Extra extra, even though it barely makes any difference, I'm happy that because there's no battery, it's friendlier to the environment. Current batteries are quite polluting to produce and are guaranteed to have a rather limited lifetime. As I'm writing this, I'm still using a mouse that is close to 12 years old. Technically Li-Ion batteries can be used for more than 12 years, but given how much I paid for the mouse and how heavily I've used it (both work + personal), I'm sure I would've needed a battery change by now.
Anyway, long story short, the wired part can be a plus for some people.
1
-
I'm on Gentoo. It's a distro that I can't fully recommend to somebody without knowing that person. In general you'd know best if Gentoo is for you or not.
That being said, I've been on it for almost a year and a half (since Dec 2023-Jan 2024) and have had no significant problems of any sort. No system instability at all.
Gentoo is very good for learning and very good for control and customization. Because of the USE flags, you can customize what an individual program/app/package has or doesn't have, allowing you to enable experimental or esoteric features, or remove things you don't want or need. It also allows you to have the binaries optimized for your specific CPU, which can help performance. If you happen to want to patch some programs, you can streamline that with Gentoo, so those programs are updated with the rest of the system while still having your patches applied.
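To illustrate (a rough sketch only; exact USE flag names vary per package, so check them with `equery uses <package>` before copying anything): per-CPU optimization goes in make.conf, and per-package features in package.use:

```shell
# /etc/portage/make.conf -- build everything for the local CPU
COMMON_FLAGS="-march=native -O2 -pipe"
MAKEOPTS="-j4"    # parallel compile jobs, roughly one per core/thread

# /etc/portage/package.use/media -- per-package USE flags (illustrative)
media-video/ffmpeg x265 -vaapi    # enable x265 support, disable VAAPI
```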
One thing I have to add... the compile times are really exaggerated, IMO. The laptop I'm using is almost 9 years old, from 2016. It has a 4-core Intel i7-6700HQ CPU. While it was high-end in 2016, it's now equivalent to a dual-core CPU. It does help that I have 64 GB of RAM. Still, knowing that I don't have a fast system, the only program that's annoying to upgrade/compile is Chromium. The last compiles took about 14 hours, when not doing anything else (I just left it running while I went out). Everything else, no exceptions, takes up to 2 hours. Firefox is between 60 and 80 minutes. I'd say that, on average, I have about 1 (one) round of updates per week that takes more than 30 minutes (for everything that's new, not just a single program), and that's while I'm doing something else, like watching YT and commenting (which is pretty lightweight, true).
I'm sure that if I had more Chromium-based browsers, each would take those 14 hours to compile. It's true that I've also been too lazy to dig deeper into ways to speed it up. And I don't have KDE or GNOME, which I know are quite big, so those might add a bit of compile time too.
Still, if you have something low end or simply don't want to deal with the bigger compiles, there's binary packages. Not for all, but the browsers and bigger packages in general have a precompiled binary from Gentoo.
1
-
@raughboy188 This "I don't care until it's relevant for me" is a pretty bad take. Thing is, one thing leads to another, and by the time it gets to be relevant to you, it might be too late to correct/repair/deal with it.
To give a very simplified example: somebody warns you that there are termites in the area and you should be wary. But you don't do anything until it's clear that the termites are actually causing problems. So one year later you see termites exiting from one of your walls, and you decide to act then. You inspect and see that they have indeed chewed through the wall, and you promptly kill all of them. But the wall is already compromised beyond repair; it will crumble any day now, you have to replace it, and you might not have the time and/or resources to do it.
It's the same thing here: by the time something directly affects you, the big contributors to GNOME might've been outed already, other contributors soured, and so on. Even if you ditch the Microsoft guy and abolish the CoC committee, the damage has been done, and it's really difficult to recover: there's no guarantee that the former contributors will or can return, finding new ones takes a lot of time, and so does recovering trust in general (though there can be exceptions, if you have trustworthy people in the lead).
1
-
Hey DT! Isn't ChromeOS kind of like having videos on YouTube ? I mean, ChromeOS by itself you can say it's AIDS, but, compared to the cancer that is Windows, you are in Linux ecosystem, so you can get used to it, so LATER you can have an easy transition to a FOSS Linux distro. There's also ChromiumOS ?
Also, reviewing I wouldn't say is the same as promoting. You could review it, compare it to, say, Arch, and note its shortcomings. Or simply state that it's doing its job ok (theoretically, I have no idea how it runs) but is proprietary and that's bad because X, Y and Z. Someone totally outside the FOSS mindset might want to check whether ChromeOS is ok to be used instead of Windows, land on your video, and gain more insight overall. Checking the competition from time to time and seeing the exact differences is healthy in an honest debate, where you know exactly what Linux does better, and you can also see what Windows does better, learn from it, and comment that maybe Linux should do that as well.
Also, also, regarding GNU/Hurd, you can still review it, to see and show where it stands in terms of support and features. And of course, mention that it's not ready for bare-metal install and daily usage. People have done the same with HaikuOS and ReactOS and nobody complained that it ruined their computer or something. Wait, haven't you reviewed ReactOS at some point ? Why the change of heart ?
1
-
I have to say, every now and then I lose more respect about the USA.
Take this case: given how there were some absolutely insane lawsuits, people going to court over really stupid things and having in general a "court-happy" attitude, seeing now that the big corporations get more and more of a pass on absolutely evil, obviously wrong, criminal stuff, with people doing nothing about it... that's just sad, man.
A person can sue a company because it said the noodles are done in 3 minutes when they're actually not: 3 minutes is the time in the microwave oven, and you still need about a minute to unwrap them and put them in a bowl. So someone can do that, yet when there's an actually big problem with big consequences (farmers having DAYS of their very expensive tools offline because they have to wait for the manufacturer to come over with a laptop and fix the problem in 10 minutes, of course with no compensation of any kind for the days the tractor or whatever was offline), nothing happens. And when the manufacturer clearly violated a license... like, what more do you need?
What a clown world... sigh
1
-
6:01 "It's impossible to leave yourself in an unbootable state" - my @s$! Have a "lucky" grub update and then tell me how well that quote aged.
I kinda get the reproducibility, but I don't see why it's SUCH a big thing. It's... nice. Surely not everybody needs it. It does sound good for a dev environment though, I agree.
The "every program has its own libraries" approach, aka kind of (or actually, I don't know) statically linked, is... not that good? I mean, it's very nice to have the ability when you need two or more versions of a library for different programs. But it's also good to have a library only once if you only need it once, instead of potentially 100 times. And yeah, in some instances I do prefer it over virtualization (like when it's not about security but about making sure everything has all the dependencies properly set, with the correct versions). But I don't want this to be the default for all my programs. Feels like a waste.
That graph with the number of packages and fresh packages ... it smells funny to me (to put it mildly). Any source on that ? I find it hard to believe that nix has so much over all the rest, it's either through a gimmick (like counting each version of a package as individual packages) or some other kind of BS. At least that's what I think. EDIT: ok, I checked a bit, apparently it's just super easy to contribute to it, which at least partially, is a good reason to have high number of packages. So I guess it's legit high number. Neat!
So... yeah, I'll stick with Gentoo. It's still the most powerful and configurable/customizable of the bunch. Also cutting edge and stable, mind you. And you can choose whether you want a more stable version of an app or a more bleeding-edge one. And theoretically you can set up automatic updates, but it wouldn't be a good idea: sometimes an update needs a bit of manual care, like a configuration change. Gentoo tells you nicely about this, but you won't see it if it's running in the background. But starting the update and checking if there are extra things to do barely takes a minute anyway, after which you can leave it compiling in the background, so I don't see the appeal anyway.
1
-
What the hell, did no one observe that the barcode import/read/whatever it is was WAY off?
Actual food label:
serving size 1 can of 106g
serving energy: 230 kCal. About 217 kCal per 100 g
of which:
total fat: 15g (roughly 9 kCal per g of fat, so 15 * 9 = 135 kCal)
of which saturated is 3g
Carbohydrate: 6g (roughly 4 kCal per g of carbohydrate, so 6 * 4 = 24 kCal)
Protein: 18g (roughly 4 kCal per g of protein, so 18 * 4 = 72 kCal)
Summing these, we get 135 + 24 + 72 = 231 kCal, which is very close to the shown 230 kCal per serving
Extra stuff: 110 mg cholesterol; 330 mg sodium, 30 mg calcium, 10.8 mg iron and 60 mg potassium
The app:
Says there's two portions/servings, ok
110 kCal - where is this number from? It's neither the serving energy, nor the per-100g energy.
6g of fat - uhm... totally wrong ?
Saturated Fat - 1.5g - uhm, half of per serving ?
Carbohydrates: 2g - off
Protein: 11g - ???
I assume that since it says there's 2 portions, it simply lists half of what the entire serving (can) is.
But those numbers are still off!
7.5g of fat, not 6g
1.5g of saturated fat, this one is correct
3g of carbohydrates, not 2g
9g of protein, not 11g
Also, if we sum up the energy from the app's numbers, we get 7.5 * 9 (fat) + 2 * 4 (carbos) + 11 * 4 (protein) = 119.5, aka 120 kCal, not 110
Ok, the numbers are not THAT far off, but still too far off to trust them.
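The arithmetic above is easy to script. A quick cross-check using the standard Atwater factors (9 kcal per gram of fat, 4 per gram of carbohydrate, 4 per gram of protein):

```python
def kcal(fat_g, carb_g, protein_g):
    # Atwater factors: 9 kcal/g fat, 4 kcal/g carbs, 4 kcal/g protein.
    return 9 * fat_g + 4 * carb_g + 4 * protein_g

label_serving = kcal(15, 6, 18)   # 231, matching the label's ~230 kCal
app_half = kcal(7.5, 2, 11)       # 119.5, not the 110 kCal the app shows
print(label_serving, app_half)
```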
1
-
@LifeIsCrazyAsShit I had this exact problem in a big table in MySQL several years ago. For example, doing
SELECT whatever FROM table_name WHERE id > 433453 LIMIT 100 OFFSET 1000000;
This killed the performance, because the id column was a clustered index (PRIMARY KEY). It knew to jump to that id immediately, but it couldn't combine the offset with that jump; it had to walk through all of that one million rows/ids. So the higher the offset, the slower it ran (and that table had between 25 and 35 million rows). The job was to export basically the whole table, in order, from where it left off, in small batches. Once I changed the job to remember the last id, I changed the query to simply be
SELECT whatever FROM table_name WHERE id > 124342343 LIMIT 100;
And voila! The speed was back. No matter the id, whether I was at the beginning, middle or end of the table, the query was fast, in basically constant time.
So the lesson is - watch out when using OFFSET. Some indexes allow "jumping", but some don't, and in these cases, high offset values are a performance killer.
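A minimal sketch of the keyset (a.k.a. seek) pagination described above, using an in-memory SQLite table as a stand-in for the MySQL table (table and column names are made up for the example):

```python
import sqlite3

# Toy table standing in for the big MySQL table from the story.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t (id, payload) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 1001)])

def next_batch(last_id, size=100):
    # Keyset pagination: the primary-key index jumps straight past last_id,
    # so no rows are scanned and skipped, unlike a large OFFSET.
    return conn.execute(
        "SELECT id, payload FROM t WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

batch = next_batch(0)
while batch:
    # ... export the batch somewhere, then remember only the last id ...
    last_seen = batch[-1][0]
    batch = next_batch(last_seen)
```

The export job only ever stores `last_seen`, so each query is an index seek plus a short scan, regardless of how deep into the table it is.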
1
-
My several cents, which I haven't seen from other commenters:
- asks for unsigned ints but reads them with %hi instead of %hu. But this is a very minor thing, I agree
- I can't for the life of me understand the requirement to enter the types in order. You can handle all cases TRIVIALLY by simply having something like
switch (ptr->type) {
    case 124: // fall through (or whatever marker your compiler wants so it doesn't flag this)
    case 142: // fall through
    case 214: // fall through
    case 241: // fall through
    case 412: // fall through
    case 421:
        arrayCreation(0, 52, 62, 92, ptr);
        break;
}
- the example above was the most complex. When there's only two "types" chosen, there's only two cases (12 and 21 or 24 and 42 etc).
- while writing the above, I just observed that the "124" case is missing. tsk tsk tsk, -1 point!
- I think the main idea behind most tips presented here is missing: the whole code, I'd say, is quite ok, readable and maintainable. Because it's small. When you get into a project with hundreds of source files, thousands of lines of code and hundreds to thousands of functions, THEN having comments is important. When you have those "magic" numbers spread across 20 files, THEN you'll be doomed if you have to update them, so using a define is better, as it can be trivially updated
- especially since it's a small program, I don't like the idea with the enum and the extra parsing, at least not given as a blanket statement. That's basically overengineering. Feature creep. You're creating extra code, including RUNTIME code (which is why I dislike it), for a POTENTIAL FUTURE. You're making the code more complex and slower just because "it feels right". There should be disclaimers. Like, do you know with decent certainty that this will not be updated (like getting that 5th type)? If yes, then the current code is perfect. If you know it will be updated, or if you're not sure, THEN, maybe, think of a more extensible solution. Still, if it's something small, it might still be ok to keep it like that and only refactor when needed. Don't fear refactorings, since you cannot avoid them. Instead embrace them and get used to them.
1
-
Regarding the GPU (or anything else) bottleneck... it usually isn't so simple.
Let's say that for a particular game, to draw a frame, it needs x CPU work and y GPU work. But, the GPU work can only start after, say, 20% of CPU work is done.
So, the frame length then would be computed like 0.2 * x + max (0.8 * x, y).
This is, of course, oversimplified. But even so, it can be seen that if the CPU work is done very quickly (let's say x is 2 ms) compared to the GPU work (let's say y is 20 ms), then this will appear as a GPU bottleneck, because it will take 20.4 ms to finish the frame. And if you double the CPU speed, it will take... 20.2 ms to finish the frame.
That's why, even though in some scenarios the GPU bottleneck is clearly seen, there's still a smaaall difference between the CPUs.
I think we can say that in the scenarios where doubling the [whatever] performance nets a gain of less than 1% of total performance, we have a hard [whatever-else] bottleneck.
If the increase is still small but actually visible, like 10%, then we can say there's a soft cap.
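The toy model above, as code (x, y and the 20% overlap point are the made-up numbers from the comment):

```python
def frame_time(cpu_ms, gpu_ms, gpu_start=0.2):
    # GPU work can only begin after `gpu_start` fraction of the CPU work;
    # the rest of the CPU work overlaps with the GPU work.
    return gpu_start * cpu_ms + max((1 - gpu_start) * cpu_ms, gpu_ms)

print(frame_time(2, 20))   # 20.4 ms: reads as a pure GPU bottleneck
print(frame_time(1, 20))   # 20.2 ms: twice the CPU speed, barely faster
```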
1
-
I think what they wanted to avoid (but put 0 effort into explaining), and which is actually understandable, is people using the CT as a home battery a lot while still claiming the 8 years of warranty: you're driving 0 miles, but degrade the battery as if you drove 1,000,000 miles in those 8 years, then ask for a replacement after 7.5 years.
1,000,000 miles sounds like too much? Let's do some math to see if it's possible. Let's assume the battery has 100 kWh of energy capacity. It's good not to go from 0% to 100%, so let's say the usable capacity is 60 kWh.
11.5 kW means that, if full, it will discharge in, say, 6 hours, rounding up a bit. Adding the recharge from 20% back up, it needs roughly 10 hours for a cycle covering the usable battery capacity. That means it's totally doable to have a full charge-discharge cycle in a day. Also, that would be like driving about 300 miles.
What does 7 years mean in terms of days ? 7 * 365.25 = 2556.75 days. Let's round that to 2500
So, 2500 days means potentially 2500 battery cycles. I read somewhere that they officially said that their batteries can be used for 1500 cycles. So that's already over the limit.
Also, 300 miles * 2500 = 750,000 miles.
Ok, so I was off, but it's still 5 times over what they would've covered if you used it to drive, not to power a home or whatever. And it is over the battery's expected lifetime.
To put it in another way, 150,000 miles would mean it only needs 150,000/300 = 500 cycles, about 1/3 of the battery lifespan.
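The battery-cycle arithmetic above, spelled out (every number here is an assumption from the comment, not a Tesla spec):

```python
usable_kwh = 60          # assumed usable capacity out of a ~100 kWh pack
power_kw = 11.5          # assumed discharge power when feeding a home
miles_per_cycle = 300    # rough driving range of one full cycle

hours_per_discharge = usable_kwh / power_kw      # ~5.2 h, "say 6"
days_in_7_years = 7 * 365.25                     # 2556.75, rounded to ~2500
virtual_miles = miles_per_cycle * 2500           # 750,000 "virtual" miles
cycles_for_150k = 150_000 / miles_per_cycle      # 500 cycles, ~1/3 of 1500
print(hours_per_discharge, virtual_miles, cycles_for_150k)
```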
Still doesn't excuse what they wrote. Or Elon being a gigantic jerk that needs to be jailed, along with the many people that enabled him to go this far.
1
-
Just realized they're incompetent not only for failing to word it so it covers the potential case I explained above without also including totally BS opportunities to deny warranty, but they're so incompetent that they also failed (I think; I didn't read the whole thing, only what Louis showed us) to exclude another edge case that's not their manufacturing fault (though you can say it's their marketing's fault for making the claims, so... I'm not sad if they get punished).
So, here's the thing. The CT can tow about 5000 kg (11,000 of caveman-age freedom units). And from several reviews I've seen, when doing that, the range is only about 100 miles (yeah, I should use km, not miles, but I'm lazy to do the computation when all the given details are already in miles).
That means that you can have one full battery cycle for only 100 miles.
How many cycles would 150,000 miles then mean? Exactly 1500, which is the expected lifespan of the battery, so it's basically guaranteed that it won't still be over 70% of its initial capacity. So they'd have to cover its replacement.
Though to be frank, they also say that they won't give you / restore you to a brand new battery, just one that is in spec (aka working and above 70% capacity) so they probably aren't losing much on a repair like this. Still, you could theoretically do this multiple times. In reality they'll surely deny service.
1
-
@crocodile2006 LoL, you're the one coping. I have no problem seeing Thunderf00t making mistakes, and he does so from time to time. He's still basically spotless in comparison to Elon Musk, though. If the USA wasn't a slave-rider-owned 3rd-world-grade country, Elon would've been in jail long ago. "Beats rail", yeah, no, not even close. "Full self driving" - any normal country would have made that name illegal.
Back to the topic. How did Thunderf00t lie, when he gave an estimation? Why are you not at all concerned that Elon/Tesla were happy to give A LOT of bullshit, totally meaningless details, but didn't say the max cargo? How can you not see that that's the typical sign of a con man (as if he hadn't shown plenty of signs already)?
Oh, sorry, you were speaking about the charging time, not the cargo limit. Here's the thing: in the Semi event they only said that this new v4 supercharger is capable of delivering 1 MW and that it uses liquid cooling in the cable so it doesn't require a massive-width cable. There's no mention of 70% in 30 minutes, or anything about the speed of recharging, so there's no explanation either.
Even so, in practice, I'm not even sure in how many years those 1 MW stations will become plentiful enough that you don't have to wait for your turn. 1 MW is no joke, you can't just make that available everywhere. And it won't be able to simply service 20 trucks at once.
1
-
1. How is that different than simply having a pseudonym online? Do you feel bad in any way that you have to call me Winnetou17, which, let me tell you, is not my real name? Do you feel like you have to lie about other parts of our conversation because of this?
2. Stop communicating, as part of the project, on instant messaging, which is more emotion-prone. You can still have discussions in GitHub issues and on mailing lists, where people usually put more thought and effort, and aren't simply going to make a bad joke or otherwise quickly go into a flame war because somebody typed a bit faster than they were able to think. It's a bit strict, but I see the value; it should work.
3. How can not joining as a contributor stop you in any way from being interested? I am a contributor to exactly 0 (zero) FOSS projects, even though I am a developer, and I'm still very much interested in a good bunch of FOSS projects. When I retire, I think I'll even start contributing. Are you actually trying to troll, or do you actually think that what DT said means to stop being interested?
4. Here DT could've given more examples. The idea is to separate and have a project be about a single thing. Like, a browser is about being a browser: you enter an address, it fetches the data from that address and renders it. It should not save the planet, care about the poor kids in Africa and so on. Those should be different projects. This allows the project that is about software to stay apolitical and in general focused, and not delve into endless talks about inclusivity and other flame wars like that (though a flame war about micro vs monolithic kernels is totally ok)
1
-
@bakters Yeah, you're right on the pseudonym vs identity, but I'd say that matters when you are in person.
On the internet, on GitHub issues, maybe even on Discord... does it matter? As in, is there a context where the difference between just using a name and believing it to be the real one actually makes a difference?
It's true that DT didn't mention that this is only for in-text-over-the-internet collaboration. Given the nature of FOSS projects nowadays, that can be implied, or most would think as such, but it would've been good to be mentioned, to set the stage, so to speak.
1
-
The really nice feature of FOSS and Linux is giving you the choice. The option. The idea that it's your hardware, your system, you do whatever you wish to it, nobody can tell you what you should do.
To answer your question, DT, I'd say to have a checkbox somewhere at the top saying "Include proprietary software", with a tooltip explaining what that means, maybe listing the main licences that are or aren't free. That would give everybody all the power, while also making it visible and easy to stay FOSS (and easy for someone who has no idea what FOSS is to learn about it).
Though it might be a bit too much work, listing the license each piece of software uses would be neat too. Though then it might be a slippery slope of useful information that can be added, like the version, the release date and so on; it might clutter the visuals a bit. Designs are hard!
1
-
You know, your computer not booting properly because the anticheat is malformed is actually the least problematic of the disasters it can cause.
That's like your car no longer starting. It sucks but there's no real damage, just a potentially very annoying situation, depending on the timing.
What's much worse, in the car analogy would be for the car to not respond to your input and accelerate by itself into a crowd of people or into a building or off some cliff.
Back to the computer: a much worse outcome would be for the anticheat to have a vulnerability (or for Riot to get hacked and the hacker to push a malicious update) that gets exploited without the users knowing. Suddenly millions of computers could have all the data on their system leaked, or start doing stuff on their own, like posting really offensive things on the internet or being part of a gigantic DDoS attack. Or it could simply be used by a deranged individual who might delete and scramble all the data on the drive, making the system unusable without a full reinstall, with all the data unrecoverable from the local drives.
The potential for damage is off the charts, really. And just like the CrowdStrike update, it might take someone actually doing something this disastrous for people to realize what they're doing. Absolutely insane dystopian times we're living in, jeez.
1
-
@lprimak Unfortunately or not, that's not simply a take, that's the definition of high- vs low-level languages. Though indeed, in today's world "low level" is a pretty narrow category, and there's quite some variance among the high-level languages. Because of this, people started using terms like "higher" and "lower" level languages while still talking about high-level languages. And at some point some people simply started to call them "high" or "low" instead of "higher" and "lower".
Thing is, a high-level language allows abstractions. With good enough libraries, you could have quite an easy time in C too, only calling simple functions.
There's also the term "medium-level" language, specifically for the C class of languages, which are high-level languages but where you have access to pointers and a high degree of control in general.
Like alansaunders1828 said above, we already have multiple levels, not just two, nor three. To actually be useful, I'd say a new classification needs to arise, maybe something like the levels of autonomy for cars. So a level 1 language would be assembler, aka low level, full access, most likely not portable, all the way up to a level 5 or 10, which would probably be something interpreted, garbage collected and so on.
1
-
Yes, the lengths to which 14nm was stretched are quite extraordinary. If we were to compare a 6700K versus, say, a 10700K... I truly wonder: if you take a 10700K or a 10900K and set the clocks to be the same as the 6700K, and also disable cores so it's also 4c/8t, how much less power would it consume? Or how much better would it perform with the same power usage, but with frequencies and/or cores unlocked (including having 8c/8t)?
Anyhow, I kind of feel bad for Bob Swan. It seems that the company is coming back with Alder Lake, as late as it is. It seems Intel is getting a new tempo now, and Bob Swan is getting the short end of the stick.
Of course, at the end of the day, we can all appreciate the engineering part of Intel, while also acknowledging how bad and evil their marketing and some business decisions are. And those didn't get better under Bob Swan. That's what I hope Pat Gelsinger can maybe change.
1
-
For me it's not that unexpected. I didn't know he was planning (and for so long) to have a new channel. But he did mention that he's trading a lot in at least one HeyDT video, so I knew about that part. And the new channel, I can't say I'm too surprised about, even if I didn't know. Regarding what he's focusing on in the FOSS world, he has kind of already touched on everything: talked about it, made a video or answered in a HeyDT video about it. Going forward there would be a lot of rehashing the same things, just with minor changes, especially obvious for the "take a quick look at distro X" videos when new versions come out. For vim and emacs, there's not much left to do/say. A lot of other CLI utilities, not much to say extra. So he's kind of out of new topics (and I'd say it was a bit visible) unless he forces himself to find new ones.
1
-
2-3 and a half years? That's nothing! Hate to be that guy, but my Windows 10 install on my over-7-years-old laptop is still doing very fine. Never reinstalled it. And I'm sure it can still do 100 days of uptime just fine, but I can't test that now, I'm dual-booting into Gentoo. Hope that by the summer I'll be able to do everything in Gentoo. And I also hope that maybe this year I'll finally find and buy the next laptop that will serve me for another 7, preferably 10 years. Anyway, I'm getting off topic.
It's good to see people taking time off to chill, tie up loose ends in their lives, maybe even meditate a bit. And it's very nice to see the massively positive reaction from the comment section. I hope this becomes more common, with people being more responsible and less obsessed with their internet life, especially a ranking algorithm. It's good for the audience too: if at some point they have no videos to watch, they might be a bit more productive that day (or simply find something new).
1
-
Can we also talk about the TOTAL lack of responsibility? The idea that if X convinces Y of something, it's not Y's fault, it's X's? You know, the basis of censorship: don't read/listen to A or B, because you're too dumb to realize it's not good, therefore "we" have to restrict your access to it. It's not like this kind of censorship led to really awful stuff 100% of the time...
I mean, if someone is converted out of Christianity... why is the random guy on the internet guilty for that? Maybe it's God's way of testing his subjects, what do you know and care? And what happens when someone comes to the conclusion that God is not real on their own? We must pray to the all-knowing God, but God forbid we learn logic, or we might have second thoughts about this almighty God and its religion. Does that make sense?
Every input you receive must be considered, and it's you alone who comes to a conclusion. And so, you're solely responsible for it. If the beliefs are so easily challenged... maybe God should up its game?
1
-
I remember, though I might be wrong, that Intel wanted a yearly release cycle. For now, Battlemage seems to be arriving exactly 2 years after Alchemist, but like you said, I think they had to wait to sort out the driver issues. The driver is still not perfect, but it's actually usable for a good bunch of people now.
What I fear most with Battlemage is that it's again a bit too late. If its top SKU fights with the RTX 4060 or RTX 4070 or RX 7700 XT, at 250W... and then both NVidia and AMD launch a new xx70- or xx60-class GPU 5 months later... then Battlemage would again have to be extremely low priced in order to be competitive... which might very well mean it's sold at cost by Intel. If I'm not mistaken, that was kind of the situation with Alchemist. And if it's the same with Battlemage, well, Intel isn't exactly doing that well financially, so I'm not sure they can support it if it doesn't turn some profit.
The less gloomy part is that the same architecture and drivers will be used in Lunar Lake and the next CPU generation (rumors say that Arrow Lake has Alchemist+, not Battlemage). And those might sell quite well.
Right now the MSI Claw is basically the worst handheld, buuut, with some updates and tuning, it can ... get there, so to speak. I don't expect it to win against RoG Ally or Steam Deck, buut, it can get to be kind of on the same level, and with no issues. I'm so curious of seeing a Steam OS (or Holo or whatever it was called) on the MSI Claw, I'm really curious how it would work. Anyway, an MSI Claw 2 might actually be competitive this time. And be launched in time. Still speculation, but there is hope.
-
@xiphoid2011 You're not wrong, but for many, the money they got actually helps them train better. Many live in countries with massive underfunding, where successful athletes are the exception rather than the norm. It's the harsh reality.
I know this is the case for my country, Romania: literally all the medals we got are the sole merit of their respective athletes and those very near them, and basically 0% the merit of the state/government or the mass media (which is like 30% football aka soccer news, 30% tennis news, 30% gossip, usually about the people involved in football/soccer, and 10% at most all the other sports combined). And Romania is, overall, kind of mid-level in terms of wealth; there are certainly countries that are much worse off.
So yeah, an extra $1000 can be a significant boost in revenue, helping with the stress of travelling, having adequate equipment and nutrition, getting the people around the athlete to a decent level of payment, and so on.
-
I don't want to sound like an apologist, I'm certainly against DRM, but you simply cannot say that DRM was created "just so you cannot share content with your friends". It's so that a few people will not share it with the entire world. That is much, much different, and a massive strawman argument. And it is a totally possible, real, genuine revenue concern for the companies that created that content.
The best solution I can think of, for everybody involved, is to have some sort of DRM, preferably not as intrusive as they currently are, and to release a patch with no DRM 1-3 years after the game is launched. Like id Software did with their engines, releasing them to the world after several years, after they had made most of the money from their games anyway.
-
EE: "We don't drive penguins"
People running Gentoo Linux (or other distro): say what ?
Regarding the theoretical limits, seeing that weight drop by half (maybe by simply having a smaller battery) would still be very neat. At 8 kWh per 100 km, a 50 kWh battery means basically 600 km of range. That's pretty good. And at less than a ton, the car should be easy to handle, and it should need even a bit less power, so less heat and less cooling.
Over time, we'll hopefully get better batteries. So the 50 kWh stays the norm, but the battery will weigh, say, only 200 kg.
On that front, I can't wait for more advanced Citroen Ami, Renault Twizy etc., really small cars, as penguin-shaped as they can be, with something like 30 kWh batteries at, say, 500 kg for the whole vehicle. At 60 to 80 km/h, it could get to 6 kWh/100 km. So, 500 km of range, which for a go-to-work-then-shopping-then-home car would be perfect. Charge it like once per week. Or keep it sipping power from normal household plugs. Hell, at something like 6 kWh needed per day, it would only need about 4-8 sqm of solar panels installed in your parking spot at home to charge a battery that the car refills from at night.
Also, like others said, I can't wait for a similar analysis of Aptera's car. It seemed a bit too good to be true, but it's starting to get into the realm of possibility. Having the chassis itself be the heat radiator is a really neat idea.
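The back-of-the-envelope range math above can be sketched as follows (the consumption and battery figures are the comment's own rough numbers; the solar factors are additional assumptions of mine, not from the video):

```python
# Rough EV range arithmetic from the comment above.

def range_km(battery_kwh: float, consumption_kwh_per_100km: float) -> float:
    """Ideal range, ignoring reserve capacity and charging losses."""
    return battery_kwh / consumption_kwh_per_100km * 100

# 50 kWh pack at 8 kWh/100 km -> 625 km (the "basically 600 km" above)
print(range_km(50, 8))   # 625.0

# 30 kWh microcar at 6 kWh/100 km -> 500 km
print(range_km(30, 6))   # 500.0

# ~6 kWh/day from solar: assuming ~0.15 kW of usable output per m^2 and
# ~5 peak-sun hours per day (both assumed values), you'd need roughly
# 6 / (0.15 * 5) = 8 m^2 of panels, the upper end of the 4-8 sqm estimate.
print(6 / (0.15 * 5))    # 8.0
```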
-
@DeeSnow97 If your budget is strained, I'd say don't be an early adopter. Of course, it's totally up to you, how much of an idealist you are or how much faith you have in the thing you're supporting.
The way I see it, unless what you buy happens to be very good or perfect for you, you're essentially donating to the cause. Which is very nice, but it should usually be done with disposable income. Thing is, if it doesn't suit your needs well enough, you'll get to hate it while it also hampers you. Better to get the thing you need most at the best price, go on with your life, be productive, improve your life, and when you're ready to give (aka have disposable income), then give.
-
Sorry to be negative, but I think that nowadays, 9 years later, and given how much capital the big tech companies have (Apple is worth $2 trillion, right?), you don't need $5-$20 million, but more like $100-$1000 million. Yup, up to one billion.
It's a massive undertaking. Monumental, if you don't want the "easy" option of simply getting the money beforehand so you can spend on whatever comes up. It will take AT LEAST another year until you could start it.
Not to mention that after the lockdowns mostly cease, most likely starting this summer (northern hemisphere summer), people will be too busy being outside, enjoying not being locked up, to care as much about things like these. I hope I'm wrong though.
-
LoL at the "it may expose CPU vulnerabilities"... uhm... it's not a "maybe", it's a definitive "it WILL expose CPU vulnerabilities". I guess the caveat is that it depends on your CPU. Skylake (6th gen) is probably the most exposed, and by the 10th gen I think both Meltdown and Spectre, at least in their initial forms, were mitigated directly in the hardware.
I think this is a very nice option. Assuming you have some software you trust, you can simply have another kernel with maximum performance and no network support, into which you boot just to run that trusted software as fast as possible. Of course, that also assumes you don't need internet during that time. And whenever you need "normal" computing, you just reboot into the normal kernel. What's neat is that with Gentoo, you might even compile the software you use without networking support (very niche or rare, but still a bonus).
Of course, like others said, you might simply have a dedicated computer, not connected to the internet at all, for that software; in that case, yeah, even better.
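A minimal sketch of what such a dual-kernel boot menu might look like in GRUB (the entry names, kernel image names, and root device are all made up for illustration; `mitigations=off` is the real kernel parameter, available since Linux 5.2, that disables the Spectre/Meltdown-class mitigations):

```shell
# Hypothetical /etc/grub.d/40_custom entries: one hardened kernel for
# everyday use, one "fast" kernel booted only for trusted, offline work.

menuentry 'Gentoo (normal, mitigations on)' {
    linux /boot/vmlinuz-normal root=/dev/sda2
}

menuentry 'Gentoo (fast, offline only)' {
    # All CPU vulnerability mitigations disabled for maximum performance.
    # Networking should also stay down in this boot, e.g. by building this
    # kernel without network drivers or not starting the network service.
    linux /boot/vmlinuz-fast root=/dev/sda2 mitigations=off
}
```

This is a config fragment, not a drop-in file; on a real system you'd regenerate grub.cfg (e.g. with `grub-mkconfig`) after adding entries.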
-
I agree that the supply chain is the reason they cannot repair in a reasonable timeframe.
BUUUT, it's completely their making, their choice, their problem, their mistake for having it this way. I do not see this as a good enough argument. Legislation should not care about this. If you can produce new cars, then you should be able to provide parts for the SAME FREAKING CARS. If you have a "very lean supply chain", that's a you problem, not a me problem.
Overall, I think there should be legislation stating that whatever you make, you cannot have a license to sell unless you can provide service and replacement parts in a timely manner (I know, I know, "timely" is too subjective, I'm only stating the idea). If you do not provide them, then you are forced to release the schematics for the product and all of its parts. In the case of service, those who still have warranty should be able to get a full refund. If you do not have the ability to fully provide the schematics for the product and all of its parts, then you cannot get the license to sell, easy.
Ok, I know what I wrote above is currently impossible. Some parts cannot be made in-house and also cannot come with schematics, as they're from 3rd party vendors who do not care. In this case, a) initially there can be specific exceptions, and b) longer term, the manufacturers of those parts become liable under the same rules as above: if they don't provide the parts for general sale, then they are forced to release the schematics.
I think that in both cases, when acquiring a license to sell, the schematics should be provided upfront to a government entity. So when needed, the schematics can be made public without interference or possible "accidents" from the original company.
You know, sometimes I cannot help but think how far we'd have gotten if we weren't so petty. What I described above is so much extra work just because we cannot have the common sense and a bit of moral integrity to not steal and profit from others. Sigh.
-
@bvd_vlvd Don't mind Terry. He usually likes to tell people what to do without bothering to properly explain his view. But he can give valuable insight at times.
Back to the topic: I'm a systemd hater. To be clear, I don't know what state it is in now; from what I know it is getting better (maybe it even got ok), but for a long period of time it had a monolithism problem. Yes, that thing about "do one thing and do it well", the Unix philosophy, was not embraced.
The best example I have of why that is bad, and why systemd, at least at that point, was bad, is CVE-2018-16865. It's a vulnerability that was found in journald in 2018, I think (judging by the CVE id). It was, of course, later fixed. The problem is that it was a high severity vulnerability and... you couldn't disable/remove journald at that time without getting rid of systemd completely, because it was so tightly coupled. Imagine being a system administrator and knowing you have a vulnerability that you are forced to live with until a fix arrives. On a critical part of the system. Not nice.
The thing is that we DO know better. We knew that in 2014 too. We know to make things smaller, with parts that can be disabled or replaced (for security, customizability and reusability). And this was pointed out before the CVE happened: being so big and monolithic is a massive risk down the line. Imagine the same scenario as above, but with an uncontested systemd, 5-10 years later, with everybody using it. Another security hole that cannot be instantly disabled until the fix is ready would pose a risk similar to the Windows XP-era vulnerabilities that infected countless computers.
And the jump from "not for me" or "dislike" to "hate" comes from the way it was pushed and got mainstream, with the flags raised completely brushed off as nonexistent or irrelevant, or with other stupid excuses. It was the insistence that it was perfect and that those who were against it were somehow against progress, or sysvinit lovers.
Systemd is clearly very potent, full of features and capable. And when it appeared, it kickstarted people getting out of the sysvinit mess and in general having better init systems and better service managers. But the hate appeared (rightfully so, I'd argue) because of how... uhm... ignorantly it was pushed, so to speak.
-
@Purpleheart62001 Jesus Christ buddy, put some paragraphs in there. Also, I said one thing and you started to rant about another.
At no point did I say that there's no fault in the government and, well, usually in all the people that rule.
I was merely responding to Blues, who complained that Cuomo suggested starting to work a job you might not want. To which I replied that it's something normal in a crisis. Check my example above, in the reply I made to Jorden, with a hypothetical case where somebody is a full-time youtuber and suddenly might find themselves with a massive viewer and revenue drop because of a crisis. It's common sense that in this situation you go and find yourself another job, including doing things you might not want or like. That's why it's called "a crisis"!
And, to give another example, yeah, in a crisis where people stay at home and have trouble putting food on the table, your job as a bartender for some clubs might be much less marketable, as clubs suddenly have much less revenue.
In that last example, it shouldn't take another person to tell the bartender to search for another job; it should be obvious to him/her. And if a complete asshole like Cuomo says it, it isn't any less true.
-
@jasonthirded Sorry, I just remembered your comment, which I forgot to answer. Apparently there are many problems, mostly on NVidia. Screen capturing of something not using Vulkan might be problematic. Global hotkeys for apps, I understand, are quite missing. Some people (I think only on NVidia) have quite some stutters and lagging. Non-optional VSync is also a problem for some. Apparently scaling on monitors with different resolutions might also not work correctly. And there's more.
Michael Horn, a channel here on YouTube, just released a video called "Wayland is NOT ready...". His experience was quite bad, below average, but if you go through the comments there are many other complaints. Apparently most of those who had no issues have AMD cards like the RX 570 and RX 580. And use KDE.
Also, Linus from LTT recently had a video, I think on ShortCircuit, about a laptop which shipped with Ubuntu and NVidia and, strangely, had all sorts of problems, as if the manufacturer never tested it. I think the video title is something like "I bricked it in less than an hour". And I think most of the problems were NVidia + Wayland = bad. (also, while we're here, f**k NVidia!)
-
I'm part of this statistic change 😀
Long overdue, at the start of this year (well, just before the year change) I finally installed my first Linux on bare metal and switched to it. I went directly with Gentoo. In these 8 months, I logged back into Windows 10 (which still worked flawlessly, I might add, one of the reasons I switched so late) a total of 3 times, one of which was only to check that it still works and to do updates, just in case.
I'm not surprised that Gentoo isn't in the statistics; it's very niche by its nature, and while I love it and I think it's, by a good margin, the distro with the best customization possible (in an easy-to-do manner), I think it will always be niche.
I also have to point out (fortunately at least one other comment saw it) that the Steam desktop share is flawed, and it is trivial to see why. SteamOS users are Steam Deck users, and the Steam Deck, last time I checked, is neither a desktop PC nor a laptop. Even if you keep it on a desk or in your lap. So the share of Steam users on desktop Linux PCs is actually about 1%. Which, I do have to admit, is significantly lower than I expected. Steam somehow saw the smallest Linux increase, even though it should've actually increased the most, since it has two vectors: desktop PCs AND the Steam Deck. But then again, the surveys only sample a random portion of users, so there's always a chance of the stats being skewed by how the random picks happened. That's why it's better to wait several months to see a trend, like Bryan said in the video.