YouTube comments of Jeff Huffman (@tejing2001).
-
72
-
I'd love to see a comparison between how much it would cost to deal with the eventual effects of climate change vs how much it would cost to reduce CO2 emissions quickly. Essentially, view the reduction of CO2 emissions now as an investment, and (probabilistically) determine what its interest rate looks like. Is it a good investment? Or will it actually disrupt our society less to let things go for a while? (A rough sketch of what that calculation might look like follows this comment.)
I'm all in favor of pushing nuclear past the NIMBY problem and switching to it over coal. CO2 aside, it's far safer, and with higher market share it would become safer still and likely a lot cheaper as things get streamlined and more R&D goes into improving designs. With better designs, the amount of nuclear waste produced can be made extremely small, and you can even avoid ever needing to handle it at all.
I'm also all in favor of manufacturing hydrocarbon fuels using atmospheric CO2 and nuclear energy. It needs more R&D, but it has a huge potential to deal with CO2 emissions without drastically changing our society and the way we do things.
Renewables like solar and wind? ... not a fan. They need large amounts of grid energy storage of some sort, which isn't really feasible with current technology or anything on the horizon. They take a lot of land area away from nature, and manufacturing all those solar panels, wind turbines, and energy storage devices takes a lot of industrial capacity (and that manufacturing often releases CO2 itself). All that equipment ultimately needs to be disposed of when it breaks down, which is yet another source of environmental issues. Not to mention that repairing and maintaining such a large amount of equipment takes a lot of human effort and isn't actually all that safe.
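A minimal sketch of what that "emission cuts as an investment" calculation could look like, in Python. Everything here is a placeholder rather than an estimate: the spending stream, the avoided-damage stream, and the year ranges are made up purely to show the mechanics of finding the break-even discount rate.

```python
# Sketch of the "emission cuts as an investment" framing.  Both cash-flow
# streams below are hypothetical placeholders, NOT estimates: spend 1 unit
# per year for a decade, avoid 5 units per year of damages in years 40-59.
def break_even_rate(costs, damages_avoided, lo=0.0, hi=0.5, tol=1e-6):
    """Discount rate at which the mitigation 'investment' exactly breaks even.
    If society's real discount rate is below this, mitigation is the better deal."""
    def npv(r):
        spent = sum(c / (1 + r) ** t for t, c in costs)
        saved = sum(d / (1 + r) ** t for t, d in damages_avoided)
        return saved - spent
    # With costs early and benefits late, NPV falls as r rises, so bisect.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

costs   = [(t, 1.0) for t in range(0, 10)]    # placeholder mitigation spending
avoided = [(t, 5.0) for t in range(40, 60)]   # placeholder damages avoided
print(f"break-even discount rate ~ {break_even_rate(costs, avoided):.1%}")
```

With these made-up streams the break-even rate comes out around 5% per year; the whole question is whether realistic cost and damage streams put that number above or below the rate of return society can get elsewhere.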
24
-
Karsten Hoff: Actually, the storm at the start didn't bother me. True, the Martian atmosphere is very thin, but wind speeds are higher, so as I understand it the actual dynamic pressure isn't that much lower than what you get from winds on Earth. Combine that with the low gravity and it sounds pretty plausible that a once-in-a-decade storm, say, would have enough wind force to tip the ascent rocket. What bothered me about The Martian was how people didn't think about obvious issues until so late in the game, and how they ignored fairly obvious alternatives to their choices. How can they not start thinking about whether the MAV can intercept the Hermes until AFTER they've committed to the plan and the Hermes is already on its way to do the hyperbolic flyby? Also, blowing the front hatch near the end seems like a stupid risk given that they still had more fuel. They can sacrifice some of the fuel allotted for the capture burn into Earth orbit, so long as they still have enough that they stay bound to Earth (and that's way more than enough delta-v). They would end up in some higher elliptical orbit around Earth instead of a nice circular one, but it's not much harder for NASA to get a resupply vessel to them (than it would have been anyway) so long as they're actually gravitationally bound to Earth. Also, given that they had the maneuvering backpack thing, why was disconnecting from the tether not even an option? Yes, it's risky, but it's way better than Mark poking a hole in his suit.
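To put rough numbers on the capture-burn point: a minimal vis-viva sketch in Python, comparing the delta-v needed to barely stay bound to Earth with the delta-v needed to circularize. The hyperbolic excess speed and perigee altitude are placeholder values, not figures from the book or film.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

def capture_dv(v_inf, r_p):
    """Delta-v applied at perigee to go from a hyperbolic approach (excess
    speed v_inf) to (a) a barely-bound, highly elliptical orbit, and
    (b) a circular orbit at the same perigee radius r_p."""
    v_esc  = math.sqrt(2 * MU_EARTH / r_p)   # escape speed at perigee
    v_circ = math.sqrt(MU_EARTH / r_p)       # circular orbital speed at perigee
    v_peri = math.sqrt(v_inf**2 + v_esc**2)  # vis-viva speed on the hyperbola
    return v_peri - v_esc, v_peri - v_circ

# Placeholder numbers: 7 km/s hyperbolic excess speed, perigee 500 km up.
dv_bound, dv_circ = capture_dv(7000.0, 6.371e6 + 500e3)
print(f"barely bound: {dv_bound:.0f} m/s   circularize: {dv_circ:.0f} m/s")
# Staying merely bound costs well under half the delta-v of circularizing;
# that difference is the fuel margin the comment suggests trading away.
```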
19
-
I think you've missed a lot of the issue with systemd (not that I blame you... most people who hate systemd express themselves very poorly). To preface this, I use systemd. I'm just not entirely happy about it, and still on the lookout for something that sets off fewer warning flags in my head.
Systemd (the project, not just the init system) takes an approach to low-level system management that assumes monoliths are good. Using systemd tends to be an all-or-nothing affair for the most part (systemd-boot being the exception rather than the rule). If you use one part of systemd, you kind of end up having to use the rest of it, too. The pieces all interlock without understandable, stable interfaces between the components that would allow interchanging them. Functionality sprawls across the interconnected pieces, and the sheer quantity of obscure features is downright disturbing to anyone who understands the value of the unix philosophy and bothers to really look at what's going on.
Systemd-init also has a couple of really excellent ideas at its core. The event/transaction system it uses to manage services has changed how people think about ongoing state in their systems in general, and given them a much more powerful language to express how they want their system to behave. The idea of treating many different sorts of system state with the same concept of "units" has a lot of power. The dependency relationships that are possible among units, though not very consistently structured, are very expressive, allowing you to ensure things you had no hope of ensuring before. Having an "init" for each user allows user-level configuration to gain these same benefits as well. It's no wonder it took over, despite the issues.
So I use systemd. It has some really nice features, and the cost of using anything else is just really high right now. But I continue to keep my eye out for a solution that gives similar benefits, but is more modular, keeps the agility to replace components fairly painlessly, and avoids lock-in and feature creep.
4
-
@jpisello That's a common misconception. See https://www.quora.com/Is-there-anything-smaller-than-a-Planck-length-1 for more detail, but this statement is only very, very tangentially related to the truth. There is some length scale at which determining position that accurately requires amounts of energy that distort space, possibly even creating a black hole and defeating the original purpose of determining the position. However, without a proper theory of quantum gravity, no one knows exactly how this would work, and even then, it doesn't mean that space isn't continuous in at least some senses (there should still be a truly continuous concept of translation, for example). And even if position certainty is limited, that doesn't mean that smaller lengths don't conceptually exist. The Planck scale is ROUGHLY the scale at which this is expected to occur, but it could easily be off by an order of magnitude.
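For scale, the Planck length itself is just the dimensional-analysis combination of hbar, G, and c that has units of length; a quick Python sketch using standard constant values:

```python
import math

# Standard constants (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# The Planck length is the unique combination of these constants with units
# of length -- a dimensional-analysis estimate, not a hard boundary.
l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.2e} m")   # ~1.6e-35 m
```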
3
-
@DerKiesch When it comes to whether the MAV can intercept the Hermes, I wasn't so much talking about the crew's decision as the discussion of possibilities back at NASA. The whole business of working out how to lighten the MAV enough to make the intercept happened only after NASA had considered and rejected the plan, and the crew had gone against orders to force their hand. How did the question of whether the MAV could be lightened enough to make the intercept not come up far sooner, when the plan was still under initial consideration? Also, the thrust doesn't really matter much. Assuming they have almost any maneuvering capability at all, the only real time constraint is the time until Mark runs out of air. Given that it is (and must be) a very low-velocity intercept, they don't have to make the first intercept; they can set up a second one with very little thrust. Also, I didn't get the impression they were even considering using the ion thruster at that point. They were using some kind of maneuvering thrusters with a different fuel source, and all their discussion at the time was about having enough fuel, not about having enough thrust. Also, if the final stage of the MAV really has zero delta-v, and the Hermes really has so little maneuvering capability, how was a rendezvous supposed to work under normal conditions?
1
-
The unix philosophy needs a bit of translation for modern times. The core principles are making your software components composable, using well-defined, interoperable interfaces between components, and defining the boundaries of components in such a way that each component has a single, easily abstractable purpose. Those core principles are still very relevant, even in the spaces where they aren't applied much right now. The Linux kernel is one example; things like Blender and web browsers are too. You could build something like Blender in such a way that it was essentially just a UI that made it easier to manipulate underlying software components with interoperable interfaces. The Linux kernel could also be a LOT more modular than it is, if the infrastructure were designed differently.
1
-
I think the fact that GPLed software can't feasibly be sold actually constitutes a "bug" in the GPL. The explicit allowance of sharing the software should perhaps be limited to only those who have bought a license (this would have to be worded in such a way that redistribution through distro package managers was clearly allowed). The other freedoms the FSF defines could all still be protected in a copyleft manner, and the license could automatically "convert" to full GPL after a set amount of time (initial suggestion: 10 years, though it would probably vary depending on the software). I think it's a reasonable compromise between the freedom of the end user and the need to pay for the development. We could call it something like "user serviceable software." It might catch on among more open-minded commercial software creators.
You could also implement license-reminder systems, which, unlike DRM, wouldn't be designed to control what you can do, but just to help prevent accidental violation of the license: to override the reminder, the user would have to take an explicit action signifying that they believe the system is in error and that they actually do have a license. (This would also be important if it was distributed through distro package managers, since users could easily mistake it for free software if they weren't paying attention.)
Somewhere in this set of ideas I think we can get a solution that actually works for society as a whole. Right now, open source software developers are giving their talents away as a kind of charity, and I don't think that's a viable way for software in general to work for the whole of society, but we should have a path that is viable, and that is also ethical in how it treats the end user.
1
-
I know I'm raising a dead thread here, but I've found a good solution to this problem on NixOS (it should also work when using Nix + home-manager on a non-NixOS distro):
References (most recent first, but the reasoning for why to do this is explained best in the original one):
https://elis.nu/blog/2020/06/nixos-tmpfs-as-home/
https://elis.nu/blog/2020/05/nixos-tmpfs-as-root/
https://grahamc.com/blog/erase-your-darlings
I think I prefer grahamc's approach with snapshots rather than the tmpfs route, since I don't really want to take up system memory, but the idea is this:
Delete everything every boot (maybe even more often)... EXCEPT what you have personally decided you want to keep.
Using home-manager allows you to declaratively create your static dotfiles from a single central configuration file, which you can comment all you like and follow DRY principles with (it is, in fact, written in a full-fledged programming language). For the data actually saved and/or manipulated by programs, you can instead declare that a file or directory should be redirected to your permanent store.
This means that a necessary part of installing and using any piece of software is determining where it saves information, deciding whether you want to keep that information, and, if so, changing your home-manager configuration to redirect it to your persistent directory, of course with comments to remind you, in your own terms, what program/purpose you're saving it for.
Any program that silently adds files to your homedir will just get them obliterated, so anything that IS there is either from something you ran quite recently, or something you have a comment about in your home-manager configuration.
A nice side effect of this is that you know your backups work. As long as you back up your persistent area, you're essentially loading from a backup every time you boot. If some state you care about isn't getting backed up, you're going to know it pretty quickly.
1
-
Sorry for the wall of text, I've had this on my mind for a while and just had to say it.
Why postulate many worlds at all, when you can think in terms of knowledge, probability, and correlation? The density operator (wave functions only represent pure states, and thus aren't a sufficiently complete description in my view) represents your knowledge about a system, much like a classical probability distribution (and in fact diagonal density operators are classical probability distributions over the basis states; also, by choosing an appropriate state basis, any density operator can be made diagonal). The difference from classical probability distributions is that there is no assumption that there are any density operators that make every property of the system definite simultaneously (this is the key idea, take some time to think it over), which also means it's no longer possible to do away with the probability stuff and deal with a single fully definite state instead, since those don't exist. You still get determinism though, in the sense that no information is created (or destroyed, for that matter) through time evolution.
Entanglement is just correlation (so no action at a distance), superposition is a result of your knowledge about the system not being applicable to the question at hand (rather than your lack of knowledge, like classical uncertainty), and wave function collapse is the change to your knowledge that results from interaction (much like the classical probability distribution for what hand of cards you were just dealt "collapses" when you look at the hand). It's also worth noting that both decoherence and wave function collapse are a result of interaction of the system with things that aren't being modeled, namely the environment and yourself, respectively, so the apparent creation/destruction of information in how we think about those processes isn't real (no "dice" involved, even in measurement).
The No Cloning theorem just says that when you run an object through a perfect copier, the outputs will be perfectly correlated (nawww, really? copiers do that?) (and you'll need raw materials about which you have maximal information in order to make the copy too). The No Communication theorem is no surprise since nothing about this whole perspective involves action at a distance. And so on.
Once you accept that there are no probability distributions that make every property of a system definite, everything else falls into place quite intuitively. You lose the expectation of definiteness of reality, sometimes called "realism" (that your questions have definite answers, even though you don't know them), but this is a much smaller sacrifice than it might at first seem, since you still have the expectation of coherence of reality (that if you and I have incompatible knowledge about reality, one of us is wrong).
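Here's a small numerical illustration of the "entanglement is correlation, collapse is updating knowledge" reading, using a Bell pair in Python/NumPy; the particular state and measurement basis are just a convenient example:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), written as a density operator.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)          # pure state: maximal knowledge about the pair

# Reduced state of qubit A: trace out B.  The result is the maximally mixed
# state -- perfect knowledge of the pair still leaves every question about
# either part alone undetermined.
rho4 = rho.reshape(2, 2, 2, 2)    # indices: (a, b, a', b')
rho_A = np.einsum('ikjk->ij', rho4)
print(rho_A)                      # 0.5 * identity

# Joint measurement statistics in the computational basis: the outcomes are
# perfectly correlated, like the two outputs of a (classical) copier.
probs = np.real(np.diag(rho)).reshape(2, 2)
print(probs)                      # [[0.5, 0.0], [0.0, 0.5]]

# "Collapse" as conditioning: suppose A is measured and the result is 0.
# Project and renormalize.  This is an update of the describer's knowledge,
# not a signal sent to B.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))   # |0><0| on A, identity on B
rho_post = P0 @ rho @ P0
rho_post /= np.trace(rho_post)
rho_B_post = np.einsum('kikj->ij', rho_post.reshape(2, 2, 2, 2))
print(rho_B_post)                 # now |0><0|: B is known to be 0 as well
```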
1
-
If you like the configurability of Gentoo, but don't want to compile literally every package on your system, there's also NixOS. It provides the same level of customizability as Gentoo, being a source-based distro, but because of the reproducibility demands it makes on build processes, it can transparently use cached build results from the main NixOS organization. So you only run compiles for things you actually change.
With NixOS, you also create your configuration files as part of your system build process, from a single declarative config (which is actually written in a full-fledged programming language). So your entire system, as you've customized it, aside from your genuinely stateful data (what programs need to actually change at runtime), can all be built as part of a single process, and atomically updated and rolled back. The stateless nature of this process, combined with the organization of having all your tweaks documented in a single place, can be very freeing.
That said, NixOS is quite different from normal distros in order to accomplish this. You need to do a lot of things differently due to the extra layer involved in creating all your config files, and in fact general Linux binaries will not even run on NixOS without some special tweaks, because it doesn't have an ld-linux.so in the normal place.
1
-
Regarding installing things by piping curl into sh (which you brought up briefly), it's not necessarily that bad. Nix uses this method for its standard install script, and that rubbed me the wrong way, but try as I might, I couldn't come up with a concrete reason this was any less secure than other potential ways of installing that software.
The shell script is entirely contained in one big code block, so the shell will actually not execute any of it if you get an incomplete download, because the bracket mismatch causes a parse failure. If curl has an error of some kind, that goes to stderr, and stdout gets an empty string, causing sh to do nothing. The URL is https, so you're not vulnerable to a man-in-the-middle attack. Assuming you trust the website creators enough to run their code on your system (which you do, obviously, if you're trying to install their software), there's no reason not to pipe something downloaded from their site into sh, if appropriate care has been taken regarding these details.
All of this only applies to a one-time event of software installation from a trusted source, however. Doing this on an ongoing basis is... bad. I agree this example was horrible advice. The source was sketchy, it was totally unnecessary, and very open to lots of things going wrong.
1
-
@japanada11 Yeah, I forgot to add that unique factorization only applies to multiplicatively cancellable numbers, which discounts zero. Thanks for pointing that out.
I would take the view that -3 is a prime number, but it's "the same" prime as 3. Viewed that way, 6 = 2*3, 6 = (-2)*(-3), -6 = (-2)*3, and -6 = 2*(-3) are all "the same" factorization of "the same" number. (This is slightly different from what I said before, I know.) Explained by example like that, I don't think it's too hard to follow, though the precise abstract statement is indeed too complex to use as an introduction to the subject. I just think it's a shame that when we teach it, we don't even address the issue of negative numbers at all.
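A tiny Python sketch of that "same factorization up to sign" idea, pulling the units +1/-1 out front before comparing (the normalize helper is purely illustrative):

```python
from collections import Counter

def normalize(factors):
    """Reduce an integer factorization to (sign, multiset of positive primes).
    The units +1/-1 are pulled out front, so two factorizations count as
    'the same' exactly when they agree up to the signs of the factors."""
    sign = 1
    primes = Counter()
    for f in factors:
        if f < 0:
            sign = -sign
            f = -f
        primes[f] += 1
    return sign, primes

# 6 = 2*3 and 6 = (-2)*(-3): the same factorization once units are separated.
print(normalize([2, 3]) == normalize([-2, -3]))   # True
# -6 = (-2)*3 and -6 = 2*(-3): likewise the same as each other.
print(normalize([-2, 3]) == normalize([2, -3]))   # True
# 6 and -6 differ only by a unit, so their normal forms differ only in sign.
print(normalize([2, 3]), normalize([2, -3]))
```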
1
-
After digesting this for a bit, I've settled on the idea that the electrons are a thin interface between the fields and the energy source/sink. The chemically-forced movement of electrons against the E field in the battery moves energy into the fields, and the tendency of electrons to vibrate nuclei in the resistive load provides a means for them to transmit the energy from the fields into heat. In both cases, though, the electrons themselves carry almost none of the energy at any time. They are dragged around by the fields and whatever the source or sink of energy is, like a piece of paper in between a heavy pushing object and another being pushed by it.
One illuminating thought experiment along these lines is to consider the value of the Poynting vector inside the conductor. It's near zero because the electric fields are small compared to outside, but what power flow there is actually points into the wire, not along it. Energy from the fields is entering the wire to be converted into heat due to the wire's resistance, but the power flow along the wire is all outside. Really the only power flow that's actually inside the wire is the (non-thermal) kinetic energy of the electrons, which is minuscule, both because the drift velocity is quite low and because electrons weigh almost nothing. That minuscule amount of energy is the transfer medium all the power goes through when interfacing with some other physical system, but it carries basically no energy itself.
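To put rough numbers on how little energy the drifting electrons themselves hold, here's a back-of-the-envelope Python sketch; the wire size, current, and load voltage are assumed example values, and the copper electron density is the usual textbook figure:

```python
# Rough numbers for an assumed example circuit: a 1 mm^2 copper wire carrying
# 10 A, feeding a load at 120 V.
E_CHARGE   = 1.602e-19    # C
M_ELECTRON = 9.109e-31    # kg
N_COPPER   = 8.5e28       # free electrons per m^3 in copper

area    = 1e-6            # cross-sectional area, m^2 (1 mm^2)
current = 10.0            # A
voltage = 120.0           # V across the load

v_drift = current / (N_COPPER * E_CHARGE * area)       # well under 1 mm/s
electrons_per_meter = N_COPPER * area
ke_per_meter = 0.5 * M_ELECTRON * v_drift**2 * electrons_per_meter

print(f"drift velocity: {v_drift:.2e} m/s")
print(f"drift kinetic energy per meter of wire: {ke_per_meter:.2e} J")
print(f"energy delivered to the load each second: {current * voltage:.0f} J")
# The drift kinetic energy stored per meter of wire (~2e-14 J) is roughly 17
# orders of magnitude smaller than one second of delivered energy (1200 J):
# the electrons hand the energy off; they don't carry it.
```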
1
-
The interesting question for me is why only the orange hue gets a new name for its darker variant. I suspect it might be because we actually have 4 kinds of photoreceptors, not 3. Although the sum of the red, green, and blue cone response curves is pretty similar to the rod response curve, it dips a good bit between the red and green peaks in comparison, so spectral orange light can get the rods responding more than would be predicted from the response of the red, green, and blue cones alone. This extra dimension of perceivable color, though we have low fidelity in it, might be the reason brown gets its own name. It might also be a notable difference between what our eyes can see in reality and what can be shown on a screen. Does anyone know how big an issue this is?