Comments by @grokitall on the "Brodie Robertson" channel.
-
there are some minor misunderstandings of some things in the threads which need to be made a little clearer, so here goes.
to produce an operating system from scratch, you need to either 1, write everything yourself, or 2, go online and download a bunch of stuff from people you don't know and hope you can trust the code.
option 1 does not work very well: apple and microsoft did not do it, and neither did google or steam. it is slow and expensive. look at how long it took reactos to get to the point where people who were not kernel developers could work on it (not criticising them, this stuff is hard).
this only leaves you with the second option, which you solve by going to conferences and establishing a network of trust through key signing parties.
as this requires each person to show the other some id, it is moderately secure against everyone except state actors, who can just issue an id, and id thieves.
all the network of trust does is produce a lot of people who can assert that the person you just met has been verified to them as being the person they say they are (for a certain level of verified).
these people then commit code to version control, and once you get to centralised and distributed version control, you also have the person signing that they produced the work. this means if it later turns out that they were a problem, you can easily go back, and track what they touched and audit it if needed.
it does not stop a bad actor like the xz maintainer, you need other processes for that.
this gets you to the point where you can confirm that the code you got was the same as the code they distributed (at least if the version control system does cryptographic hashing like git), and the network of trust identifies all of the contributors.
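to make the hashing point concrete, here is a minimal python sketch of git style content addressing (the blob header format is how git really does it, the sample content is just an example): the object id is a hash over the bytes, so anyone holding the same bytes computes the same id.

import hashlib

def git_blob_id(content: bytes) -> str:
    # git stores a file as a "blob" object: the id is the sha-1 of a
    # small header ("blob <size>\0") followed by the raw file bytes.
    header = b"blob " + str(len(content)).encode() + b"\0"
    return hashlib.sha1(header + content).hexdigest()

# identical bytes always give identical ids, so two people can confirm
# they hold the same code just by comparing ids (cf. `git hash-object`).
print(git_blob_id(b"hello world\n"))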
then you need to link the code together with the libraries it depends on. the original paper that started the nix package manager, which led to nixos, described its purpose as declaratively managing the exact version dependencies, so that you could be confident that what you used to build it last time is the same as what you use to build it this time, effectively semantically versioning the build dependencies. it appears that the people behind nixos have extended this a bit, but the principle remains the same: if the dependencies change, then the key for the dependent packages will also change. guix did not like the nomenclature, and thus decided to declare it using scheme, but otherwise they do the same thing.
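here is a toy python sketch of that principle, not nix's actual algorithm (the package names and hashes are made up): each package's key is derived from its own source plus the keys of its dependencies, so a change anywhere below propagates upwards.

import hashlib

def store_key(name, source_hash, dep_keys):
    # toy illustration only: the package key covers its own source plus
    # the keys of everything it depends on, so a change anywhere below
    # it in the dependency graph produces a different key.
    material = "\n".join([name, source_hash] + sorted(dep_keys))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

libc = store_key("libc-2.39", "aaaa", [])
zlib = store_key("zlib-1.3", "bbbb", [libc])
app = store_key("myapp-1.0", "cccc", [libc, zlib])

# rebuild with a patched zlib: its key changes, and so does the app's,
# even though the app's own source is untouched.
zlib_patched = store_key("zlib-1.3", "bbbb-patched", [libc])
app_rebuilt = store_key("myapp-1.0", "cccc", [libc, zlib_patched])
print(app != app_rebuilt)  # True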
this gets you to the point where you can compile stuff and be confident where all the code came from, as you have a complete audit trail.
reproducible builds go one step further, validating that the stuff you then compile will always produce the same pattern of bits in storage. this is non trivial for various reasons, some mentioned by others in the thread, and many more besides. declarative dependency management might also give you reproducible builds, but that is not what it was designed for.
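as a rough python sketch of what the reproducible builds check boils down to (the file paths are hypothetical, and real tools like diffoscope go further and explain the differences rather than just detecting them):

import hashlib
import sys
from pathlib import Path

def artifact_digest(path):
    # hash the raw bytes of a build artifact: identical bits on disk
    # mean identical digests, no matter where or when it was built.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    # usage: python check_repro.py build-a/foo.deb build-b/foo.deb
    mine, theirs = (artifact_digest(p) for p in sys.argv[1:3])
    print("reproducible" if mine == theirs else "outputs differ, investigate")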
then you take the output of the reproducible build, put it in a package, and sign it. this gets you to the point where you, as the person installing it, can be confident that the binary packages you just installed are exactly the same as the stuff the original upstream contributors intended, with a few tweaks from your distribution maintainers to make it work better together.
and you can audit this all the way back to the original contributor to the upstream project if needed.
none of this says anything about the quality of the code, or about the character of the contributors; you need other steps for that.
as the sysadmin for your business, you can go one step further, and create versioned ansible install scripts to do infrastructure as code, but it does not add to the model, as your ansible scripts are just another repository you use.
i hope this clarifies things a bit.
-
@fanshaw for a lot of the machines which got hit, virtualisation is not an option, and neither is having enough backup machines.
how do you have backups for flight status boards 20 feet in the air, or for the tills and card payment systems built into your restaurant?
what about having a spare hotel booking system for your 200 room 5 star hotel?
too many people in these discussions seem to be ignoring some pretty important facts about this outage. most of the machines which got hit are locked down for a good reason. most need access to internet services in order to do their job. lots of them are in places which make it impossible to allow the ordinary staff to undo the security in order to just reboot the machine in safe mode and do the simple fix. this is what made the disruption take so long, and cost so much, because you need specialist workers to physically go to these machines to get the things unlocked to escape the boot loop.
and of course you cannot keep enough spare workers on staff to cover a once in a long while outage like this, because such staff are in short supply, and thus cost a lot to employ and retain.
every business that got hit will need to revisit their resiliency plans, looking at what other steps they can take to make it less of a problem next time, which by definition will also include looking at how the outsourced software is delivered. if one of their competitors decides to do it properly, so the update can be tested prior to installation, they will immediately jump up the list of providers. equally, if a company does what crowdstrike did and deliberately subverts the testing and resilience plans of its customers, it will rapidly fall near the bottom of the list.
a lot of the people who have moved to linux have made the choice that it is cheaper to move than it is to put up with all the nonsense that comes with staying on windows. i am sure suppliers of software other than the operating system are equally vulnerable to customers choosing to move to an alternative.
-
@fuseteam you seem very set on the idea that every provider downstream of redhat is just a rebrand, which just is not true.
there were whole classes of people who were only using redhat and their derivatives because redhat, as part of their marketing, said that if you need enterprise timescales, then use us as your stable base and do respins and derivatives based on us. that is what centos was. people are annoyed because redhat promised 10 years of support for centos 8, then ended it after only 1 year, while people were still migrating to it. even worse, they gave almost no warning.
as to the derivatives, each exists for a specific reason, and supports customers redhat no longer wishes to support.
clear linux is for an all intel hardware stack.
rocky linux is for centos users where the move to rhel is not an option.
scientific linux was a centos derivative with extra software which was needed mainly in places like fermilab and cern.
oracle linux needed specific optimisations which made running their databases better.
others were used for embedded systems and infrastructure, or for alternative architectures.
pretty much all of these use cases were at one time actively supported by redhat or centos, and are now prohibited under their dodgy eula.
even the case where the city of Munich needed to create a respin specifically for their 15000 seat council rollout to include extra software only they needed is now banned.
redhat used an open core approach in order to grow, and a use-us-as-upstream approach to enter markets that were not otherwise open to them. it had the added benefit of not fragmenting the enterprise linux market much. unfortunately for them, not everyone can suddenly switch to paying them lots of money at short notice, and even more cannot afford the rat-on-your-boss tactic made disreputable by microsoft and their enforcement arm, the business software alliance.
when you run a business, you make a profit, and then decide how much of it to invest in research and innovation. the current management at redhat seems to think that it should work the other way around, where they decide what needs doing and how fast, and then try to force people who, with redhat's blessing, never needed to pay to make up the shortfall.
the current fracturing of the enterprise market is a direct consequence of this attitude, as is the percentage of redhat customers looking for ways not to be held hostage by the next silly move they make.
these people who have forked rhel had every right to do so, as redhat had encouraged them to do it. lots of them do testing for scenarios redhat does not support, and then push those changes both to centos stream and to the primary upstream developers, so that they do not have to keep large patchsets supported out of tree.
these patches and extra bug finding are then made available to rhel, either from upstream directly, through fedora or centos, or directly as patches to redhat.
this is fundamentally how open source works: someone finds a problem, develops a fix, and sends it upstream, and then the downstream users get to use it without necessarily having a need for support. when support is needed, they find a company which is responsive to their support needs, which redhat increasingly is not.
redhat has now become just another entitled proprietary software company who happens to use lots of open source software to try and keep the costs down, while the management has forgotten this fact and decided to stop playing well with others. this has already come back to bite them, and will continue to do so.
-
@travisSimon365 hurd would not have won even if history was different. i was around at the time, and everyone was looking to work from bsd code, but the at&t vs berkeley case had a chilling effect. the other possibility was that lots of people wanted to extend minix to be more useful, but andrew tanenbaum not only wanted to keep it simple as a teaching tool, but also refused to allow others to maintain a set of patches in the same way as was done for the ncsa httpd server, which was how we got apache, literally "a patchy server" due to the sequence of patches on patches on patches which were maintained at the time.
also remember that it was not until someone wanted to support a third architecture that the default changed from forking the codebase and getting it to work on the new architecture, to bringing the code needed for each architecture into the mainline kernel, managed with configuration flags.
so you had 386bsd being slowed down by at&t, minix expansion being actively opposed by tanenbaum, gnu's kernel being delayed by indecisiveness over how it should work, and multiple commercial unixes just being too expensive for students.
then along comes linus, who like a lot of students wanted a unix workalike, and happened to be going to university in a country with a funding model that did not exist anywhere else. he even used the monolithic design, which he thought was worse, for speed.
it was not that linus was cleverer, with some grand plan, just that everyone else could not stop shooting themselves in the foot.
also, your alternatives at the time were either a primitive version of dos, or cp/m.
-
@jamesross3939 by their nature, diverse software stacks have some level of source incompatibility. just look at the problems in making the same program work across multiple versions of windows.
as regards multiple distributions, we don't live in a world where everyone has identical needs so naturally at some point you start to get divergence. this even applies with windows where you have different releases for servers, desktops, oems, embedded, and others.
these divergences naturally make it so that you cannot guarantee that a given program will work the same way, or at all, on the different platforms, and the only way to deal with that is lots of testing.
true binary compatibility requires a level of control by the vendor which results in large groups of people being ignored (causing divergence through making new systems which address their needs), or severe levels of bloat (to accommodate needs most users do not have). often it does both.
in particular, you would need every variant to use exactly the same versions of every library on exactly the same compiler, all the way down. good luck getting blind people to move to wayland which currently has no support for them.
the best mitigation we have at the moment is flatpaks, which package non-interacting code together with the library versions it needs, to produce cross-distribution packages of user space applications.
most distributions get created because their users have a need not covered by the mainstream ones, and a lot of them are extremely niche. their use case often will never become part of mainstream distributions, and their effect on the ecosystem as a whole is negligible.
for others, the minority use case gradually becomes more important, gaining mainstream adoption as the work by these niche distributions becomes available in software used outside those distributions, and the niche distribution either becomes irrelevant or remains as a testbed which feeds back into the wider ecosystem. this is what happened with the real time linux distributions, and as more of their work made it upstream, fewer of their users needed the full real time systems.
-
@darukutsu the zfs story is simple. the people writing it for linux are doing a from scratch reimplementation, so from that point of view the oracle licence only matters if those people have seen the original source code.
where the issue comes in for the kernel is that oracle has a history of suing people over look and feel and over workalikes, so the kernel people will not include it without a statement from oracle that they won't sue.
for reiserfs, the issue is that the kernel has version 3, it is barely used, and it has some fundamental bugs around the 2038 date problem. as most of the developers moved on to other things, and the remaining ones have moved on to version 4, which has not been upstreamed, and version 5, which is a work in progress, the bugs in version 3 will not be fixed, leaving removal as the only choice.
as for kent and bcachefs, he does not play well with others, so he needs someone else handling upstreaming of the code.
-
@qwesx sorry, but that is just wrong.
the licence gives you the right to modify, distribute, and run the code.
what compiling locally does is move the legal liability from the distribution to the individual user, who is usually not worth going after.
as regards testing in court, the options are relatively few, and apply the same no matter what the license is.
if you claim not to agree to the license, then the code defaults back to proprietary, and you just admitted in court to using proprietary code without a license.
if the licences are incompatible, your only choice is to get one side or the other to relicense their code under a compatible license for your usage, which is usually somewhere between unlikely and nearly impossible with most projects, meaning that again you do not have a valid license for the derivative work, and you just admitted to it in court.
with zfs, the problem is even worse, as you have oracle, who sued google for having a compatible api, which was eventually resolved as fair use, but only after costing millions to defend and taking many years.
because of this the linux kernel community will not take code providing any oracle apis without a signed statement from oracle that they will not sue, not because they do not think they will win, but because they cannot afford the problems that will occur if they do sue.
individual distributions shipping zfs would face the same potential consequences, which is why most do not ship it.
this leaves you back at moving the liability from the kernel, to the distribution, to the end user, where the benefits of suing most of them are just not worth it.
as to trying it in court, there are lots of licenses, and lots of people either being too silly to check the licenses properly, or trying clever things to skirt the edges of legality because they think they have found a loophole.
there are also lots of places to sue, and as floss is a worldwide effort, you have to consider all of them at once, which is why it is a really bad idea to try and write your own license.
in america, people have tried the trick of not accepting the license, and have failed every time. the same is true in germany under a mix of european and german law.
this covers the two biggest markets, and can thus be considered settled. what happens in every case is that the license forms a contract for you to use the otherwise proprietary code under more liberal terms, and when you reject it, things revert back to the proprietary case, where you then have to explain why you are not simply using the software without any license at all.
trying to be clever has also been tried, and while the law is less settled than for rejecting the license, you need every judge in every venue to agree with your interpretation of the license, which normally does not happen, so you are back at being in breach of the license, and hoping to get a friendly judge who does not look to give punitive damages for trying to be too clever. the insurance risk usually is not worth it.
the only other option is to try and comply with the license, but when you have multiple incompatible licenses this is not a valid option.
-
@marcogenovesi8570 if you mean the rust developers expecting the maintainers to go and hunt up a lot of extra semantic information not needed in c just to comply with rust's expensive typing system, and calling it documentation, that is one aspect of it. when you choose to work in a different language, which has tighter requirements, you make building the language api bindings harder. that is fine, but then you have to be prepared to do the work to find that extra information, and only after you think you have got it right do you get to call requesting confirmation of it documentation.
this happened with the ssl project in debian, where the person who initially wrote the code was not the person who provided the clarification, resulting in a major security hole. the patch developers did the work and asked "is it case a or case b", and got the wrong answer back, because the answer is not always obvious.
this is why the c maintainers push back at the claims that it is just documenting the api and that it is cheap, when it is neither.
like with kent, and some of the systemd developers, the issue is not the language the code is being developed in, but the semantic mismatch between the information needed by the existing work and the potential ambiguities in how people want to use the existing apis differently from how they are currently being used. resolving those ambiguities might require digging around in the code base and mailing lists to see if a close enough use case came up in potentially thousands of posts, just to clarify the semantics for the previously unconsidered use case.
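a rough analogy in python rather than c and rust, using a made up function rather than anything from the kernel: adding type annotations to old untyped code forces you to pin down semantics the original author never wrote down, and a wrong guess gets baked into everything built against the annotation.

from typing import Optional

USERS = {1001: {"name": "alice"}}  # hypothetical stand-in for existing data

# the original, untyped function: nothing written down says whether a
# missing uid is an error, returns None, or returns an empty dict.
def lookup_user(uid):
    return USERS.get(uid)

# adding annotations (the analogue of writing a binding with a stricter
# type system) forces a decision the original author never documented:
# None is now part of the declared contract, and every typed caller
# must handle it.
def lookup_user_typed(uid: int) -> Optional[dict]:
    return USERS.get(uid)

# if the annotation guesses wrong about the intended semantics, the
# mistake is baked into everything built on top of the binding, which
# is why the disambiguation work is not free.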
the time for them to do this is at merge time if there is an issue, not all upfront because it is "just documentation".
the general principle in most code bases is that if you want to create new code, go for it, but when you want to merge it with the existing mainline code base, do it in small separate chunks and be prepared to do the extra work, not just to get it working, but to move it to a shape that is compatible with the main code base. if it is more than a drive by bug fix, expect to stick around and be prepared to do a lot of the maintenance yourself. this goes double if you are working in a different language than the maintainers.
otherwise, it eventually gets treated as unmaintained code, and deprecated prior to removal.
again, it comes down to respecting the existing development process, being willing to work within it, and if you need minor changes to that process to make the work easier for both sides, working within the existing change process to gradually move the standard way of doing things in the desired direction, while bearing in mind that in the kernel there are 15000 other people whose direction does not necessarily match yours.
kent does not get this, so i see his code getting booted unless someone who does steps up to maintain it.
the systemd guys did not get it either, which is why kdbus went nowhere, after getting a lot of push back from lots of kernel maintainers.
a significant subset of the rust in the kernel developers don't seem to get it either, harming their case and making things harder for their codevelopers.
this problem is not confined to the kernel. middleware developers like systemd, gtk, wayland, and others seem to forget that it is not just their pet project, and that in the case of middleware they not only have the same problems as the kernel, with new developers having to play nice with their rules, but, sitting in the middle of the stack with other communities involved, they also need to play nice with those below them, and not cause too many problems for those above them.
-
@alexisdumas84 i am not suggesting that every rust dev wants the maintainers to do everything, only that those who don't are conspicuous by the absence of their dissenting opinions, or are failing to see how their additional semantic requirements to get the type system to work cause a semantic mismatch between what information is needed to do the work, and when.
for c, it comes when the patch is finished and you try and upstream it, at which time any such problems result in considerable rework to get from working code to compatible code. this is why the real time patch set took nearly 20 years to get fully integrated into the mainline. for rust, all this work seems to need to be done upfront to get the type system to work in the first place. this is a major mismatch, and the language is too new and unstable for the true costs of this to be well known and understood.
rust might indeed be as great as the early adopters think, with minimal costs for doing everything through the type system as some suggest, but there is an element of jumping the gun in the claims due to how new the language is. python 3 did not become good enough for a lot of people until the .4 release, and for others until the .6 release.
as you maintain your out of tree rust kernel, with any c patches needed to make it work, have fun. just make sure that when it comes time to upstream it, the maintainers are able to turn on a fresh install of whatever distro they use, do the equivalent of apt-get install kernel-tools, and then just build the kernel with your patches applied. it is not there yet, and thus some code will stay in your out of tree branch until it is.
-
it was not a decision to make it into a second class citizen, just the recognition that when you add an additional language to a large project written in a different language, it will remain on the periphery for a long time, and those pushing the new language will have to maintain the compatibility layer.
this minority use case, together with the work to maintain the bindings, is by its nature what makes it a second class citizen.
rust has an additional problem in requiring additional constraints when implementing those bindings. the c code has the attitude of "don't break existing code", but the rust people want an additional level of semantic information on top of that to constrain future code, which is work not needed by any other language bindings that i have seen, yet they want the overworked c maintainers to do all of this additional work just for rust.
as the maintainers are overworked, and this extra semantic information is not needed for any other language binding, they are quite reasonably saying to the rust advocates that if their language needs extra information not needed by any other language, they should use the code to go and find it and make their best guess, but should not expect the resulting conclusions, which they encode into their bindings, to constrain those not working in their language.
to the extent this extra information is only required by rust, and does not constrain the underlying c code, it raises the question of how bad a match rust is for working with large existing c code bases.
so far, i have yet to see a single rust advocate even acknowledge the validity of that question, let alone address it in any way, instead just trying to get the c maintainers to do all of the extra work in perpetuity.