Comments by "" (@grokitall) on "Brodie Robertson"
channel.
-
The fundamental issue here is that you have an important maintainer (linus) slightly advocating for rust for linux, and blocking c only code which breaks the rust build, and a less important maintainer (christoph) basically saying he wants to burn that whole project to the ground.
This basic incompatibility affects the survival of not just the rust for linux project, but also those c coders who, for whatever reason, compile the kernel with the rust flag set to off.
Calling for a definitive answer as to what the truth is has long been overdue, and the failure of linus and greg to address that issue will continue to cause problems until it is resolved.
As was said in the thread, it is a basic survival issue for the rust for linux project, and for those getting their c code bounced for rust incompatibility.
Stating that, in the absence of such an answer, the only solution is to submit directly to linus and get the answer that way is basically just stating the truth about how to deal with a maintainer who is obstructing code for personal reasons.
Given that the maintainer has publicly stated he would like to burn the entire project to the ground, while breaking code in that project is already getting c code patches bounced, referring this to the code of conduct people seems fairly obvious. This level of hostility towards other contributors, the late nacking for non-technical reasons, and so on seem like exactly the sort of thing they should have an opinion on if they are to be of any relevance.
While there are valid issues about the inclusion of rust code, that is not what is happening here. It is not about the quality of rust code in the kernel, but the existence of such code, which by now seems to have at least implicit support from linus. The technical question of not duplicating the wrapper in every driver is basic programming, and avoiding that kind of duplication has been accepted practice for well over a decade.
Having such code exist was responded to by christoph basically telling the contributor to go pound sand, rather than giving a constructive suggestion as to an alternative location which would be acceptable.
Almost nobody came out of this looking good.
The maintainer got away with being toxic about a decision which in theory at least seems to have already been made by linus.
The code of conduct guys got away with ignoring, at the very least, a request as to whether the behaviour of the maintainer was in scope for them to consider.
Linus and greg got away with refusing to address the core question of what is the status of rust code in the kernel. Either it is a first class citizen, and christoph should not be blocking it, or it is not, and linus should not be blocking pure c code for breaking it. You can't have it both ways.
-
one of the main reasons for the dislike of systemd is the mentality of the developers, starting with lennart and continuing through his fanboys. first, he identifies potentially valid problems. he did this with avahi, with pulseaudio and then with systemd.
then he gets the bit he is working on to be "dev complete", where it works on his machine (in a way that is deliberately incompatible with the alternatives), and doesn't care if it works on anyone else's machine, relying on his fanboys to get it to work on anything else while he completely loses interest in it.
this bit is then excused as being optional, and thus it doesn't matter that it is incompatible, until one of the higher levels of the stack has a hard dependency on it, and suddenly the alternatives are squeezed out of the mainstream ecosystem, making it harder to remove it when it needs replacing.
also, when they do something that causes problems for some other project, that is apparently fine, because they won't fix it, as it is their code.
these and other behaviours are just some of the reasons that people don't trust the systemd developers, and the fact that redhat has made no effort to tone down these bad behaviours is why people don't trust redhat about systemd.
-
This is the actual reason for the regulation.
Unlike the us, the EU recognises that climate change is a thing, and that most power is currently generated from fossil fuels. The best and cheapest fix is to not waste the power in the first place.
Using a TV as an example, cheap tvs used to just turn off the tube when you put them on standby, whereas expensive ones did it properly, leaving just the remote control unit powered so it could turn the TV back on. The difference in power usage could sometimes be as high as 80% of the peak usage when watching the TV, which is a lot of wasted power you have to generate.
The same types of mistake were made with multiple generations of devices, including satellite TV boxes, fridges, home automation tech, etc, and to fix this they made this series of regulations, which basically say that devices should not waste power when it is not needed.
The issue with the kde and gnome suspend flag seems to come from conflating 2 different use cases under the same flag.
The first case is the one relating to power usage and sleep, hibernate and power off. The default should be to reduce power usage when it is not needed, but the flag is currently just used to turn autosuspend on and off.
The second use case is where, no matter what you are doing, you need to force power off due to running low on battery power. This applies both to laptops and to any desktop or server running on a decent ups, and gradually degrading functionality can extend the time before a forced shutdown becomes necessary. An example would be disabling windows' whole-drive indexing while on battery power, thus extending battery life.
This second use case should have the default be forced shutdown for laptops and for desktops and servers on battery power, and is irrelevant to both on mains power.
By conflating the 2 different use cases, you just end up with the currently broken understanding of what the flag should do, and the related arguments about the defaults.
-
@kuhluhOG yes lifetimes exist in every language, but rust seems to need them to be known at api definition time for every variable mentioned in those apis, unlike just about every other language. when you put this constraint on another language just to get your type system to work, it does not come for free.
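to make that concrete, here is a minimal rust sketch (a toy buffer type of my own, nothing to do with actual kernel code) of what "lifetimes at api definition time" means in practice: the borrow relationships have to be written into the struct and function signatures themselves, rather than being worked out later at the call site.

```rust
// toy example: the api itself must state how long the borrow lives.
struct Buffer<'a> {
    data: &'a [u8], // declared in the type, not discovered at the call site
}

// the signature promises the returned slice lives no longer than the buffer.
fn first_chunk<'a>(buf: &'a Buffer<'a>, len: usize) -> &'a [u8] {
    &buf.data[..len.min(buf.data.len())]
}

fn main() {
    let bytes = vec![1u8, 2, 3, 4];
    let buf = Buffer { data: &bytes };
    println!("{:?}", first_chunk(&buf, 2)); // prints [1, 2]
}
```

in c the same information exists, but it lives in the programmer's head or in prose comments, which is exactly the gap being argued about.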
i do not doubt that it is possible to write kernel code in rust, nor do the c maintainers, in fact some people are writing a kernel in rust, and good luck to them.
people are also writing code for the kernel, but when you can't even get a stable compiler to work and need to download the nightly builds just to get the code to compile (according to one of the compiler devs at a rust conference), and you need an unspecified amount of extra work to even be able to start working on the unstable internal apis, there naturally arises the question of how much work is involved and who has to do it.
as to the problem with the drm subsystem, i've not seen the thread, so i don't know if it was done as a discussion around "this variable is used this way here and an incompatible way there", or if they just went and fixed the more problematic usage to work in the way that made it easier for the rust developers, and then did a big code dump with little prior discussion.
if it is the second case, it is the same issue of not playing well with others except on your own terms, and the resulting problems are deserved.
if it is the first, then the issue should be raised on the usual channels, and initial patches proposed, with ongoing work to figure out if those patches are the best fix to the identified problems, just like with any other c developer.
i just don't have enough context to determine if it was the rust devs, the c maintainers, or both talking past each other, and thus do not have a right to an opinion on the details i do not know.
-
no, the major question is whether they can do anything about it without shooting themselves in the head. all a backup checker needs to do is checksum every file and export that audit report to a non-rhel machine. this is a good idea for security auditing anyway, so banning it would be very difficult. so would banning any non-rhel machine from the network.
once you have this audit database, you can do the same for any other distribution and instantly flag up any differences, which from a technical viewpoint is no different from comparing today's and yesterday's reports, so they cannot easily ban that either.
once you have the list of files which differ, you can decide if it matters, and just tweak your patch list until you get the same checksums. as this bit is not being done on rhel, they have no say in the matter.
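as a rough sketch of how little is involved (the path, report name, and the sha2 and walkdir crates here are just illustrative assumptions, nothing rhel specific), the audit step is basically this:

```rust
// walk a tree, checksum every file, and write a report you can ship
// to a non-rhel machine and diff against yesterday's report.
use sha2::{Digest, Sha256};
use std::{fs, io::Write};
use walkdir::WalkDir;

fn main() -> std::io::Result<()> {
    let mut report = fs::File::create("audit-report.txt")?;
    for entry in WalkDir::new("/etc").into_iter().filter_map(Result::ok) {
        if entry.file_type().is_file() {
            if let Ok(bytes) = fs::read(entry.path()) {
                let hex: String = Sha256::digest(&bytes)
                    .iter()
                    .map(|b| format!("{:02x}", b))
                    .collect();
                writeln!(report, "{}  {}", hex, entry.path().display())?;
            }
        }
    }
    Ok(())
}
```

comparing two such reports is just a diff, which is the point: there is nothing in it a vendor can realistically forbid without also forbidding normal security auditing.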
-
An example of the difference would be to keep the screen turned on when playing a movie, but turning it off if doing computation or drive indexing without the need for a display. This lets the monitor go into standby, and also stops the graphics card from burning energy it does not need to use. Screensavers used to be about avoiding screen burn on tube based displays, but are now either about inactivity wallpaper or power saving depending on your option settings.
Similarly, cpus have different power levels, depending on what level of usage is needed. You can step those levels down if you are barely doing anything, stepping things back up if you start a cpu intensive task.
This means you could walk away from the computer when it starts a task which is a cpu hog but does not need the screen; after a wait, it blanks the screen and lets the monitor go into standby, and when the task finishes, it can move the processor into a lower power mode. When you come back and move the mouse, it turns the screen back on, waking the monitor, so you can check whether the task has finished. You could even have it power off automatically when it is done.
-
there are some minor misunderstandings of some things in the threads which need to be made a little clearer, so here goes.
to produce an operating system from scratch, you need to either 1) write everything yourself, or 2) go online and download a bunch of stuff from people you don't know and hope you can trust the code.
1 does not work very well, apple and microsoft did not do it, neither did google or steam. it is slow and expensive. look at how long it took reactos to get to the point where people who were not kernel developers could work on it. (not criticising them, this stuff is hard).
this only leaves you with the second option, which you solve by going to conferences and establishing a network of trust through key signing parties.
as this requires the person to show the other person id, it is moderately secure against everyone but state actors who can just issue an id, and id thieves.
all the network of trust does is produce a lot of people who can assert that the person you just met has been verified to them as being the person they say they are (for a certain level of verified).
these people then commit code to version control, and once you get to centralised and distributed version control, you also have the person signing that they produced the work. this means if it later turns out that they were a problem, you can easily go back, and track what they touched and audit it if needed.
it does not stop a bad actor like the xz maintainer, you need other processes for that.
this gets you to the point where you can confirm the code you got was the same as the code they distributed (at least if it does cryptographic hashing like git), and the network of trust identifies all of the contributors.
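as a small aside on what "cryptographic hashing like git" actually buys you (this sketch assumes the sha1 crate; newer git repositories can also use sha-256), the id of a blob is derived purely from its bytes, so anyone can recompute it and spot tampering:

```rust
use sha1::{Digest, Sha1};

// git names a blob by hashing a small header plus the file contents.
fn git_blob_id(contents: &[u8]) -> String {
    let mut hasher = Sha1::new();
    hasher.update(format!("blob {}\0", contents.len()).as_bytes());
    hasher.update(contents);
    hasher
        .finalize()
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect()
}

fn main() {
    // matches what `git hash-object` reports for a file with these contents.
    println!("{}", git_blob_id(b"hello world\n"));
}
```

signed tags and signed commits then sign those ids, which is what links the network of trust to the actual bytes in the repository.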
then you need to link the code together with the libraries it depends on. the original paper that started the nix package manager, which led to nixos, described its purpose as declaratively managing the exact version dependencies, so that you could be confident that what you used to build it last time is the same as what you used to build it this time, effectively semantically versioning the build dependencies. it appears that the people behind nixos have extended this a bit, but the principle remains the same: if the dependencies change, then the key for the dependent packages will also change. guix did not like the nomenclature, and thus decided to declare it using scheme, but otherwise they do the same thing.
this gets you to the point where you can compile stuff and be confident where all the code came from, as you have a complete audit trail.
reproducible builds go one step further, validating that the stuff you then compile will always produce the same pattern of bits in storage. this is non-trivial for various reasons, some mentioned by others here, and many more besides. declarative dependency management systems might also give you reproducible builds, but it is not what they were designed for.
then you take the output of the reproducible build, put it in a package, and sign it. this gets you to the point where you, as the person installing it, can be confident that the binary packages you just installed are exactly the same as what the original upstream contributors intended, with a few tweaks from your distribution maintainers to make it all work better together.
and you can audit this all the way back to the original contributor to the upstream project if needed.
none of this says anything about the quality of the code, or about the character of the contributors; you need other steps for that.
as the sysadmin for your business, you can go one step further, and create versioned ansible install scripts to do infrastructure as code, but it does not add to the model, as your ansible scripts are just another repository you use.
i hope this clarifies things a bit.
-
@fuseteam you seem very set on the idea that every provider downstream of redhat is just a rebrand, which just is not true.
there were whole classes of people who were only using redhat and their derivatives because redhat, as part of their marketing, said that if you need enterprise timescales, then use us as your stable base and do respins and derivatives based on us. that is what centos was. people are annoyed because redhat promised 10 years of support for centos 8, then ended it after only 1 year, while people were still migrating to it. even worse, they gave almost no warning.
as to the derivatives, each exists for a specific reason, and supports customers redhat no longer wishes to support.
clear linux is for an all intel hardware stack.
rocky linux is for centos users where the move to rhel is not an option.
scientific linux was a centos derivative with extra software which was needed mainly in places like fermilab and cern.
oracle linux needed specific optimisations which made running their databases better.
others were used for embedded systems and infrastructure, or for alternative architectures.
pretty much all of these use cases were at one time actively supported by redhat or centos, and are now prohibited under their dodgy eula.
even the case where the city of Munich needed to create a respin specifically for their 15000 seat council rollout to include extra software only they needed is now banned.
redhat used an opencore approach in order to grow, and a use us as upstream approach to enter markets that were not otherwise open to them. it had the added benefit of not fragmenting the enterprise linux market much. unfortunately for them, not everyone can suddenly switch to paying them lots of money on short notice, and even more cannot afford the rat on your boss tactic made disreputable by microsoft and their enforcement arm the business software alliance.
when you run a business, you make a profit, and then decide how much of it to invest in research and innovation. the current management at redhat seems to think that it should work the other way around, where they decide what needs doing and how fast, and then tries to force people who never needed to pay with their blessing to make up the shortfall.
the current fracturing of the enterprise market is a direct consequence of this attitude, as is the percentage of redhat customers looking for ways not to be held hostage by the next silly move they make.
these people who have forked rhel had every right to do so, as redhat had encouraged them to do it. lots of them do testing for scenarios redhat does not support, and then push those changes both to stream and to the primary upstream developers, so that they do not have to keep large patchsets supported out of tree.
these patches and the extra bug finding are then made available to rhel either from upstream directly, through fedora, through centos, or directly as patches to redhat.
this is fundamentally how open source works: someone finds a problem, develops a fix, and sends it upstream, and then the downstream users get to use it without necessarily having a need for support. when support is needed, they find a company which is responsive to their support needs, which redhat increasingly is not.
redhat has now become just another entitled proprietary software company who happens to use lots of open source software to try and keep the costs down, while the management has forgotten this fact and decided to stop playing well with others. this has already come back to bite them, and will continue to do so.
-
@travisSimon365 hurd would not have won if history was different. i was around at the time, and everyone was looking to work from bsd code, but the at&t vs berkeley case had a chilling effect. the other possibility was that lots of people wanted to extend minix to be more useful, but andrew tanenbaum not only wanted to keep it simple as a teaching tool, he also refused to allow others to maintain a set of patches in the same way as was done for the ncsa server, which was how we got apache, literally "a patchy server" due to the sequence of patches on patches on patches which were maintained at the time.
also remember that it was not until someone wanted to support a third architecture that the default changed from forking the codebase and then getting it to work on the new architecture, to instead bringing the code needed for each architecture into the mainline kernel, managed with configuration flags.
so you had 386bsd being slowed down by at&t, minix expansion being actively opposed by tanenbaum, gnu's kernel being delayed by indecisiveness over how it should work, and multiple commercial unixes just being too expensive for students.
then along comes linus, who like a lot of students wanted a unix workalike, and who happened to be going to university in a country with a funding model that did not exist anywhere else. he even used the monolithic design, which he thought was worse, for speed.
it was not that linus was cleverer, with some grand plan, just that everyone else could not stop shooting themselves in the foot.
also, your alternatives at the time were either a primitive version of dos, or cp/m.
-
@jamesross3939 by its nature, diverse software stacks have some level of source incompatibility. just look at the problems in making the same program work across multiple versions of windows.
as regards multiple distributions, we don't live in a world where everyone has identical needs so naturally at some point you start to get divergence. this even applies with windows where you have different releases for servers, desktops, oems, embedded, and others.
these divergences naturally make it so that you cannot guarantee that a given program will work the same way, or at all on the different platforms, and the only way to deal with that is lots of testing.
true binary compatibility requires a level of control by the vendor which results in large groups of people being ignored (causing divergence through making new systems which address their needs), or severe levels of bloat (to accommodate needs most users do not have). often it does both.
in particular, you would need every variant to use exactly the same versions of every library on exactly the same compiler, all the way down. good luck getting blind people to move to wayland which currently has no support for them.
the best mitigation we have at the moment is flatpaks, which package non-interacting code with the needed library versions to produce cross-distribution packages of user space applications.
most distributions get created because their users have a need not covered by the mainstream ones, and a lot of them are extremely niche. their use case often will never become part of mainstream distributions, and their effect on the ecosystem as a whole is negligible.
for others, the minority use case gradually becomes more important, getting mainstream adoption as the work by these niche distributions becomes available in software used outside those distributions, and the niche distribution either becomes irrelevant or remains as a testbed which feeds back into the wider ecosystem. this is what happened with the real time linux distributions, and as more of their work made it upstream, fewer of their users needed the full real time systems.
-
@darukutsu the zfs story is simple. the people writing it for linux are doing a from-scratch reimplementation, so from that point of view the oracle licence only matters if those people have seen the original source code.
where the issue comes in for the kernel is that oracle has a history of suing people over look and feel and over work-alikes, so the kernel people will not include it without a statement from oracle that they won't sue.
for reiserfs, the issue is that it is version 3 in the kernel, it is barely used, and it has some fundamental bugs around the 2038 date problem. as most of the developers moved on to other things, and the remaining ones have moved on to version 4, which has not been upstreamed, and version 5, which is a work in progress, the bugs in version 3 will not be fixed, leaving removal as the only choice.
as for kent and bcachefs, he does not play well with others, so he needs someone else handling upstreaming the code.
-
@qwesx sorry, but that is just wrong.
the licence gives you the right to modify, distribute, and run the code.
what compiling locally does is move the legal liability from the distribution to the individual user, who is usually not worth going after.
as regards testing in court, the options are relatively few, and apply the same no matter what the license is.
if you claim not to agree to the license, then the code defaults back to proprietary, and you just admitted in court to using proprietary code without a license.
if the licences are incompatible, your only choice is to get one side or the other to relicense their code under a compatible license for your usage, which is usually somewhere between unlikely and nearly impossible with most projects, meaning that again you do not have a valid license for the derivative work, and you just admitted to it in court.
with zfs, the problem is even worse, as you have oracle who sued google for having a compatible api, which was then resolved to be fair use, but only after costing millions to defend, and taking many years.
because of this the linux kernel community will not take code providing any oracle apis without a signed statement from oracle that they will not sue, not because they do not think they will win, but because they cannot afford the problems that will occur if they do sue.
individual distributions shipping zfs would face the same potential consequences, which is why most do not ship it.
this leaves you back at moving the liability from the kernel, to the distribution, to the end user, where the benefits to suing most of them are just not worth it.
as to trying it in court, there are lots of licenses, and lots of people either being too silly to check the licenses properly, or trying clever things to skirt the edges of legality because they think they have found a loophole.
there are also lots of places to sue, and as floss is a worldwide effort, you have to consider all of them at once, which is why it is a really bad idea to try and write your own license.
in america, people have tried the trick of not accepting the license, and have failed every time. the same is true in germany under a mix of european and german law.
this covers the two biggest markets, and can thus be considered settled. what happens in every case is that the license forms a contract for you to use the otherwise proprietary code under more liberal terms, and when you reject it, it reverts back to the proprietary case, where you then have to prove that you are not using the software without a license.
trying to be clever has also been tried, and while the law is less settled than for rejecting the license, you need every judge in every venue to agree with your interpretation of the license, which normally does not happen, so you are back at being in breach of the license, and hoping to get a friendly judge who does not look to give punitive damages for trying to be too clever. the insurance risk usually is not worth it.
the only other option is to try and comply with the license, but when you have multiple incompatible licenses this is not a valid option.
-
@marcogenovesi8570 if you mean the rust developers expecting the maintainers to go and hunt up a lot of extra semantic information not needed in c just to satisfy rust's expensive type system, and calling it documentation, that is one aspect of it. when you choose to work in a different language, which has tighter requirements, you make building the language api bindings harder. that is fine, but then you have to be prepared to do the work to find that extra information, and only after you think you have got it right do you get to call a request for confirmation "documentation".
this happened with the ssl project in debian, where the person who initially wrote the code was not the person who provided the clarification, resulting in a major security hole. the patch developers did the work and asked whether it was case a or case b, and got the wrong answer back, because the answer is not always obvious.
this is why the c maintainers push back at the claims that it is just documenting the api and that it is cheap, when it is neither.
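as a hand-wavy illustration of the kind of extra semantics being argued about (the function and types below are made up, not real kernel apis): a c lookup function returning a pointer says nothing in its declaration about who owns the result, how long it stays valid, or whether it can be null, while a rust binding has to commit to one answer in the signature itself.

```rust
// hypothetical c declaration being wrapped:
//
//     struct foo *foo_lookup(struct bus *bus, int id);
//
// stand-ins for the c-side types; purely illustrative.
struct Bus;
struct Foo;

// option a: the caller only borrows the result, and it must not outlive the bus.
fn foo_lookup<'bus>(_bus: &'bus Bus, _id: i32) -> Option<&'bus Foo> {
    None // placeholder; a real binding would call into the c side
}

// option b (a different commitment): the caller takes ownership and must free it.
// fn foo_lookup(bus: &Bus, id: i32) -> Option<Box<Foo>> { ... }

fn main() {
    let bus = Bus;
    assert!(foo_lookup(&bus, 42).is_none());
}
```

picking between those shapes is the "go and hunt up the extra semantic information" work, and getting it wrong is an api bug, not a documentation bug.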
like with kent, and some of the systemd developers, the issue is not the language the code is being developed in, but the semantic mismatch between the information needed by the existing work and the potential ambiguities that arise when people want to use the existing apis in a different way from how they are currently being used. that might require disambiguation, which might require digging around in the code base and mailing lists to see if a close enough use case came up in potentially thousands of posts in the discussion to clarify the new semantics for the previously unconsidered use case.
the time for them to do this is at merge time, if there is an issue, not all upfront because "it is just documentation".
the general principle in most code bases is that if you want to create new code, go for it, but when you want to merge it with the existing mainline code base, do it in small separate chunks and be prepared to do the extra work not just to get it working, but to move it to a shape that is compatible with the main code base, and if it is more than a drive-by bug fix, expect to stick around and be prepared to do a lot of the maintenance yourself. this goes double if you are working in a different language than the maintainers.
otherwise, it eventually gets treated as unmaintained code, and deprecated prior to removal.
again, it comes down to respecting the existing development process, being willing to work within it, and if you need minor changes to that process to make the work easier for both sides, working within the existing change process to gradually move the standard way of doing things in the desired direction, while bearing in mind that in the kernel there are 15000 other people whose direction does not necessarily match yours.
kent does not get this, so i see his code getting booted unless someone who does steps up to maintain it.
the systemd guys did not get it either, which is why kdbus went nowhere, after getting a lot of push back from lots of kernel maintainers.
a significant subset of the rust in the kernel developers don't seem to get it either, harming their case and making things harder for their codevelopers.
this problem is not confined to the kernel. middleware developers like systemd, gtk, wayland, and others seem to forget that it is not just their pet project, and that in the case of middleware they not only have the same problems as the kernel, with new developers having to play nice with their rules, but, having other communities on both sides of them, they also need to play nice with those below them and not cause too many problems for those above them in the stack.
-
@alexisdumas84 i am not suggesting that every rust dev wants the maintainers to do everything, only that those who don't are conspicuous in their absence when dissenting opinions are needed, or are failing to see how the additional semantic requirements needed to get the type system to work cause a mismatch between what information is needed to do the work, and when.
for c, it comes when the patch is finished and you try and upstream it, at which time any such problems result in considerable rework to get from working code to compatible code. this is why the real time patch set took nearly 20 years to get fully integrated into the mainline. for rust, all this work seems to need to be done upfront to get the type system to work in the first place. this is a major mismatch, and the language is too new and unstable for the true costs of this to be well known and understood.
rust might indeed be as great as the early adopters think, with minimal costs for doing everything through the type system as some suggest, but there is an element of jumping the gun in the claims due to how new the language is. python 3 did not become good enough for a lot of people until the .4 release, and for others until the .6 release.
as you maintain your out-of-tree rust kernel, with any c patches needed to make it work, have fun. just make sure that when it comes time to upstream, the maintainers are able to turn on a fresh install of whatever distro they use, do the equivalent of apt get install kernel tools, and then just build the kernel with your patches applied. it is not there yet, and thus some code will stay in your out-of-tree branch until it is.
-
@warpedgeoid black box statistical ai has the issue that while it might give you the same results, you have a lot of trouble knowing how it got those results.
this is due to the fact that it does not model the problem space, so is inherently about plausible results, not correct results.
there is an example from early usage where they took photos of a forest, and later took photos of the same forest with tanks in it, and trained the system. it perfectly managed to split the two sets. then they took some more photos with tanks, and it failed miserably. it turned out it had learned to tell the difference between photos taken on a cloudy day, and photos taken on a sunny day.
while this story is old, the point still applies. the nature of this sort of ai is inherently black box, so you by definition don't know how it gets its results, which makes all such systems unsuitable for man-rated and safety-critical systems.
symbolic ai like expert systems on the other hand have a fully auditable model of the problem space as part of how they work. this makes them just as checkable as any other software where you can access the source code. this is referred to as white box ai, as you can actually look inside and determine not just that it produces the right result, but why and how it does it.
this sort of system should be compatible with aviation standards.
-
@MarkusEicher70 people have different needs, which leads to different choices. red hat built its business on the basis of always open, and base yourself on us. later the accountants started to complain, and instead of reducing the developer headcount through natural churn, they decided to go on a money hunt, closing source access to a lot of people who had believed them, thus causing the current problems.
rocky, alma, parts of suse, oracle linux and clear linux exist to provide support to people left high and dry after red hat decided not to support the needs of those customers. as red hat is an enterprise platform, the support needs can be up to 10 years if you get a problem at the right part of the cycle.
third party software is often only tested against red hat, so you either have to pay them the money and sign up to their dodgy eula, or use one of the derivatives.
the open source mentality views access restrictions as damage and looks for ways around it.
moving to other non derived distributions comes with added costs, as not all the choices are the same and the software you need might not be tested against those choices, so you have to do a lot of testing to make sure it works, then either fix it if you can get the source, or find alternatives.
this adds costs, hence people getting annoyed.