Comments by "" (@grokitall) on "Brodie Robertson" channel.

  1. The fundamental issue here is that you have an important maintainer (Linus) mildly advocating for Rust for Linux, and blocking C-only code which breaks the Rust build, and a less important maintainer (Christoph) basically saying he wants to burn that whole project to the ground. This basic incompatibility affects the survival not just of the Rust for Linux project, but also of those C coders who, for their own reasons, compile the kernel with the Rust flag set to off. Calling for a definitive answer to what the truth is is a long overdue request, and the failure of Linus and Greg to address that issue will continue to cause problems until it is resolved. As was said in the thread, it is a basic survival issue for the Rust for Linux project, and for those getting their C code bounced for Rust incompatibility. Stating that, in the absence of such an answer, the only option left is to submit directly to Linus and get the answer that way is simply stating the truth about how to deal with a maintainer who is obstructing code for personal reasons.
Given that the maintainer has publicly stated he would like to burn the entire project to the ground, when breaking code in the project is already getting C patches bounced, it strikes me that referring this to the code of conduct people seems fairly obvious: this level of hostility towards other contributors, the late NAKing for non-technical reasons, and so on seem like exactly the sort of thing they should have an opinion on if they are to be of any relevance. While there are valid issues around the inclusion of Rust code, that is not what is happening here. It is not about the quality of Rust code in the kernel, but the existence of such code, which by now seems to have at least implicit support from Linus. The technical question of not having the wrapper duplicated in every driver is basic programming, and the undesirability of such duplication has been accepted practice for well over a decade. Christoph's response to the existence of that code was basically to tell the contributor to go pound sand, rather than to give a constructive suggestion of an alternative location that would be acceptable.
Almost nobody came out of this looking good. The maintainer got away with being toxic about a decision which, in theory at least, seems to have already been made by Linus. The code of conduct people got away with ignoring at the very least a request as to whether the maintainer's behaviour was in scope for them to consider. Linus and Greg got away with refusing to address the core question of what the status of Rust code in the kernel actually is. Either it is a first-class citizen, and Christoph should not be blocking it, or it is not, and Linus should not be blocking pure C code for breaking it. You can't have it both ways.
    69
  2. 53
  3. 16
  4. 14
  5. 12
  6. 11
  7. 10
  8. 10
  9. 10
  10. 10
  11. 9
  12. 5
  13. 4
  14. 4
  15. 4
  16. 4
  17. 4
  18. 4
  19. 3
  20. This is the actual reason for the regulation. Unlike the US, the EU recognises that climate change is a thing, and that most power is currently generated from fossil fuels. The best and cheapest response is to not waste the power in the first place. Using a TV as an example, cheap TVs used to just turn off the tube when you put them on standby, whereas expensive ones did it properly, leaving only the remote control receiver powered so it could turn the TV back on. The difference in power usage could sometimes be as high as 80% of the peak usage of the TV in use, which is a lot of wasted power you still have to generate. The same kind of mistake was made across multiple generations of devices, including satellite TV boxes, fridges, home automation gear, etc., so they made this series of regulations basically saying that when you don't need to be drawing power, you should not be.
The issue with the KDE and GNOME suspend flag seems to come from conflating two different use cases under the same flag (see the sketch after this comment). The first case relates to power usage and sleep, hibernate, and power off: the default should be to reduce power usage when it is not needed, but the flag is currently used to turn autosuspend on and off. The second use case is where, no matter what you are doing, you need to force power off because you are running low on battery; this applies both to laptops and to any desktop or server running on a decent UPS, and gradually degrading functionality can extend the time until a forced shutdown is needed. An example would be disabling Windows' whole-drive indexing on battery power, thus extending battery life. For this second case the default should be forced shutdown for laptops and for desktops and servers on battery power, and it is irrelevant to both on mains power. By conflating the two use cases, you end up with the currently broken understanding of what the flag should do, and the related arguments about the defaults.
    3
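A minimal sketch of what keeping the two cases separate might look like, written in C rather than any real desktop configuration format; every name here (power_policy, idle_suspend_enabled, and so on) is invented for illustration and does not correspond to actual KDE or GNOME settings:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical power-policy settings, split so the two use cases no
 * longer share one flag. All names are illustrative only. */
struct power_policy {
    bool idle_suspend_enabled;     /* case 1: save power when idle (preference) */
    int  critical_battery_percent; /* case 2: forced shutdown threshold (survival) */
    bool on_battery;               /* current power source */
};

/* Decide what to do, keeping the two concerns independent. */
static const char *next_action(const struct power_policy *p, int battery, bool idle)
{
    if (p->on_battery && battery <= p->critical_battery_percent)
        return "forced shutdown";          /* survival, not a preference */
    if (idle && p->idle_suspend_enabled)
        return "suspend";                  /* power saving, user preference */
    return "stay awake";
}

int main(void)
{
    struct power_policy laptop = { true, 5, true };
    printf("%s\n", next_action(&laptop, 4, false));  /* forced shutdown */
    printf("%s\n", next_action(&laptop, 50, true));  /* suspend */
    return 0;
}
```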
  21. 3
  22.  @kirglow4639  So you are saying the bcachefs maintainer did not do a large dump of new code extremely late in the release cycle, causing him to get called out by Linus, and that lots of Rust people came out against him and in support of Linus? Or what about the Rust in Linux maintainer who quit because, while the real work was great, the political garbage was not worth putting up with, and the Rust community was not helping fight back against it? What about all the people in the threads around multiple videos here claiming that, despite not being Rust programmers, the C kernel maintainers should just go into the Rust bindings and fix them, just like they do with the C code they actually write? Not to mention all the people who say gathering the lifetime information is so trivial that it is not worth doing themselves, because it is "only documentation", even though that information is not needed in C and takes quite a bit of time to collect. Or how about the vitriol poured on Ted Ts'o for pointing out that at kernel scale the Rust code is a minority use case, that most of the work will continue to be done on the C code, which might break the Rust code because it will not be part of the stable APIs, and that the Rust coders will need to fix it because most of the C maintainers don't write Rust? All of these and more crop up repeatedly in these threads, but not a single voice from the Rust side ever speaks up to say that the C maintainers might have valid points, or that the hardcore Rust advocates might be being less than fair to them. This is not setting up a straw man, just observing that the Rust community in these threads, when it sees a problem with adding third-party Rust code to the C kernel, seems to expect the C maintainers to do all of the work, even when they don't know Rust and the things being asked for do not matter in C, and then seems surprised when the C maintainers quite reasonably say that they don't write Rust, and that what is being asked for is a non-trivial amount of work.
    3
  23.  @kirglow4639  I won't ask you to cover every point, but if you know of any comment by any Rust contributor who thinks that the bcachefs developer doing a huge dump of new code right at the end of the merge window is anything but perfectly fine, I would be pleased to hear about it. As to lifetimes, documentation, and API changes, the problem is that the Rust community is approaching them from only a Rust perspective. Lifetimes are only of interest to Rust programmers and compiler writers, so they are not really a C thing. The documentation being asked for is largely about how the Rust semantics would interact with the C code, so again there is a fairly significant mismatch. API changes in C are generally looked down on, and the level of resistance is proportional to the number of callers in the surrounding code: the more callers, the less the desire to change the code. The general method for doing API changes in C is first to point out, convincingly, what the problems in the existing code are and why; then to come up with a better API that the existing API can be reimplemented as a thin wrapper over, so that existing code still works and does not need changing all at once; and then for the new API to be enough of an improvement that other users find it preferable and adopt it (see the sketch after this comment). From the comments I have seen, this does not appear to be how the Rust developers try to get improved APIs. Note: I am not saying that the APIs could not be better, or that the documentation could not be improved, or that extra tests could not be written to make clearer how the existing APIs are supposed to work, only that from what I have seen the Rust developers do not seem to be engaging with the C developers in the way the C conventions usually have such interactions work, and if they did, it might go a lot better. After all, one of the purposes of testing is to document and clarify how the interface actually works.
    3
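A toy C sketch of that wrapping approach, with invented function names rather than anything from a real kernel API: the old entry point keeps working as a thin wrapper over the improved one, so the many existing callers can migrate at their own pace.

```c
#include <stddef.h>
#include <string.h>

/* New, improved API: the caller supplies the buffer size, so an overrun
 * can be reported instead of silently happening. */
int copy_name_bounded(char *dst, size_t dst_len, const char *src)
{
    size_t n = strlen(src);
    if (n + 1 > dst_len)
        return -1;              /* would not fit */
    memcpy(dst, src, n + 1);
    return 0;
}

/* Old API, kept as a thin wrapper so the existing callers keep compiling
 * and behaving as before while they migrate one by one. */
void copy_name(char *dst, const char *src)
{
    /* Legacy behaviour assumed a "big enough" buffer; preserve it by
     * passing an effectively unlimited size. */
    copy_name_bounded(dst, (size_t)-1, src);
}
```

The same shape works at larger scale: the legacy signature stays until the last caller has moved over, and only then does the wrapper get removed.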
  24.  @kuhluhOG  Yes, lifetimes exist in every language, but Rust seems to need them to be known at API definition time for every variable mentioned in those APIs, unlike just about every other language (see the sketch after this comment). When you put this constraint on another language just to get your type system to work, it does not come for free. I do not doubt that it is possible to write kernel code in Rust, nor do the C maintainers; in fact some people are writing a kernel in Rust, and good luck to them. People are also writing code for the kernel, but when you can't even get a stable compiler to work and need to download the nightly builds to get the code to compile at all (according to one of the compiler devs at a Rust conference), and you need an unspecified amount of extra work before you can even start working against the unstable internal APIs, there naturally arises the question of how much work is involved and who has to do it. As to the problem with the DRM subsystem, I've not seen the thread, so I don't know whether it was handled as a discussion of "this variable is used this way here and an incompatible way there", or whether they just went and fixed whichever usage was more problematic for them, in the way that made it easier for the Rust developers, and then did a big code dump with little prior discussion. If it is the second case, it is the same issue of not playing well with others except on your own terms, and the resulting problems are deserved. If it is the first, then the issue should be raised through the usual channels and initial patches proposed, with ongoing work to figure out whether those patches are the best fix for the identified problems, just like with any other C developer. I just don't have enough context to determine whether it was the Rust devs, the C maintainers, or both talking past each other, and thus do not have a right to an opinion on details I do not know.
    3
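A made-up C fragment (not a real kernel interface) showing the kind of ownership and lifetime questions a C signature never answers. A C caller resolves them by reading the code and the mailing list; a Rust binding has to pin them down as lifetimes and ownership at the moment the binding is written.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical interface, invented for illustration. Nothing in the
 * prototypes says who owns the returned pointer, how long the name
 * stays valid, or whether it may be used after device_put(). */
struct device { char name[32]; int refs; };

struct device *device_get(int id)
{
    struct device *d = calloc(1, sizeof(*d));
    if (!d)
        return NULL;
    snprintf(d->name, sizeof(d->name), "dev%d", id);
    d->refs = 1;
    return d;                       /* caller owns one reference */
}

const char *device_name(struct device *d)
{
    return d->name;                 /* only valid while a reference is held */
}

void device_put(struct device *d)
{
    if (--d->refs == 0)
        free(d);
}

int main(void)
{
    struct device *d = device_get(3);
    const char *n = device_name(d);
    printf("%s\n", n);              /* fine: reference still held */
    device_put(d);
    /* printf("%s\n", n); */        /* use-after-free: legal C to write, and
                                     * exactly the rule a Rust binding has to
                                     * encode as a lifetime up front */
    return 0;
}
```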
  25. 3
  26. 2
  27. I think this whole thread misses a basic point about developing large systems. In such systems you want a stable public API and a flexible private API (see the sketch after this comment). This is further constrained by the in-tree code, so that if you break it, you find out and fix it. With out-of-tree code you have exactly the same problems you get with any long-lived unmerged branch: the ground underneath it changes over time, and like the large snowflake block everyone is afraid to touch, it eventually gets broken because it is effectively being run as a long-lived fork of the main code base. The only solution is to work hard to move more of the code into the mainline branch, to get all of the benefits and reduce the costs of maintaining it out of tree. This out-of-tree cost is further aggravated for Rust, in that the bindings for the unstable private APIs require an additional layer of future-proofing semantics which the core C code does not need yet. For C (and most other languages) the question "did that change break anything within the existing code", determined at commit time, is good enough, and the extra semantics the Rust people want are only added as they are discovered, by breaking code with the next change. The Linux kernel usually does an exceptional job of providing a stable public API, but what the Rust people are basically asking for is to make the private API a stable public API as well. Quite understandably the maintainers are saying that it does not make sense to do so, but feel free to do the work yourself, and don't be surprised if it breaks sometimes. To the maintainers, every piece of code not written in the core language of the project is effectively acting as out-of-tree code, so if you want to play in their sandbox, you get to enjoy the additional work that goes with it.
    2
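A small C illustration of that split, with invented names; the point is only where the stability promise sits, not any real kernel interface.

```c
#include <string.h>

/* Private helper: its signature changed recently (it used to take only
 * buf). All in-tree callers were fixed in the same patch series, so the
 * change cost nothing visible. Out-of-tree code calling it directly
 * simply stops compiling and has to catch up on its own. */
static int widget_fill(char *buf, unsigned long len, int flags)
{
    (void)flags;                    /* new parameter, default behaviour */
    memset(buf, 0, len);
    return (int)len;
}

/* Public entry point: this signature is the contract and does not
 * change, no matter how the helper behind it is reshaped. */
long widget_read(char *buf, unsigned long len)
{
    return widget_fill(buf, len, 0);
}
```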
  28. 2
  29. 2
  30. 2
  31. 2
  32. 2
  33. 2
  34. 2
  35. 2
  36. 2
  37. 2
  38. 2
  39. 2
  40. 2
  41. 2
  42. 2
  43. 2
  44. 2
  45. 2
  46. 1
  47. 1
  48. 1
  49. 1
  50. 1
  51. 1
  52. There are some minor misunderstandings of a few things in these threads which need to be a little clearer, so here goes. To produce an operating system from scratch, you either 1) write everything yourself, or 2) go online, download a bunch of stuff from people you don't know, and hope you can trust the code. Option 1 does not work very well; Apple and Microsoft did not do it, and neither did Google or Valve. It is slow and expensive. Look at how long it took ReactOS to get to the point where people who were not kernel developers could work on it (not criticising them, this stuff is hard). That only leaves the second option, which you address by going to conferences and establishing a web of trust through key signing parties. As this requires showing the other person ID, it is moderately secure against everyone but state actors, who can just issue an ID, and ID thieves. All the web of trust does is produce a lot of people who can assert that the person you just met has been verified to them as being who they say they are (for a certain level of "verified").
These people then commit code to version control, and once you get to centralised and distributed version control, you also have the person signing that they produced the work. This means that if it later turns out they were a problem, you can easily go back, track what they touched, and audit it if needed. It does not stop a bad actor like the xz maintainer; you need other processes for that. This gets you to the point where you can confirm the code you received is the same as the code they distributed (at least if the system does cryptographic hashing, like git), and the web of trust identifies all of the contributors.
Then you need to link the code together with the libraries it depends on. The original paper that started the Nix package manager, which led to NixOS, described its purpose as declaratively managing exact version dependencies, so that you can be confident that what you used to build something last time is the same as what you use to build it this time, effectively semantically versioning the build dependencies. The people behind NixOS appear to have extended this a bit, but the principle remains the same: if the dependencies change, the key for the dependent packages also changes. Guix did not like the nomenclature and decided to declare things using Scheme instead, but otherwise does the same thing. This gets you to the point where you can compile things and be confident where all the code came from, because you have a complete audit trail.
Reproducible builds go one step further, validating that what you compile will always produce the same pattern of bits in storage. This is non-trivial, for the various reasons mentioned by others and many more besides. Declarative dependency management systems might also give you reproducible builds, but it is not what they were designed for. Then you take the output of the reproducible build, put it in a package, and sign it. This gets you to the point where you, as the person installing it, can be confident that the binary packages you just installed are exactly what the original upstream contributors intended, with a few tweaks from your distribution maintainers to make it all work better together, and you can audit this all the way back to the original contributor to the upstream project if needed. None of this says anything about the quality of the code, or about the character of the contributors; you need other steps for that.
As the sysadmin for your business you can go one step further and create versioned Ansible install scripts to do infrastructure as code, but it does not add to the model, as your Ansible scripts are just another repository you use. I hope this clarifies things a bit.
    1
  53. 1
  54. 1
  55. 1
  56.  @fanshaw  For a lot of the machines which got hit, virtualisation is not an option, and neither is having enough backup machines. How do you have backups for flight status boards 20 feet in the air, or for the tills and card payment systems built into your restaurant? What about a spare hotel booking system for your 200-room five-star hotel? Too many people in these discussions seem to be ignoring some pretty important facts about this outage. Most of the machines which got hit are locked down for a good reason. Most need access to internet services in order to do their job. Lots of them are in places which make it impossible to allow ordinary staff to undo the security in order to just reboot the machine in safe mode and do the simple fix. This is what made the disruption take so long and cost so much: you need specialist workers to physically go to these machines and unlock them to escape the boot loop. And of course you cannot keep enough spare workers on staff to cover a once-in-a-long-while outage like this, because such staff are in short supply and thus cost a lot to employ and retain. Every business that got hit will need to revisit its resiliency plans, looking at what other steps it can take to make this less of a problem next time, which by definition will also include looking at how the outsourced software is delivered. If one of their competitors decides to do it properly, so updates can be tested prior to installation, that competitor will immediately jump up the list of providers. Equally, if a company does what CrowdStrike did and deliberately subverts the testing and resilience plans of its customers, it will rapidly fall near the bottom of the list. A lot of the people who have moved to Linux have decided that it is cheaper to move than to put up with all the nonsense that comes with staying on Windows. I am sure software other than the operating system is equally liable to be vulnerable to the choice to move to an alternative supplier.
    1
  57. 1
  58. 1
  59. 1
  60.  @fuseteam  You seem very set on the idea that every provider downstream of Red Hat is just a rebrand, which just is not true. There were whole classes of people who were only using Red Hat and its derivatives because Red Hat, as part of its marketing, said that if you need enterprise timescales, use us as your stable base and do respins and derivatives based on us. That is what CentOS was. People are annoyed because Red Hat promised 10 years of support for CentOS 8, then ended it after only one year, while people were still migrating to it. Even worse, they gave almost no warning. As to the derivatives, each exists for a specific reason and supports customers Red Hat no longer wishes to support. Clear Linux is for an all-Intel hardware stack. Rocky Linux is for CentOS users for whom the move to RHEL is not an option. Scientific Linux was a CentOS derivative with extra software needed mainly in places like Fermilab and CERN. Oracle Linux needed specific optimisations which made running their databases better. Others were used for embedded systems and infrastructure, or for alternative architectures. Pretty much all of these use cases were at one time actively supported by Red Hat or CentOS, and are now prohibited under their dodgy EULA. Even the case where the city of Munich needed to create a respin specifically for their 15,000-seat council rollout, to include extra software only they needed, is now banned.
Red Hat used an open-core approach in order to grow, and a "use us as upstream" approach to enter markets that were not otherwise open to them. It had the added benefit of not fragmenting the enterprise Linux market much. Unfortunately for them, not everyone can suddenly switch to paying them lots of money on short notice, and even more cannot afford the rat-on-your-boss tactic made disreputable by Microsoft and its enforcement arm, the Business Software Alliance. When you run a business, you make a profit and then decide how much of it to invest in research and innovation. The current management at Red Hat seems to think it should work the other way around: they decide what needs doing and how fast, and then try to force people who, with Red Hat's blessing, never needed to pay, to make up the shortfall. The current fracturing of the enterprise market is a direct consequence of this attitude, as is the percentage of Red Hat customers looking for ways not to be held hostage by the next silly move they make.
The people who have forked RHEL had every right to do so, as Red Hat had encouraged them to do it. Lots of them do testing for scenarios Red Hat does not support, and then push those changes both to Stream and to the primary upstream developers, so that they do not have to keep large patch sets maintained out of tree. These patches and the extra bug finding are then made available to RHEL, either from upstream directly, through Fedora, through CentOS, or directly as patches to Red Hat. This is fundamentally how open source works: someone finds a problem, develops a fix, and sends it upstream, and then downstream users get to use it without necessarily needing support. When support is needed, they find a company that is responsive to their support needs, which Red Hat increasingly is not. Red Hat has now become just another entitled proprietary software company that happens to use lots of open source software to keep its costs down, while the management has forgotten this fact and decided to stop playing well with others.
This has already come back to bite them, and will continue to do so.
    1
  61. 1
  62. 1
  63. 1
  64. 1
  65. 1
  66. 1
  67. 1
  68. 1
  69.  @travisSimon365  Hurd would not have won even if history had been different. I was around at the time, and everyone was looking to work from BSD code, but the AT&T vs Berkeley case had a chilling effect. The other possibility was that lots of people wanted to extend Minix to be more useful, but Andrew Tanenbaum not only wanted to keep it simple as a teaching tool, he also refused to allow others to maintain a set of patches in the way that was done for the NCSA httpd server, which is how we got Apache, literally "a patchy server" thanks to the sequence of patches on patches on patches maintained at the time. Also remember that it was not until someone wanted to support a third architecture that the default changed from forking the codebase and getting it to work on the new architecture, to bringing the code needed for each architecture into the mainline kernel, managed with configuration flags. So you had 386BSD being slowed down by AT&T, Minix expansion being actively opposed by Tanenbaum, GNU's kernel being delayed by indecisiveness over how it should work, and multiple commercial Unixes just being too expensive for students. Then along comes Linus, who like a lot of students wanted a Unix workalike and happened to be going to university in a country with a funding model that did not exist anywhere else. He even used the monolithic design, which he thought was worse, for speed. It was not that Linus was cleverer, with some grand plan, just that everyone else could not stop shooting themselves in the foot. Also, your alternatives at the time were either a primitive version of DOS, or CP/M.
    1
  70. 1
  71. 1
  72. 1
  73. 1
  74. 1
  75.  @jamesross3939  By their nature, diverse software stacks have some level of source incompatibility; just look at the problems in making the same program work across multiple versions of Windows. As regards multiple distributions, we don't live in a world where everyone has identical needs, so naturally at some point you get divergence. This even applies to Windows, where you have different releases for servers, desktops, OEMs, embedded, and others. These divergences naturally mean you cannot guarantee that a given program will work the same way, or at all, on the different platforms, and the only way to deal with that is lots of testing. True binary compatibility requires a level of control by the vendor which results either in large groups of people being ignored (causing divergence as new systems are created to address their needs), or in severe levels of bloat (to accommodate needs most users do not have); often it does both. In particular, you would need every variant to use exactly the same versions of every library on exactly the same compiler, all the way down. Good luck getting blind people to move to Wayland, which currently has no support for them. The best mitigation we have at the moment is Flatpaks, which package non-interacting code with the library versions it needs to produce cross-distribution packages of user-space applications. Most distributions get created because their users have a need not covered by the mainstream ones, and a lot of them are extremely niche. Their use case often will never become part of mainstream distributions, and their effect on the ecosystem as a whole is negligible. For others, the minority use case gradually becomes more important, gaining mainstream adoption as the work done by these niche distributions becomes available in software used outside them, and the niche distribution either becomes irrelevant or remains as a testbed which feeds back into the wider ecosystem. This is what happened with the real-time Linux distributions: as more of their work made it upstream, fewer of their users needed the full real-time systems.
    1
  76. 1
  77. 1
  78. 1
  79.  @qwesx  Sorry, but that is just wrong. The licence gives you the right to modify, distribute, and run the code. What compiling locally does is move the legal liability from the distribution to the individual user, who is usually not worth going after. As regards testing in court, the options are relatively few, and they apply the same no matter what the licence is. If you claim not to have agreed to the licence, then the code defaults back to proprietary, and you have just admitted in court to using proprietary code without a licence. If the licences are incompatible, your only choice is to get one side or the other to relicense their code under a compatible licence for your usage, which is usually somewhere between unlikely and nearly impossible with most projects, meaning that again you do not have a valid licence for the derivative work, and you have just admitted to it in court.
With ZFS the problem is even worse, because you have Oracle, who sued Google for having a compatible API; that case was eventually resolved as fair use, but only after costing millions to defend and taking many years. Because of this, the Linux kernel community will not take code providing any Oracle APIs without a signed statement from Oracle that they will not sue, not because they do not think they would win, but because they cannot afford the problems that would occur if Oracle did sue. Individual distributions shipping ZFS would face the same potential consequences, which is why most do not ship it. This leaves you back at moving the liability from the kernel, to the distribution, to the end user, where the benefit of suing most of them is just not worth it.
As to trying it in court, there are lots of licences, and lots of people either being too silly to check the licences properly or trying clever things to skirt the edges of legality because they think they have found a loophole. There are also lots of places to sue, and as FLOSS is a worldwide effort you have to consider all of them at once, which is why it is a really bad idea to try to write your own licence. In America, people have tried the trick of not accepting the licence and have failed every time. The same is true in Germany under a mix of European and German law. This covers two of the biggest markets and can thus be considered settled. What happens in every case is that the licence forms a contract for you to use the otherwise proprietary code under more liberal terms, and when you reject it, you revert back to the proprietary case, where you then have to prove that you are not using the software without a licence. Trying to be clever has also been tried, and while the law is less settled there than for rejecting the licence, you need every judge in every venue to agree with your interpretation of the licence, which normally does not happen, so you are back to being in breach of the licence and hoping for a friendly judge who does not award punitive damages for trying to be too clever. The insurance risk usually is not worth it. The only other option is to try to comply with the licence, but when you have multiple incompatible licences that is not a valid option.
    1
  80.  @marcogenovesi8570  If you mean the Rust developers expecting the maintainers to go and hunt up a lot of extra semantic information not needed in C just to satisfy Rust's expensive type system, and calling it documentation, that is one aspect of it. When you choose to work in a different language, one with tighter requirements, you make building the language API bindings harder. That is fine, but then you have to be prepared to do the work to find that extra information yourself, and only after you think you have it right do you get to call a request for confirmation "documentation". This happened with the SSL package in Debian, where the person who initially wrote the code was not the person who provided the clarification, resulting in a major security hole; the patch developers did the work and asked "is it case A or case B", and got the wrong answer back, because the answer is not always obvious. This is why the C maintainers push back on the claim that it is just documenting the API and that it is cheap, when it is neither.
As with Kent, and some of the systemd developers, the issue is not the language the code is being developed in, but the semantic mismatch between the information needed by the existing work and the potential ambiguities in how people want to use the existing APIs differently from how they are currently used. That may require disambiguation, which may require digging around in the code base and mailing lists to see whether a close enough use case came up in potentially thousands of posts, to clarify the new semantics for the previously unconsidered case. The time to do this is at merge time, if there is an issue, not all up front because "it is just documentation". The general principle in most code bases is that if you want to create new code, go for it, but when you want to merge it into the existing mainline code base, do it in small separate chunks and be prepared to do the extra work not just to get it working, but to move it into a shape that is compatible with the main code base, and if it is more than a drive-by bug fix, expect to stick around and be prepared to do a lot of the maintenance yourself. This goes double if you are working in a different language from the maintainers. Otherwise, it eventually gets treated as unmaintained code and deprecated prior to removal.
Again, it comes down to respecting the existing development process, being willing to work within it, and, if you need minor changes to that process to make the work easier for both sides, working within the existing change process to gradually move the standard way of doing things in the desired direction, while bearing in mind that in the kernel there are 15,000 other people whose direction does not necessarily match yours. Kent does not get this, so I see his code getting booted unless someone who does steps up to maintain it. The systemd people did not get it either, which is why kdbus went nowhere after getting a lot of pushback from kernel maintainers. A significant subset of the Rust-in-the-kernel developers don't seem to get it either, harming their case and making things harder for their co-developers. This problem is not confined to the kernel.
Middleware developers like systemd, GTK, Wayland, and others seem to forget that it is not just their pet project: in the case of middleware they not only have the same problems as the kernel, with new developers having to play nice with their rules, but, as projects with other communities involved, they also need to play nice with those below them and not cause too many problems for those above them in the stack.
    1
  81.  @alexisdumas84  I am not suggesting that every Rust dev wants the maintainers to do everything, only that those who don't are conspicuous by their absence when dissenting opinions are needed, or are failing to see how their additional semantic requirements to make the type system work cause a mismatch between what information is needed to do the work, and when. For C, that work comes when the patch is finished and you try to upstream it, at which point any such problems result in considerable rework to get from working code to compatible code. This is why the real-time patch set took nearly 20 years to get fully integrated into the mainline. For Rust, all this work seems to need to be done up front to get the type system to work in the first place. This is a major mismatch, and the language is too new and unstable for the true costs of this to be well known and understood. Rust might indeed be as great as the early adopters think, with minimal costs for doing everything through the type system as some suggest, but there is an element of jumping the gun in the claims, due to how new the language is. Python 3 did not become good enough for a lot of people until the .4 release, and for others not until the .6 release. As you maintain your out-of-tree Rust kernel, with any C patches needed to make it work, have fun; just make sure that when it comes time to upstream it, the maintainers can take a fresh install of whatever distro they use, do the equivalent of apt-get installing the kernel tools, and then just build the kernel with your patches applied. It is not there yet, and some code will stay in your out-of-tree branch until it is.
    1
  82. 1
  83. 1
  84. 1
  85. 1
  86. 1
  87. It was not decided to make it a second-class citizen; it was just the recognition that when you add an additional language to a large project written in a different language, it will remain on the periphery for a long time, and those pushing the new language will have to maintain the compatibility layer. Being a minority use case that comes with the work of maintaining the bindings is, by its nature, what makes it a second-class citizen. Rust has an additional problem in requiring additional constraints when implementing those bindings: the C code has the attitude of "don't break existing code", but the Rust people want an additional level of semantic information on top of that to constrain future code, which is work not needed by any other language bindings I have seen, and they want the overworked C maintainers to do all of this additional work just for Rust. As the maintainers are overworked, and this extra semantic information is not needed for any other language binding, they are quite reasonably saying to the Rust advocates: if your language needs extra information not needed by any other language, use the code to go and find it, and make your best guess, but don't expect the conclusions you encode into your bindings to constrain those not working in your language. To the extent that this extra information is only required by Rust, and does not constrain the underlying C code, it raises the question of how bad a match Rust is for working with large existing C code bases. So far I have yet to see a single Rust advocate even acknowledge the validity of that question, let alone address it, instead just trying to get the C maintainers to do all of the extra work in perpetuity.
    1
  88. 1
  89. 1
  90. 1
  91. 1
  92. 1
  93. 1
  94. 1
  95. 1
  96. 1
  97. 1
  98. 1
  99. 1
  100. 1
  101. 1
  102. 1
  103. 1
  104. 1
  105. 1
  106. 1
  107. 1
  108. 1
  109.  @davidpaulos2943  When not working in OO languages, adding an extra field to an object can change the signature of the function header, and a number of other refactorings can also affect the API, which is not a problem as long as it is fixed everywhere inside the private API (see the sketch after this comment). Because these things are in the private API, the stability constraints of the public API are not enforced, though they remain good practice. The code behind the public API still behaves the same, as the CI tests confirm, as long as all the calls are fixed at the same time. This is something that is not generally understood by OO practitioners, but it is implicit in both Opdyke's PhD thesis and Fowler's book when they talk about non-OO languages. This is why the distinction between the public and private APIs matters. In OO, information hiding is done at the level of the object and the class, and the public/private split is much less important. In non-OO code, the hiding occurs at that boundary, and API-changing refactorings which would be hidden in OO only happen behind the boundary, or when doing a major semantic-version update of the public API.
The Rust advocates ignore this difference, and various community mouthpieces make statements to the effect that if they want to use something in the bindings, it should act like a public API, or that the C maintainers should be forced to stop work for a month to learn enough Rust to effectively take over as maintainers of the Rust bindings. This just is not realistic in any large project, especially when the language is not OO. Requiring them to collect a lot of data, including liveness but not limited to it, which is not used in C, is also silly. For both these reasons, the overworked and understaffed maintainers have the attitude that if the Rust people want to write bindings for what is, in a project this size, a minority use case, fine, but don't expect us not to break the private API, or to learn a new language to fix your bindings, and if you need extra information that other languages don't need, well, here is the source code, go find it yourself. As Rust attaches itself to more of the periphery of the kernel, this tension between the two communities will only grow, and the C maintainers will have to push back harder. When coming to play in someone else's sandbox, first you need to learn to play by the rules, and only then can you try to improve them, and this gets more important as the project gets bigger. The author of bcachefs has proved he cannot do this with regard to the stable development cycle rules, and plenty of others are proving the same thing about other conventions in kernel development.
    1
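A small C sketch of that kind of ripple, with made-up names: what would have been a new object field in an OO language shows up as an extra parameter in the non-OO private API, the internal caller is touched, and the public behaviour stays the same.

```c
#include <stdio.h>

/* Private API. It used to be: static int frob(int value);
 * The refactoring added a "scale" argument (in an OO language this would
 * have been a new field hidden behind the class). Every internal caller
 * was updated in the same change, so the tests on the public behaviour
 * still pass. */
static int frob(int value, int scale)
{
    return value * scale;
}

/* Public API: name, signature, and observable behaviour are unchanged,
 * which is what the stability rules actually protect. */
int frob_default(int value)
{
    return frob(value, 1);      /* caller fixed up for the new parameter */
}

int main(void)
{
    printf("%d\n", frob_default(21));   /* still prints 21 */
    return 0;
}
```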
  110. 1
  111. 1
  112. 1
  113. 1
  114. 1
  115. 1
  116. 1
  117. 1
  118. 1
  119. 1
  120. 1
  121. 1
  122. 1
  123. 1