Comments by "" (@grokitall) on "Brodie Robertson"
channel.
@warpedgeoid black box statistical ai has the issue that while it might give you the same results, you have a lot of trouble knowing how it got them.
this is because it does not model the problem space, so it is inherently about plausible results, not correct results.
there is an example from early usage where they took photos of a forest, later took photos of the same forest with tanks in it, and trained the system on the two sets. it split them perfectly. then they took some more photos with tanks, and it failed miserably. it turned out it had learned to tell the difference between photos taken on a cloudy day and photos taken on a sunny day.
while this story is old, the point still applies. the nature of this sort of ai is inherently black box, so by definition you don't know how it gets its results, which makes all such systems unsuitable for man-rated and safety critical systems.
symbolic ai like expert systems, on the other hand, has a fully auditable model of the problem space as part of how it works. this makes it just as checkable as any other software where you can access the source code. this is referred to as white box ai, as you can actually look inside and determine not just that it produces the right result, but why and how it does so.
this sort of system should be compatible with aviation standards.
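as a rough sketch of what "white box" means here (everything below is hypothetical, not taken from any real expert system shell): a rule engine keeps its knowledge as explicit, inspectable data, and can log exactly which rule fired on which fact, so every verdict comes with its own audit trail.

```c
#include <stdio.h>
#include <string.h>

/* minimal white-box rule engine sketch: the rules are plain data you
   can read and review, and the engine prints an audit line for every
   rule it evaluates. all facts and rules here are made up. */

struct fact { const char *name; int value; };

struct rule {
    const char *id;         /* audit label */
    const char *if_fact;    /* condition: this fact ... */
    int         equals;     /* ... must equal this value */
    const char *then_fact;  /* conclusion to assert */
};

static int get_fact(const struct fact *facts, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp(facts[i].name, name) == 0)
            return facts[i].value;
    return 0;
}

int main(void)
{
    const struct fact facts[] = {
        { "engine_temp_high", 1 },
        { "oil_pressure_low", 0 },
    };
    const struct rule rules[] = {
        { "R1", "engine_temp_high", 1, "reduce_power" },
        { "R2", "oil_pressure_low", 1, "shut_down"   },
    };

    /* single forward-chaining pass: fire every rule whose condition
       holds, recording why each conclusion was (or was not) reached. */
    for (int i = 0; i < 2; i++) {
        if (get_fact(facts, 2, rules[i].if_fact) == rules[i].equals)
            printf("audit: rule %s fired -> %s\n",
                   rules[i].id, rules[i].then_fact);
        else
            printf("audit: rule %s did not fire\n", rules[i].id);
    }
    return 0;
}
```

the point is not the toy logic but the audit line: a certification reviewer can trace every output back to a named rule, which is exactly what a trained statistical model cannot offer.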
@davidpaulos2943 when not working in oo languages, adding an extra field to an object can change the signature of the functions that handle it, and a number of other refactorings can also affect the api. this is not a problem as long as it is fixed everywhere inside the private api.
because these things are in the private api, the stability constraints on the public api are not enforced, though honouring them is still good practice. the code behind the public api still behaves the same, as the ci tests confirm, as long as all the call sites are fixed at the same time.
this is something that is not generally understood by oo practitioners, but it is implicit in both opdyke's phd thesis and fowler's book when they talk about non oo languages.
this is why the distinction between the public and private apis matters. in oo, information hiding is done at the level of the object and the class, so the public/private split is much less important. in non oo code, the hiding occurs at that api boundary, and api-changing refactorings which would be hidden in oo only happen behind the boundary, or when doing a major semantic version bump on the public api.
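a minimal c sketch of that split, with made-up names (widget_area and area_internal are purely illustrative, not from any real project): the internal helper's signature changes when the struct grows a field, while the exported function keeps its contract.

```c
#include <stdio.h>

/* public api (the shipped header would declare only this, and it
   stays stable):  int widget_area(int w, int h);  */

/* private api, internal to the project: this helper originally took
   (int w, int h). adding a border field to the internal struct changed
   its signature, which is fine because every caller lives inside the
   project and was fixed in the same commit. */

struct widget {          /* internal type, never exposed in the header */
    int w, h;
    int border;          /* newly added field */
};

static int area_internal(const struct widget *wg)  /* was: (int w, int h) */
{
    return (wg->w + 2 * wg->border) * (wg->h + 2 * wg->border);
}

/* the exported function's signature and behaviour are unchanged,
   which is exactly what the public api's ci tests pin down. */
int widget_area(int w, int h)
{
    struct widget wg = { w, h, 0 };
    return area_internal(&wg);
}

int main(void)
{
    printf("%d\n", widget_area(3, 4));  /* still 12, as before */
    return 0;
}
```

external bindings that reached past widget_area to call area_internal directly would break here, and that is the maintainers' point: nothing behind the boundary ever promised stability.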
the rust advocates ignore this difference, and various community mouthpieces make statements to the effect that if they want to use something in the bindings, it should act like a public api, or that the c maintainers should be forced to stop work for a month to learn enough rust to effectively take over as maintainers of the rust bindings.
this just is not realistic in any large project, especially when the language is not oo. requiring the c side to collect a lot of extra data (liveness information among other things) which is not used in c is also silly.
for both these reasons, the overworked and understaffed maintainers have the attitude that if the rust people want to write bindings for what, in a project this size, is a minority use case, fine, but don't expect us not to break the private api, or to learn a new language to fix your bindings. and if you need extra info that other languages don't need, well, here is the source code, go find it yourself.
as rust attaches itself to more of the periphery of the kernel, this tension between the two communities will only grow, and the c maintainers will have to push back harder. when coming to play in someone else's sandbox, first you need to learn to play by the rules, then you can try to improve them, and this gets more important as the project gets bigger.
the author of bcachefs proved he cannot do this regarding the stable development cycle rules, and plenty of others are proving the same thing about other conventions in kernel development.
@MarkusEicher70 people have different needs, which leads to different choices. red hat built its business on the basis of "always open, and base yourself on us". later the accountants started to complain, and instead of reducing developer headcount through natural churn, they decided to go on a money hunt, closing source access to a lot of the people who had believed them, thus causing the current problems.
rocky, alma, parts of suse, oracle linux and clear linux exist to provide support to the people left high and dry after red hat decided not to support the needs of those customers. as red hat is an enterprise platform, the support needs can run up to 10 years if a problem lands at the right part of the cycle.
third party software is often only tested against red hat, so you either have to pay them the money and sign up to their dodgy eula, or use one of the derivatives.
the open source mentality views access restrictions as damage and looks for ways around them.
moving to other, non-derived distributions comes with added costs: not all the choices are the same, and the software you need might not be tested against those choices, so you have to do a lot of testing to make sure it works, then either fix it if you can get the source, or find alternatives.
this adds costs, hence people getting annoyed.
legacy code is a nightmare.
snowflake systems nobody wants to touch have exactly one advantage: they have not failed YET! but they will, as the environment around them changes.
so for these systems the question is "do we start trying to fix them now, or wait until they break and then have to fix them on an emergency priority basis?" now is almost always the better answer.
for unsupported legacy binary projects, you have only 2 choices: reverse engineer them to get code you can compile, or implement a replacement with a big bang switchover. neither is a good option, so it is better to avoid acquiring these systems in the first place.
for legacy code where you have buildable source code, i can recommend michael feathers' book "working effectively with legacy code". to summarise, it says to get all new code under continuous integration and continuous delivery, and to gradually migrate the rest to use them, making sure each change is covered by tests (see the sketch below).
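one concrete technique from that book is the characterisation test: before changing anything, write tests that pin down what the code does right now, not what the spec says it should do. a minimal sketch in c, with a made-up legacy function standing in for the real thing:

```c
#include <assert.h>
#include <stdio.h>

/* hypothetical legacy function we don't dare rewrite yet:
   nobody remembers why it rounds the way it does. */
static int legacy_discount(int price)
{
    if (price > 100)
        return price - price / 10;   /* historical 10% rule */
    return price;
}

int main(void)
{
    /* characterisation tests: they assert the CURRENT behaviour,
       warts and all. once these run green in ci, refactoring can
       start, because any behaviour change trips them immediately. */
    assert(legacy_discount(50)  == 50);
    assert(legacy_discount(100) == 100);   /* boundary: no discount */
    assert(legacy_discount(110) == 99);    /* 110 - 11 */
    assert(legacy_discount(101) == 91);    /* 101 - 10, integer division */
    puts("characterisation tests pass");
    return 0;
}
```

note the last case pins down the integer-division quirk on purpose: if that behaviour is a bug, you fix it later as a deliberate, visible change, not as an accident of refactoring.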
this requires trunk based development, and everyone being committed to not breaking the tests.
your only other question is whether you are prepared to accept new legacy code.