YouTube comments of (@grokitall).

  1. The fundamental issue here is that you have an important maintainer (linus) slightly advocating for rust for linux, and blocking c-only code which breaks the rust build, and a less important maintainer (christoph) basically saying he wants to burn that whole project to the ground. This basic incompatibility affects the basic survival of not just the rust for linux project, but also of those c coders who, for their own reasons, compile the kernel with the rust flag set to off. Calling for a definitive answer as to what the truth is is a long overdue request, and the failure of linus and greg to address that issue will continue to cause problems until it is resolved. As was said in the thread, it is a basic survival issue for the rust for linux project, and for those getting their c code bounced for rust incompatibility. Stating that, in the absence of such an answer from linus, the only solution in the end is to submit directly to linus and get the answer that way is basically just stating the truth about how to deal with a maintainer who is obstructing code for personal reasons.
Given that the maintainer has publicly stated he would like to burn the entire project to the ground, when breaking code in the project is already getting c code patches bounced, it strikes me that referring this to the code of conduct guys seems fairly obvious, as this level of hostility towards other contributors, the late nacking for non technical reasons, and so on seem like the sort of thing they should have an opinion on if they are to be of any relevance. While there are valid issues about the inclusion of rust code, that is not what is happening here. It is not about the quality of rust code in the kernel, but the existence of such code, which by now seems to at least have implicit support from linus. The technical question of not having the wrapper duplicated in every driver is basic programming, and the undesirability of such duplication has been accepted practice for well over a decade. The existence of such code was responded to by christoph basically telling the contributor to go pound sand, rather than giving a constructive suggestion as to an alternative location which would be acceptable.
Almost nobody came out of this looking good. The maintainer got away with being toxic about a decision which, in theory at least, seems to have already been made by linus. The code of conduct guys got away with ignoring at the very least a request as to whether the behaviour of the maintainer was in scope for them to consider. Linus and greg got away with refusing to address the core question of what the status of rust code in the kernel is. Either it is a first class citizen, and christoph should not be blocking it, or it is not, and linus should not be blocking pure c code for breaking it. You can't have it both ways.
  12. Your comment demonstrates some of the reasons people don't get tdd. First, you are equating the module in your code with a unit, then equating the module test suite with the unit test, and then positing that you have to write the entire test suite before you write the code. That just is not how modern testing defines a unit test. An example of a modern unit test would be a simple test of a function that, given the number to enter into a cell, checks whether that number is between 1 and the product of the grid dimensions and returns true or false. Your common sudoku uses a 3 x 3 grid, requiring that the number be less than or equal to 9, so the function would take the grid parameters, cache the product, check the value was between 1 and 9, and return true or false based on the result. This would all be hidden behind an API, and you would test that given a valid number it returns true.
You would then run the test, and prove that it fails. A large number of tests written after the fact can pass not only when you run the test, but also when you either invert the condition or comment out the code which supplies the result. You would then write the first simple code that provides the correct result, run the test, see it pass, and then you have validated your regression test in both the passing and the failing mode, giving you an executable specification of the code covered by that test. You would also have a piece of code which implements that specification, and a documented example of how to call that module and what its parameters are, for use when writing the documentation. Assuming that it was not your first line of code, you would then look to see if the code could be generalised, and if it could you would refactor it, which is now easier to do because it already has regression tests for the implemented code. You would then add another unit test, which might check that the number you want to add isn't already used in a different position, and go through the same routine again, and then another bit of test and another bit of code, all the while growing your test suite until you have covered the whole module.
This is where test first wins, by rapidly producing the test suite, and the code it tests, and making sure that the next change doesn't break something you have already written. This does require you to write the tests first, which some people regard as slowing you down, but if you want to know that your code works before you give it to someone else, you either have to take the risk that it is full of bugs, or you have to write the tests anyway for continuous integration, so doing it first does not actually cost you anything. It does however gain you a lot. First, you know your tests will fail. Second, you know that when the code is right they will pass. Third, you can use your tests as examples when you write your documentation. Fourth, you know that the code you wrote is testable, as you already tested it. Fifth, you can now easily refactor, as the code you wrote is covered by tests. Sixth, it discourages the use of various anti patterns which produce hard to test code. There are other positives, like making debugging fairly easy, but you get my point. As your codebase gets bigger and more complex, or your problem domain gets less well understood initially, the advantages rapidly expand, while the disadvantages largely evaporate. The test suite is needed for ci and refactoring, and the refactoring step is needed to handle technical debt.
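    To make that concrete, here is a minimal sketch in python of the first test and the code that passes it; the function and test names (is_valid_cell_value and friends) are invented for illustration, not taken from any real project.

        import unittest

        def is_valid_cell_value(value, box_rows=3, box_cols=3):
            # the largest legal value is the product of the box dimensions
            # (9 for a standard 3 x 3 sudoku); cache it and range check.
            maximum = box_rows * box_cols
            return 1 <= value <= maximum

        class ValidCellValueTest(unittest.TestCase):
            # executable specification: legal values are 1..9 on a standard grid
            def test_accepts_value_inside_range(self):
                self.assertTrue(is_valid_cell_value(5))

            def test_rejects_value_above_range(self):
                self.assertFalse(is_valid_cell_value(10))

            def test_respects_larger_grids(self):
                # a 4 x 4 box sudoku allows values up to 16
                self.assertTrue(is_valid_cell_value(16, box_rows=4, box_cols=4))

        if __name__ == "__main__":
            unittest.main()

    Inverting the comparison or commenting out the body of is_valid_cell_value should make these tests fail, which is how you validate the regression test in its failing mode as well as its passing mode.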
  34. This is the actual reason for the regulation. Unlike the us, the EU recognises that climate change is a thing, and that most power is currently generated from fossil fuels. The best and cheapest approach is to not waste the power in the first place. Using a TV as an example, cheap tvs used to just turn off the tube when you put them on standby, whereas expensive ones did it properly, leaving just the remote control receiver powered so it could turn the TV back on. The difference in power usage could sometimes be as high as 80% of the peak usage of the TV in use, which is a lot of wasted power you have to generate. The same types of mistake were made with multiple generations of devices, including satellite TV boxes, fridges, home automation tech, etc, and to fix this they made this series of regulations basically saying that you should not waste power when you do not need to.
The issue with the kde and gnome suspend flag seems to come from conflating 2 different use cases under the same flag. The first case is the one relating to power usage and sleep, hibernate and power off. The default should be to reduce power usage when it is not needed, but the flag is currently used to turn autosuspend on and off. The second use case is where, no matter what you are doing, you need to force power off due to running low on battery power. This applies both to laptops and to any desktop or server running on a decent ups, and gradually degrading functionality can extend the time until a forced shutdown is needed. An example would be disabling windows whole drive indexing on battery power, thus extending battery life. This second use case should default to forced shutdown for laptops and for desktops and servers on battery power, and is irrelevant to both on mains power. By conflating the 2 different use cases, you just end up with the currently broken understanding of what the flag should do, and the related arguments about the defaults.
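    As a rough sketch of that distinction (the setting names and the function are invented, not anything kde or gnome actually ship), the two decisions are independent and only collide when they are forced through one flag:

        # hypothetical sketch: two independent settings instead of one overloaded flag
        IDLE_POLICY = "suspend"          # what to do when idle: "suspend" or "stay_on"
        LOW_BATTERY_POLICY = "shutdown"  # what to do when the battery is critical

        def choose_action(idle, battery_critical, on_battery):
            # the emergency case always wins, and only applies on battery power
            if on_battery and battery_critical:
                return LOW_BATTERY_POLICY
            # the power saving case is a sensible default, not an emergency
            if idle:
                return IDLE_POLICY
            return "stay_on"

        print(choose_action(idle=True, battery_critical=False, on_battery=False))  # suspend
        print(choose_action(idle=False, battery_critical=True, on_battery=True))   # shutdown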
  42.  @kuhluhOG  yes, lifetimes exist in every language, but rust seems to need them to be known at api definition time for every variable mentioned in those apis, unlike just about every other language. when you put this constraint on another language just to get your type system to work, it does not come for free. i do not doubt that it is possible to write kernel code in rust, nor do the c maintainers; in fact some people are writing a whole kernel in rust, and good luck to them. people are also writing rust code for the linux kernel, but when you can't even get a stable compiler to work and need to download the nightly builds to get the code to even compile (according to one of the compiler devs at a rust conference), and you need an unspecified amount of extra work to even be able to start doing work on the unstable internal apis, there naturally arises the question of how much work is involved and who has to do it. as to the problem with the drm subsystem, i've not seen the thread, so i don't know if it was done as a discussion around "this variable is used this way here and an incompatible way there", or if they just went and fixed the one that was more problematic for them, in the way that made it easier for the rust developers, and then did a big code dump with little prior discussion. if it is the second case, it is the same issue of not playing well with others except on your own terms, and the resulting problems are deserved. if it is the first, then the issue should be raised on the usual channels, and initial patches proposed, with ongoing work to figure out if those patches are the best fix to the identified problems, just like with any other c developer. i just don't have enough context to determine if it was the rust devs, the c maintainers, or both talking past each other, and thus do not have a right to an opinion on the details i do not know.
  54. there are a number of people in these comments making statements that are wrong in this case. first, he is a us citizen due to being born of 2 us parents. he has the document to prove it, his birth certificate. the problem here is that they are not accepting it because the 77 year old valid document is not a modern canadian birth certificate. then there are all the people saying they had the citizen born abroad paperwork. when he came to the us 76 years ago, it was not required, and definitely not proactively encouraged, so it was not granted, and getting such documents after you leave is a nightmare. people also suggest that they should not legally have cancelled his driving license, but some here have pointed out that some states require real id for driving licenses, and others don't. some suggest he fly to canada to get things resolved, but as has already been pointed out, he can't even fly within the us, and in any case he does not have a passport, which makes it impossible. in the comments on the original news report, it is pointed out that due to backlogs and the same sort of bureaucratic nonsense, the delay can be over a year, even with all the right documents, and he has no guarantee that he won't have the same issues there. the main problem here is that there are 2 types of bureaucrat: one looks for a reason to deny you your entitlements, and having found one either refuses to reconsider, or doubles down on their decision; the other looks hard for any way to help sort out the problem within the rules, often going above and beyond what their job requires. most of the other solutions suggested either require papers he never needed before, or driving out of state to get something done.
  56.  @julianbrown1331  partly it is down to the training data, but the nature of how these systems work does not filter the data by quality either before, during, or after training, so lots of them produce code which is as bad as the average of the training data, most of which is produced by newbies learning either the languages or the tools. also, you misrepresent how copyright law works in practice. when someone claims you are using their code, they only have to show that it is a close match. to avoid summary judgement against you, you have to show that it is a convergent solution arising from the constraints of the problem space, and that there was no opportunity to copy the code. given that there have been studies showing that for edge cases with very few examples these tools have produced identical code snippets right down to the comments in the code, good luck proving there was no chance to copy the code. just saying you got it from microsoft copilot does not relieve you of the responsibility to audit the origins of the code. even worse, microsoft cannot prove it was not copied either, as the nature of statistical ai obfuscates how it got from the source data to the code it gave you. even worse, the training data does not even flag up which license the original code was under, so you could find yourself with gpl code with matching comments, leaving your only choice being to release your proprietary code under the gpl to avoid triple damages and comply with the license. on top of that, the original code is usually not written to be security or testability aware, so it has security holes, is hard to test, and you can't fix it.
  65. Absolutely right. Unit tests do automated regression testing of the public API of your code, asserting io combinations to provide an executable specification of that API. When well named, the value of these tests is as follows: 1, Because they test only one thing, they are generally individually blindingly fast. 2, When named well, they are the equivalent of executable specifications of the API, so when something breaks you know what broke, and what it did wrong. 3, They are designed to black box test a stable public API, even if you just started writing it. Anything that relies on private APIs is not a unit test. 4, They prove that you are actually writing code that can be tested, and when written before the code, they also prove that the test can fail. 5, They give you examples of code use for your documentation. 6, They tell you about changes that break the API before your users have to.
Points 4 and 6 are actually why people like tdd. Point 2 is why people working in large teams like lots of unit tests. Everyone I have encountered who does not like tests thinks they are fragile, hard to maintain, and otherwise a pain, and everyone who was willing to talk to me about why usually turned out to be writing hard to test code, with tests at too high a level, and often had code with one of many bad smells about it. Examples included constantly changing public APIs, overuse of global variables, brain functions, or non deterministic code. The main outputs of unit testing are code that you know is testable, tests that you know can fail, and the knowledge that your API is stable. As a side effect, it pushes you away from coding styles which make testing hard, and discourages constantly changing published public APIs. A good suite of unit tests will let you completely throw away the implementation of the API, while letting your users continue to use it without problems. It will also tell you how much of the reimplemented code has been completed.
A small point about automated regression tests. Like trunk based development, they are a foundational technology for continuous integration, which in turn is foundational to continuous delivery and devops, so not writing regression tests fundamentally limits quality on big, fast moving projects with lots of contributors.
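    A tiny sketch of point 3 in python (word_count is an invented example): the test only exercises the public call, so the implementation behind it can be thrown away and rewritten without the test changing.

        def word_count(text):
            # first implementation: split on whitespace
            return len(text.split())

        # a later rewrite could swap the body for a regex or a streaming parser;
        # as long as the public behaviour holds, this test keeps passing.
        def test_word_count_public_api():
            assert word_count("") == 0
            assert word_count("one") == 1
            assert word_count("two  words") == 2

        test_word_count_public_api()
        print("word_count api tests passed")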
  103. risc came about because we did not know how to write compilers properly. to get around this, some people looked at one of the currently available machines and went through the instruction set, looking at each instruction and checking if it was worth the silicon used to implement it. how they did this was to take the instruction set as an api, and see if they could replace the instruction in the assembler with a macro implementing that instruction using a subset of the other instructions on the processor, and what that did to the speed and space of the final program. if the results turned out to be better, they stopped generating the instruction and used the macro instead, hence reducing the size of the instruction set.
these complex instructions had a number of issues, which give risc architectures an advantage. first, they are no easier to implement in hardware than in software. if you don't believe me, you only have to look at how many intel processor versions you can identify by which instruction on the chip is broken in some way. second, they take up a lot of silicon, which could be used for something better. this has a massive opportunity cost, even before you go multicore. third, they slow down your processor. not only do they typically take multiple clock cycles to run, but also, due to the current emphasis on a global system clock, they slow every instruction down to the clock speed of the slowest part of any instruction. for these and many other reasons, there is great interest in moving from cisc to risc. in fact the advantages are so high that most cisc processors are actually implemented like a virtual machine running on an internal risc core. it also turns out that risc designs can do the same work using less energy, which is extremely important for mobile (where the issue is battery life) and for the data center (where cooling is the issue). luckily, the abusive wintel duopoly doesn't control most of computing any more, only the desktop, so it is not really slowing things down too much.
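    the "worth the silicon" check was basically a cost comparison; a toy sketch in python (every cycle and byte number here is invented, purely to show the shape of the comparison):

        # hypothetical costs for one complex instruction versus a macro built
        # from simpler instructions the processor already has.
        complex_instruction = {"cycles": 12, "code_bytes": 1}
        macro_expansion = [
            {"cycles": 2, "code_bytes": 2},  # load
            {"cycles": 3, "code_bytes": 2},  # shift and add
            {"cycles": 2, "code_bytes": 2},  # store
        ]

        macro_cycles = sum(op["cycles"] for op in macro_expansion)
        macro_bytes = sum(op["code_bytes"] for op in macro_expansion)

        # if the macro is no slower, the complex instruction is not worth its
        # silicon: stop generating it and emit the macro from the assembler.
        if macro_cycles <= complex_instruction["cycles"]:
            print(f"drop it: macro takes {macro_cycles} cycles and {macro_bytes} bytes")
        else:
            print("keep the instruction")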
  108. the issue of what makes code bad is important, and has to do with how much of the complexity of the code is essential vs accidental. obviously some code has more essential complexity than other code, but this is exactly when you need to get a handle on that complexity. we have known since brooks wrote the mythical man month back in the 1970s that information hiding matters, and every new development in coding has reinforced the importance of this, which is why abstraction is important, as it enables this information hiding. oop, functional programming, tdd, and refactoring all build on top of this basic idea of hiding the information, but in different ways, and they all bring something valuable to the table. when you have been in the industry for a short while, you soon encounter a couple of very familiar anti patterns: spaghetti code, the big ball of mud, and worst of all the piece of snowflake code that everyone is afraid to touch because it will break. all of these are obviously bad code, and are full of technical debt, and the way to deal with them is abstraction, refactoring, and thus testing. given your previously stated experience with heavily ui dependent, untestable frameworks, therefore requiring heavy mocking, i can understand your dislike of testing, but that is down to the fact that you are dealing with badly designed legacy code, and fragile mocking is often the only way to start getting a handle on legacy code. i think we can all agree that trying to test legacy code sucks, as it was never designed with testing or lots of other useful things in mind.
lots of the more advanced ideas in programming started indirectly from languages where testing was easier, looked at what made testing harder than it needed to be, and then adopted a solution to that particular part of the problem. right from the start of structured programming, it became clear that naming mattered, and that code reuse makes things easier, first by using subroutines more, then by giving them names, and letting them accept and return parameters. you often ended up with a lot of new named predicates, which were used throughout the program. these were easy to test, and moving them into well named functions made the code more readable. later this code could be extracted out into libraries for reuse across multiple programs. this led directly to the ideas of functional programming and of extending the core language to also contain domain specific language code. later, the realisation that adding an extra field broke apis a lot led to the idea of structs, where there is a primary key field and multiple additional fields. when passed to functions, adding a new field made no difference to the api, which made them really popular. often these functions were so simple that they could be fully tested, and because they were moved to external libraries, those tests could be kept and reused. this eventually led to opdyke and others finding ways to handle technical debt which should not break good tests, which came to be known as refactoring. when a test breaks under refactoring, it usually means one of 2 things: 1, you were testing how it did it, breaking information hiding. 2, your tool's refactoring implementation is broken, as a refactoring by definition does not change the functional nature of the code, and thus does not break the test.
when oop came along, instead of working from the program structure end of the problem, it worked on the data structure side, specifically by taking the structs, adding in the struct specific functions, and calling them classes and method calls. again, when done right, this should not break the tests. with the rise of big code bases, and recognition of the importance of handling technical debt, we end up with continuous integration handling the large number of tests and yelling at us when doing something over here broke something over there. ci is just running all of the tests after you make a change, to demonstrate that you did not break any of the code under test when you made a seemingly unrelated change. tdd just adds an extra refactoring step to the code and test cycle, to handle technical debt and make sure your tests deal with what is being tested, rather than how it works. cd just goes one step further and adds acceptance testing on top of the functional testing from ci, to make sure that your code not only still does what it did before, but has not made any of the non functional requirements worse. testing has changed a lot since the introduction of ci, and code developed test first is much harder to write in a way that contains the prominent anti patterns.
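    the struct and named predicate points can be made concrete with a small python sketch (the record and the predicate are invented examples): adding a field to the record changes no function signature, and pulling the condition out into a named predicate makes it readable and trivially testable.

        from dataclasses import dataclass

        @dataclass
        class Order:
            order_id: int
            total: float
            # adding another field later changes nothing in the functions
            # below, because they take the whole record.
            currency: str = "EUR"

        def is_free_shipping(order):
            # a small named predicate: easy to read at the call site, easy to test
            return order.total >= 50.0

        def shipping_cost(order):
            return 0.0 if is_free_shipping(order) else 4.95

        assert is_free_shipping(Order(1, 60.0))
        assert shipping_cost(Order(2, 10.0)) == 4.95
        print("predicate and struct sketch ok")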
  122. ci came from the realisation that the original paper from the 70s on the waterfall development model said that, while common, it was fundamentally broken, and agile realised that to fix it you had to move things that appear late in the process to an earlier point, hence the meme about shift left. the first big change was to implement continuous backups, now referred to as version control. another big change was to move tests earlier, and ci takes this to the extreme by making them the first thing you do after a commit. these two things together mean that your fast unit tests find bugs very quickly, and the version control lets you figure out where you broke it. this promotes the use of small changes to minimise the differences in patches, and results in your builds being green most of the time. long lived feature branches subvert this process, especially when you have several of them and they go a long time between merges to the mainline (which you say you rebase from). specifically, you create a pattern of megamerges, which get bigger the longer the delay. also, when you rebase, you are only merging the completed features into your branch, while leaving all the stuff in the other megamerges in their own branches. this means that when you finally do your megamerge, while you probably don't break mainline, you have the potential to seriously break any and all other branches when they rebase, causing each of them to have to dive into your megamerge to find out what broke them. as a matter of practice it has been observed time and again that to avoid this you cannot delay merging any branch for much longer than a day, as it gives the other branches time to break something else, resulting in the continual red build problem.
  133.  @alst4817  my point about black box ai is not that it cannot be useful, but that due to the black box nature it is hard to have confidence that the answer is right, or that it is anything more than coincidence, and the most you can get from it is a plausibility value for the answer. this is fine in some domains where that is good enough, but completely rules it out for others where the answer needs to be right and the reasoning chain needs to be available. i am also not against the use of statistical methods in the right place. probabilistic expert systems have a long history, as do fuzzy logic expert systems. my issue is with the way these systems are actually implemented.
the first problem is that lots of them work in a generative manner. using the yast config tool of suse linux as an example, it is a very good tool, but only for the parameters it understands. at one point in time, if you made any change using this tool, it regenerated every file it knew about from its internal database, so if you needed to set any unmanaged parameters in any of those files, you then could not use yast at all, or your manual change would disappear. this has the additional disadvantage that those managed config files are no longer the source of truth, which is instead hidden in yast's internal binary database. it also means that using version control on any of those files is pointless, as the source of truth is hidden and they are now generated files. as the system is managed by the options in those config files, they should be in text format and version controlled, and any tool that manipulates them should update only the fields it understands, and only in files which have changed parameters. similarly, these systems are not modular, instead being implemented as one big monolithic black box which cannot be easily updated.
this project is being discussed in a way that suggests that they will just throw lots of data at it and see what sticks. this approach is inherently limited. when you train something like chatgpt, where you do not organise the data, and let it figure out which of the 84000 free variables it is going to use to hallucinate a plausible answer, you are throwing away most of the value in that data, which never makes it into the system. you then have examples like copilot, where having trained on crap code, it on average outputs crap code. some of the copilot like coding assistants are actually worse, replacing the entire code block with a completely different one rather than just fixing the bug, making a mockery of version control, and a lot of the time this code then does not even pass the tests the previous code passed. then we have the semantic mismatch between the two languages. in any two languages, natural or synthetic, there is not an identity of function between them. some things can't be done at all in one language, and some stuff which is simple in one language can be really hard in another one. only symbolic ai has the rich model needed to understand this. my scepticism about this is well earned, with lots of ai being ever optimistic to begin with, and then plateauing with no idea what to do next. i expect this to be no different, with it being the wrong problem, with a bad design, badly implemented. i wish them luck, but am not optimistic about their chances.
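    the friendlier behaviour for a config tool is easy to sketch (the file format and the managed key are invented for illustration): update only the parameters the tool manages, leave every other line alone, and the text file stays the source of truth and stays useful under version control.

        # update only the keys the tool manages, pass everything else through,
        # so hand written lines and comments survive a regeneration.
        MANAGED = {"max_connections": "200"}

        def update_config(lines, managed):
            seen = set()
            out = []
            for line in lines:
                key = line.split("=", 1)[0].strip() if "=" in line else None
                if key in managed:
                    out.append(f"{key} = {managed[key]}")
                    seen.add(key)
                else:
                    out.append(line)  # unmanaged lines are left untouched
            out += [f"{k} = {v}" for k, v in managed.items() if k not in seen]
            return out

        original = ["# hand written comment", "max_connections = 100", "custom_tweak = on"]
        print("\n".join(update_config(original, MANAGED)))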
  135. the data and power scaling issues are a real feature of the large language statistical ai models which are currently hallucinating very well to give us better bad guesses at things. unfortunately for the guy who wrote the paper, sabine is right, and the current best models have only gotten better by scaling by orders of magnitude. that is fundamentally limited, and his idea of using a perpetual motion system of robots, created from resources mined by robots, using the improved ai from these end product robots, can't fix it. to get around this you need symbolic ai like expert systems, where the rules are known and tie back to the specific training data that generated them. then you need every new level of output to work by generating new data, with emphasis on how to recognise garbage and feed that back to improve the models. you just can't do that with statistical ai, as its models are not about being correct, only plausible, and they only work in fields where it does not matter that you cannot tell which 20%+ of the output is garbage. the cyc project started generating the rules needed to read the internet and have common sense about 40 years ago. after about a decade, they realised their size estimates for the rule set were off by 3 or 4 orders of magnitude. 30 years after that, it has finally got to the point where it can read all the information that isn't on the page to understand the text, and still it needs tens of humans working to clarify what it does not understand about specific fields of knowledge, and tens more figuring out how to go from getting the right answer to getting it fast enough to be useful.
to get to agi or ultra intelligent machines, we need multiple breakthroughs. trying to predict the timing of breakthroughs has always been a fool's game, and there are only a few general rules about futurology: 1, prediction is difficult, especially when it concerns the future. 2, you cannot predict the timings of technological breakthroughs. the best you can do in hindsight is to say this revolution was waiting to happen from the point when these core technologies were good enough; it does not tell you when the person with the right need, knowledge and resources will come along. 3, we are totally crap at predicting the social consequences of disruptive changes. people predicted the rise of the car, but no one predicted the near total elimination of all the industries around horses in only 20 years. 4, you cannot predict technology accurately further ahead than about 50 years, due to the extra knowledge needed to extend the prediction being the same knowledge you need to do it faster. you also cannot know what you do not know that you do not know. 5, a knowledgeable scientist saying something is possible is more likely to be right than a similar scientist saying it is impossible, as the latter does not look beyond the assumptions which led to their initial conclusion. that does not stop there from being some form of hidden limit you don't know about, like the speed of light or the second law of thermodynamics.
  157. the comments contain a bunch of common fallacies. energy too cheap to meter has been promised ever since we started building national energy grids, always from the latest new and shiny energy source, and it has never arrived. the next new and shiny energy source (not just fusion) is always 30 years away, because that is how long it takes to go from proof of concept to commercial product, but only if you actually bother to properly fund the development stages to get there and get old regulations for other industries out of the way. we do not do this, so progress is slow. nobody has bothered to mention this, but it is always cheaper to not waste a kilowatt than it is to generate it. we have known this to be true since at least the 1980s, but i don't know of any country which has even seriously discussed putting rules in place to regulate the removal of the least efficient devices, let alone done it.
energy storage is great, but again nobody sets the regulations up in such a way as to make the market make sense. you need the right rules and feed in tariffs to make it work. if you cannot plug a battery into the grid when there is a surplus, charge it up, and release it when there is a shortage, and still make financial sense, the rules are wrong. also, funding for storage research has basically been on life support for decades. renewable energy is usually far cheaper than megagenerators, simply because with distributed generation you get many more chances to reduce the cost of the equipment, but in the uk the national grid has been optimised for a small number of large generating sites in the middle of nowhere, making combined heat and power not an option. redeveloping the grid to rectify this takes a long time and the will to do it.
small modular fission reactors make sense, but again regulation based on large reactors gets in the way. also in this context molten salt reactors have good potential, but again old regulations for large reactors mean all of the development work is being pushed to less regulated countries. another unmentioned technology is radiothermal generators, where you basically put safely stored nuclear waste in a container, and use the same techniques used for harvesting solar to harvest the energy from the radiation. again this could be done on sites suitable for modular reactors, and the same security concerns would apply, but neither the research nor the regulation is being properly addressed. fusion has a lot of potential, and purely from a cost benefit point of view it makes sense to continue the research, but again we have a silly policy on funding and regulation. as for the others, like wave, tidal, ocean thermal energy conversion and space solar satellites, again the funding and regulatory challenges are not being addressed. for fossil fuels, the research and regulations on carbon capture and storage are not being done properly either. this all feeds into the failure to address the issue of global warming, with the cheaper option of prevention being ignored, leaving the more expensive option of dealing with the consequences as the only one seriously on the table.
  163.  @silberwolfSR71  i wouldn't disagree in general, but it tends to coexist with lots of other red flags. for example, there are a large number of "qualified" programmers out there who not only cannot use any vcs, but also could not do fizzbuzz without an ide, and have no idea if the build system generated by the ide can be used without the ide. i would suggest that a good programmer should be able to write a small program with only a text editor, create the automated build file, store it in a version control system, and then have the interviewer be able to check it out and build it on another machine. if the program is something small and simple like fizzbuzz, this should only take about 15 minutes prior to the interview, and for most interviews you are waiting longer than that. think about the Israeli passenger security vetting system as a comparison. anybody wanting to go airside at any airport goes through security vetting. each level is designed to either determine that you are not a risk, or move you up another level for stricter screening. by the time you are stopped at the airport for questioning you have already raised yourself to about level 15, and they are primarily asking to clear up points they could not dismiss without your active involvement. if you pass, you can then get on the plane. i had to help fill a post, and we got over 100 applicants, with full training given. most were completely unsuitable, but the top few still needed filtering, and things like attitude and prior experience go into that filtering. as theo said, if you get to that point and it is between an industry veteran with experience and a new college grad with no knowledge of version control, you can guess who is higher up the list.
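    for scale, the program in that exercise really is this small (a python sketch of fizzbuzz; the build file and the commit to version control are the other two steps being asked for):

        def fizzbuzz(n):
            # classic screening exercise: multiples of 3 print Fizz, of 5 print Buzz,
            # of both print FizzBuzz, everything else prints the number.
            for i in range(1, n + 1):
                word = ("Fizz" * (i % 3 == 0)) + ("Buzz" * (i % 5 == 0))
                print(word or i)

        if __name__ == "__main__":
            fizzbuzz(15)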
  172. ada was commissioned because lots of government projects were being written in niche or domain specific languages, resulting in lots of mission critical software which was in effect write only code, but still had to be maintained for decades. the idea was to produce one language which all the code could be written in, killing the maintainability problem, and it worked. unfortunately, exactly the things which made it work for the government kept it from more widespread adoption. first, it had to cover everything from embedded to ai, and literally everything else. this required the same functions to be implemented in multiple ways, as something that works on a huge and powerful ai workstation with few time constraints needs to be different from a similar function in an embedded, resource limited and time critical setting. this makes the language huge, inconsistent, and unfocused. it also made it a pain to implement the compiler, as you could not release it until absolutely everything had been finalised, and your only customers were government contractors, meaning the only way to recover costs was to sell it at a very high price, and due to the compiler size, it would only run on the most capable machines. and yes, it had to be designed by committee, due to the kitchen sink design requirement. the different use cases needed to fulfil its design goal of being good enough for coding all projects required experts on the requirements for all the different problem types, stating that x needs this function to be implemented like this, but y needs it implemented like that, and the two use cases are incompatible for these reasons. rather than structuring the language definition so you could code a compiler for ada embedded, and a different one for ada ai, they put it all in one badly written document which really did not distinguish the use case specific elements, making it hard to compile, hard to learn, and just generally a pain to work with. it was not written with the needs of compiler writers in mind either. also, because of the scope of the multiple language encodings in the language design, it took way too long to define, and due to the above mentioned problems, even longer to implement. other, simpler languages had already come along in the interim and taken over a lot of the markets the language was meant to cover, making it an also ran for those areas outside of mandated government work.
  175.  @chudchadanstud  like ci, unit testing is simple in principle. when i first started doing it i did not have access to a framework, and every test was basically a stand alone program with a main and a call, which then just returned a pass or a fail, stored in either a unittest or an integrationtest directory, with a meaningful name so that when it failed the name told me how it failed, all run from a makefile. each test was a functional test, and was run against the public api. i even got the return values i did not know by first making the test always fail and print the result, and then verifying that it matched the result next time. when a new library was being created because the code would be useful in other projects, it then had the public api in the header file, and all of the tests were compiled and linked against the library, and all had to pass for the library to be used. all of this was done with nothing more than a text editor, a compiler, and the make program. this was even before version control took off. version control and a framework can help, but the important part is to test first, then add code to pass the test, and then if something breaks, fix it before you do anything else. remember, you are calling your public api, and checking that it returns the same thing it did last time you passed it the same parameters. you are testing what it does, not how fast it runs, or how much memory it uses, or any other non functional property. what you get in return is a set of self testing code which you know works the same, because it still returns the same values. you also get for free an executable specification consisting of the tests and the header file, so if you wished you could throw away the library code and use the tests to drive a rewrite to the same api. but it all starts with test first, so that you don't write untestable code in the first place.
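    a stand alone test in that style is only a handful of lines; here is a sketch in python rather than c (add_totals is an invented stand in for the library call under test), with the exit status as the pass/fail signal so a makefile or any other runner can use it:

        import sys

        def add_totals(values):  # stand in for the public api being tested
            return sum(values)

        def main():
            expected = 6
            actual = add_totals([1, 2, 3])
            if actual == expected:
                print("PASS add_totals_basic")
                return 0
            print(f"FAIL add_totals_basic: expected {expected}, got {actual}")
            return 1

        if __name__ == "__main__":
            sys.exit(main())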
  180.  @talicopow  i'm afraid that is an implementation detail which varies by operating system, language, and data type. in some languages, especially interpreted ones, the only thing special about global variables is their scope, and the variables are placed on the heap, just like for any other routine, the difference being that the function in question is the main function, which does not get released until the program ends, and is otherwise managed just like any function. when the language is compiled and the operating system allows shared libraries, the code gets one section of immutable memory which can then be declared read only and shared between programs, so you don't need multiple copies. in modern compilers, like llvm, it was realised that there is a subclass of global variables that shares this dynamic, and thus can be stored in another block treated the same way. this needs support in the operating system to be able to handle such an immutable data block in shared libraries. not every global variable is immutable, and in interpreted languages the ability to determine this may be undecidable unless the language specifically supports declaring the variable to be immutable. the same also applies to local variables.
this becomes even worse if one of the main advantages of interpreted languages, access to the evaluation function, is provided to the programmer, as at that point all memory not declared to be immutable becomes dynamic, so the size of this block becomes zero when you cannot declare a variable that way. you then also have the additional problem that garbage collection has to be either on demand or incremental. on demand halts all program execution until a fairly full sweep of memory has been completed, which causes issues for real time programming, which requires the timings to be deterministic. the only solution to this is to do it incrementally, but the problem with that is that we have not been doing it for long, so the solutions are still at a fairly simplistic level in most implementations. also, while reflection and other methods of looking at the internal detail of object oriented programs add a lot to the power of debugging in a language, they do so by breaking a lot of the guarantees needed to allocate global variables to the immutable block. automated memory management is brilliant, but even when the programmer is granted the ability to tell the language to do things differently, it is very complicated. it requires systematic support at all levels, from the bios up to the runtime execution of the specific program, or it does not work as well as the proponents hype it up to work. in the end, it is the programmer who knows best how the code is intended to be used, while the language should enable them to clarify that intent and enforce it with the help of the lower level parts of the system.
  181. variable handling in programs is complicated, and in functions it is even harder. this has to do with two key properties of a variable, scope and mutability. when you declare a global variable in the main function, you know the scope is going to be global. if it is a value like pi, you also know it is not supposed to change, so in a compiled language you can allocate it to an immutable data block with global scope, and hope that the library management code supports and enforces this being a single copy of read only data. when you then pass pi to your function, it can go either as a global variable or as a parameter to that function. if the language uses call by value, you then know that it will have a scope of the function lifetime, and it will not be returned to the same variable. as it is a copy of the data, any changes will only last the lifetime of the function, so you can store it on the stack. even better, if nothing in the function writes to it, you can also decide at analysis time that it is immutable. the same applies to any other variable you explicitly declare to be local or immutable, so it can also go on the stack. anything that has to survive the termination of the function has to be treated by the function as global, but can be explicitly local to any calling function, or local to the library. depending on support within the language and the operating system, knowledge that something is global and immutable can be used to minimise total memory usage, and language support should be provided to enable the programmer to explicitly declare both scope and mutability, but most languages don't support that functionality.
to the extent that this is not supported, the language has to guess, but currently the algorithms which exist to support such guessing are fairly primitive, and the way the guessing is done tends to be both language and implementation specific. the guessing gets even harder if the language is interpreted, and even more so if you have complex objects with variable length data types in them. in some languages, like the original implementation of lisp, even the program code is not guaranteed to remain the same for the duration of the program, as a function is just another piece of data which can be changed by another function. anything which is correctly guessed to be read only and of fixed scope can go on the stack, which includes some global variables. literally everything else must go on the heap, at which point the problem of garbage collecting the heap gets further complicated by the same issues. you can only remove something from the heap if you know it is not needed any more. for simple but advanced data structures you can use reference counting to determine that a value has no users, and rapidly collect that part of the heap back into free space. if you know that in the first part of the program you are going to load in some data, use it to build another data structure used by the second part, after which it is no longer needed by the rest of the program, you could free off that part of the heap at that point, but only if you can tell the language it is safe to do so. otherwise it is back to guessing, and if you cannot be sure you have to keep it around. at the operating system level it is even worse, because it has to deal with every heap and every stack for every running program, and handle virtual memory for when you run out. memory management is not simple, and won't be for a long time, and it comes with large costs whatever solution you choose.
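    the reference counting part is the easiest bit to see directly; a small python sketch (illustrative only, and the exact numbers printed depend on the implementation, here cpython):

        import sys

        data = [1, 2, 3]
        alias = data                  # a second reference to the same list
        # getrefcount reports one extra reference for its own argument,
        # so this typically prints 3: data, alias, and the argument.
        print(sys.getrefcount(data))

        del alias                     # drop one user
        print(sys.getrefcount(data))  # typically 2 now
        # once the count of users reaches zero, that part of the heap can be
        # reclaimed immediately, without waiting for a full collection sweep.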
  197. there are some minor misunderstandings of some things in the threads which need to be a little clearer, so here goes. to produce an operating system from scratch, you need to either 1, write everything yourself, or 2, go online and download a bunch of stuff from people you don't know and hope you can trust the code. option 1 does not work very well; apple and microsoft did not do it, and neither did google or steam. it is slow and expensive. look at how long it took reactos to get to the point where people who were not kernel developers could work on it (not criticising them, this stuff is hard). this only leaves you with the second option, which you solve by going to conferences and establishing a network of trust through key signing parties. as this requires the person to show the other person id, it is moderately secure against everyone but state actors, who can just issue an id, and id thieves. all the network of trust does is produce a lot of people who can assert that the person you just met has been verified to them as being the person they say they are (for a certain level of verified). these people then commit code to version control, and once you get to centralised and distributed version control, you also have the person signing that they produced the work. this means that if it later turns out they were a problem, you can easily go back, track what they touched, and audit it if needed. it does not stop a bad actor like the xz maintainer; you need other processes for that. this gets you to the point where you can confirm that the code you got is the same as the code they distributed (at least if the system does cryptographic hashing, like git), and the network of trust identifies all of the contributors.
then you need to link the code together with the libraries it depends on. the original paper that started the nix package manager, which led to nixos, described its purpose as declaratively managing the exact version dependencies, so that you could be confident that what you used to build it last time is the same as what you used to build it this time, effectively pinning the exact versions of the build dependencies. it appears that the people behind nixos have extended this a bit, but the principle remains the same: if the dependencies change, then the key for the dependent packages will also change. guix did not like the nomenclature, and thus decided to declare it using scheme, but otherwise they do the same thing. this gets you to the point where you can compile stuff and be confident where all the code came from, as you have a complete audit trail. reproducible builds go one step further, validating that the stuff you then compile will always produce the same pattern of bits in storage. this is non trivial, for various reasons mentioned by others, and many more. declarative dependency management systems might also give you reproducible builds, but it is not what they were designed for. then you take the output of the reproducible build, put it in a package, and sign it. this gets you to the point where you, as the person installing it, can be confident that the binary packages you just installed are exactly the same as the stuff the original upstream contributors intended, with a few tweaks from your distribution maintainers to make it all work better together, and you can audit this all the way back to the original contributor to the upstream project if needed. none of this says anything about the quality of the code, or about the character of the contributors; you need other steps for that.
as the sysadmin for your business, you can go one step further and create versioned ansible install scripts to do infrastructure as code, but it does not add to the model, as your ansible scripts are just another repository you use. i hope this clarifies things a bit.
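    the "if the dependencies change, the key changes" idea is easy to sketch (a toy model in python, not how nix actually computes its store paths):

        import hashlib

        def package_key(name, version, dependency_keys):
            # the key covers the package and the exact keys of everything it
            # was built against, so changing any dependency changes this key.
            material = name + version + "".join(sorted(dependency_keys))
            return hashlib.sha256(material.encode()).hexdigest()[:12]

        zlib_old = package_key("zlib", "1.3", [])
        app_v1 = package_key("myapp", "2.0", [zlib_old])

        zlib_new = package_key("zlib", "1.3.1", [])
        app_v2 = package_key("myapp", "2.0", [zlib_new])

        # same application source, different dependency, different key
        print(app_v1 != app_v2)  # True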
  205. You call them requirements, which generally implies big upfront design, but if you call them specifications it makes things clearer. Tdd has three phases. In the first phase, you write a simple and fast test to document the specification of the next bit of code you are going to write. Because you know what that is, you should understand the specification well enough to write a test that is going to fail, and then it fails. This gives you an executable specification of that piece of code. If it doesn't fail, you fix the test. Then you write just enough code to meet the specification, and it passes, proving the test good because it works as expected and the code good because it meets the specification. If it still fails, you fix the code. Finally you refactor the code, reducing technical debt, and proving that the test you wrote is testing the API, not an implementation detail. If a valid refactoring breaks the test you fix the test, and keep fixing it until you get it right. At any point you can spot another test, make a note of it, and carry on, and when you have completed the cycle you can pick another test from your notes, or write a different one. In this way you grow your specification with your code, and use it incrementally to feed back into the higher level design of your code. Nothing stops you from using A.I. tools to produce higher level documentation from your code to give hints at the direction your design is going in.
This is the value of test first, and even more so of tdd. It encourages the creation of an executable specification of the entirety of your covered codebase, which you can then throw out and reimplement if you wish. Because test after, or worse, does not produce this implementation independent executable specification, it is inherently weaker. The biggest win from tdd is that people doing classical tdd well do not generally write any new legacy code, which is not something you can generally say about those who don't practice it. If you are doing any form of incremental development, you should have a good idea as to the specification of the next bit of code you want to add. If you don't, you have much bigger problems than testing. This is different from knowing all of the requirements for the entire system upfront; you just need to know enough to do the next bit. As to the issue of multi threading and micro services, don't do it until you have to, and then do just enough. Anything else multiplies the problems massively before you need to.
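    A compressed sketch of the three phases in python (the function and its specification are invented): the test documents the specification and is written first, the simplest code makes it pass, and a later refactor of the implementation has to leave the test green because the test only ever touches the public call.

        # phase 1: the executable specification, written first and watched failing
        def test_initials():
            assert initials("Ada Lovelace") == "AL"
            assert initials("linus") == "L"

        # phase 2: just enough code to make it pass
        def initials(full_name):
            return "".join(part[0].upper() for part in full_name.split())

        # phase 3: refactor freely (say, to handle extra whitespace differently);
        # the test stays green because it never looked at the implementation.
        test_initials()
        print("specification met")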
  216. There is a lot of talking past each other and marketing inspired misunderstanding of terminology going on here, so I will try and clarify some of it. When windows 95 was being written in 1992, every developer had a fork of the code, and developed their part of windows 95 in total isolation. Due to networking not really being a thing on desktop computers at the time, this was the standard way of working. After 18 months of independent work, they finally started trying to merge this mess together, and as you can image the integration hell was something that had to be seen to be believed. Amongst other things, you had multiple cases where the developer needed some code, and wrote it for his fork, while another developer did the same, but in an incompatible way. This lead to their being multiple incompatible implementations of the same basic code in the operating system. At the same time, they did not notice either the rise of networking, or the importance, so it had no networking stack, until somebody asked Bill Gates about networking in windows 95 at which point he basically took the open source networking stack from bsd Unix and put it into windows. This release of a network enabled version of windows and the endemic use of networking on every other os enabled the development of centralised version control, and feature branches were just putting these forks into the same repository, without dealing with the long times between integrations, and leaving all the resulting problems unaddressed. If you only have one or two developers working in their own branches this is an easily mitigated problem, but as the numbers go up, it does not scale. These are the long lived feature branches which both Dave and primagen dislike. It is worth noting that the hp laser jet division was spending 5 times more time integrating branches than it was spending developing new features. Gitflow was one attempt to deal with the problem, which largely works by slowing down the integration of code, and making sure that when you develop your large forks, they do not get merged until all the code is compatible with trunk. This leads to races to get your large chunk of code into trunk before someone else does, forcing them to suffer merge hell instead of you. It also promotes rushing to get the code merged when you hear that someone else is close to merging. Merging from trunk helps a bit, but fundamentally the issue is with the chunks being too big, and there being too many of them, all existing only in their own fork. With the rise in the recognition of legacy code being a problem, and the need for refactoring to deal with technical debt, it was realised that this did not work, especially as any refactoring work which was more than trivial made it more likely that the merge could not be done at all. One project set up a refactoring branch which had 7 people working on it for months, and when it was time to merge it, the change was so big that it could not be done. An alternative approach was developed called continuous integration, which instead of slowing down merges was designed to speed them up. It recognised that the cause of merge hell was the size of the divergence, and thus advocated for the reduction in size of the patches, and merging them more often. It was observed that as contributions got faster, manual testing did not work, requiring a move from the ice cream cone model of testing used by a lot of Web developers towards the testing pyramid model. 
Even so, it was initially found that the test suite spent most of its time failing, due to the amount of legacy code and the fragility of tests written against legacy code, which led to a more test-required and test-first mode of working, which moves the code away from the shape of legacy code and into a shape which is designed to be testable. One rule introduced was that if the build breaks, the number one job of everyone is to get it back to passing all of the automated tests. Code coverage being good enough was also found to be important. Another thing that was found is that once you started down the route of keeping the tests green, there was a maximum delay between integrations which did not adversely affect this, which turned out to be about one day. Testing became increasingly important, and slow test times were dealt with the same way slow build times were, by making the testing incremental. So you made a change, built only the bit which it changed, ran only those unit tests which were directly related to it, and once it passed, built and tested the bits that depended on it. Because the code was all in trunk, refactoring did not usually break the merge any more, which is the single most important benefit of continuous integration: it lets you deal with technical debt much more easily. Once all of the functional tests (both unit tests and integration tests) pass, which should happen within no more than 10 minutes and preferably less than 5, you have a release candidate which can then be handed over for further testing. The idea is that every change should ideally be able to go into this release candidate, but some bigger features are not ready yet, which is where feature flags come in. They replace branches full of long lived unmerged code with a flag which hides the feature from the end user. Because your patch takes less than 15 minutes from creation to integration, this is not a problem. The entire purpose of continuous integration is to try to prove that the patch you submitted is not fit for release, and if so, it gets rejected and you get to have another try, but as it is very small, this also is not really a problem. The goal is to make integration problems basically a non event, and it works. The functional tests show that the code does what the programmer intended it to do. At this point it enters the deployment pipeline described in continuous delivery. The job of this is to run every other test needed, including acceptance tests, whose job is to show that what the customer intended and what the programmer intended match. Again the aim is to prove that the release candidate is not fit to be released. In the same way that continuous delivery takes the output from continuous integration, continuous deployment takes the output from continuous delivery and puts it into a further pipeline designed to take the rolling release product of continuous delivery and put it through things like canary releasing so that it eventually ends up in the hands of the end users. Again it is designed to try it out, and if problems are found, stop them from being deployed further. This is where CrowdStrike got it so spectacularly wrong. In the worst case, you just roll back to the previous version, but at all stages you do the fix on trunk and start the process again, so the next release is only a short time away, and most of your customers will never even see the bug. 
This process works even at the level of doing infrastructure as a service, so if you think that your project is somehow unique and it cannot work for you, you are probably wrong. Just because something can be released, delivered, and deployed does not mean it has to be. That is a business decision, but that comes back to the feature flags. In the meantime you are using feature flags to do dark launching, branch by abstraction to move between different solutions, and the exact same code can go to beta testers and top tier users, just without some of the features being turned on.
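As a rough illustration of the feature flag idea described above, here is a minimal Python sketch; the flag names, the environment-variable flag store, and the pricing functions are all made up for the example, and real systems usually read flags from a config file or a flag service with per-user or per-cohort targeting:
```python
import os

# Hypothetical flag store: real systems usually read flags from a config file
# or a flag service rather than environment variables.
def flag_enabled(name: str) -> bool:
    """A feature is off unless explicitly turned on for this deployment."""
    return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

def legacy_pricing_total(cart):
    # The path every user currently sees.
    return sum(item["price"] * item["qty"] for item in cart)

def new_pricing_total(cart):
    # Placeholder for the in-progress feature that ships dark with every merge.
    return legacy_pricing_total(cart)

def checkout_total(cart):
    # The half-finished code is merged to trunk daily, but hidden behind the flag.
    if flag_enabled("new_pricing_engine"):
        return new_pricing_total(cart)
    return legacy_pricing_total(cart)
```
Flipping the flag for beta testers or top tier users is then a configuration change rather than a branch merge, which is the point of merging the half-finished code every day.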
    1
  217. 1
  218. 1
  219. 1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230. 1
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. 1
  240.  @phillipsusi1791  it is entirely about code churn. every branch is basically a fork of upstream (the main branch in centralised version control). the problem with forks is that the code in them diverges, and this causes all sorts of problems with incompatible changes. one proposed solution to this is to rebase from upstream, which is intended to sort out the problem of your branch not being mergeable with upstream, and to an extent this works if the implicit preconditions for doing so are met. where it falls over is with multiple long lived feature branches which don't get merged until the entire feature is done. during the lifetime of each branch, you have the potential for code in any of the branches to produce incompatible changes with any other branch. the longer the code isn't merged and the bigger the size of the changes, the higher the risk that the next merge will break something in another branch. The only method found to mitigate this risk is continuous integration, and the only way this works is by having the code guarded by regression tests, and having everybody merge at least once a day. without the tests you are just hoping nobody broke anything, and if the merge is less often than every day, the build from running all the tests has been observed to be mostly broken, thus defeating the purpose of trying to minimise the risks. the problem is not with the existence of the branch for a long period of time, but with the risk profile of many branches which don't merge for a long time. also, because every branch is a fork of upstream, any large scale changes like refactoring are by definition not fully applied to the unmerged code, potentially breaking the completeness and correctness of the refactoring. this is why people doing continuous integration insist on at worst daily merges with tests which always pass. anything else just does not mitigate the risk that someone in one fork will somehow break things for either another fork, or for upstream refactorings. it also prevents code sharing between the new code in the unmerged branches, increasing technical debt, and as projects get bigger, move faster, and have more contributors, this problem of unaddressed technical debt grows extremely fast. the only way to address it is with refactoring, which is the additional step added to test driven development, and which is broken by long lived branches full of unmerged code. this is why all the tech giants have moved to continuous integration, to handle the technical debt in large codebases worked on by lots of people, and it is why feature branching is being phased out in favour of merging and hiding the new feature behind a feature flag until it is done.
    1
  241. 1
  242. 1
  243. 1
  244. 1
  245. 1
  246. 1
  247. The best way to answer is to look at how it works with Linus Torvalds' branch for developing the Linux kernel. Because you are using version control, your local copy is essentially a branch, so you don't need to create a feature branch. You make your changes in main, which is essentially a branch of Linus's branch, add your tests, and run all of the tests. If this fails, fix the bug. If it works, rebase and quickly rerun the tests, then push to your online repository. This then uses hooks to automatically submit a pull request, and Linus gets a whole queue of them, which are applied in the order in which they came in. When it is your turn, either it merges ok and becomes part of everyone else's next rebase, or it doesn't, the pull is reverted, Linus moves on to the next request, and you get to go back, do another rebase and test, and push your new fixes back up to your remote copy, which will then automatically generate another pull request. Repeat the process until it merges successfully, and then your local system is a rebased local copy of upstream. Because you are writing small patches, rather than full features, the chances of a merge conflict are greatly reduced, often to zero if nobody else is working on the code you changed. It is this which allows the kernel to get new changes every 30 seconds all day every day. Having lots of small fast regression tests is the key to this workflow, combined with committing every time the tests pass, upstreaming with every commit, and having upstream do ci on the master branch.
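A hedged sketch of the rebase, retest, then push loop described above, as a small Python helper; the test command (`python -m pytest`) and the `origin`/`main` names are assumptions, and the automatic pull request submission would be handled by separate hooks on the server side:
```python
#!/usr/bin/env python3
"""Hypothetical helper for the rebase -> test -> push loop described above.

Assumes the tests run with `python -m pytest` and that the upstream remote
and branch are called `origin` and `main`; adjust to the project's reality.
"""
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

def main() -> int:
    # Rebase onto upstream so the tests run against the current trunk.
    if run(["git", "pull", "--rebase", "origin", "main"]) != 0:
        print("rebase failed: resolve the conflicts and rerun")
        return 1
    # Rerun the whole regression suite; any failure means fix it, don't push.
    if run(["python", "-m", "pytest", "-q"]) != 0:
        print("tests failed: fix the bug and rerun")
        return 1
    # Push to your public copy; server-side hooks can then raise the pull request.
    return run(["git", "push", "origin", "HEAD"])

if __name__ == "__main__":
    sys.exit(main())
```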
    1
  248. 1
  249. 1
  250. 1
  251.  @fuseteam  you seem very set on the idea that every provider downstream of redhat is just a rebrand, which just is not true. there were whole classes of people who were only using redhat and their derivatives because redhat, as part of their marketing, said that if you need enterprise timescales, then use us as your stable base and do respins and derivatives based on us. that is what centos was. people are annoyed because redhat promised 10 year support for centos 8, then ended it after only 1 year, while people were still migrating to it. even worse, they gave almost no warning. as to the derivatives, each exists for a specific reason, and supports customers redhat no longer wishes to support. clear linux is for an all intel hardware stack. rocky linux is for centos users where the move to rhel is not an option. scientific linux was a centos derivative with extra software which was needed mainly in places like fermilab and cern. oracle linux needed specific optimisations which made running their databases better. others were used for embedded systems and infrastructure, or for alternative architectures. pretty much all of these use cases were at one time actively supported by redhat or centos, and are now prohibited under their dodgy eula. even the case where the city of Munich needed to create a respin specifically for their 15000 seat council rollout to include extra software only they needed is now banned. redhat used an opencore approach in order to grow, and a use-us-as-upstream approach to enter markets that were not otherwise open to them. it had the added benefit of not fragmenting the enterprise linux market much. unfortunately for them, not everyone can suddenly switch to paying them lots of money on short notice, and even more cannot afford the rat-on-your-boss tactic made disreputable by microsoft and their enforcement arm, the business software alliance. when you run a business, you make a profit, and then decide how much of it to invest in research and innovation. the current management at redhat seems to think that it should work the other way around, where they decide what needs doing and how fast, and then try to force people who, with redhat's blessing, never needed to pay, to make up the shortfall. the current fracturing of the enterprise market is a direct consequence of this attitude, as is the percentage of redhat customers looking for ways not to be held hostage by the next silly move they make. the people who have forked rhel had a total right to do so, as redhat had encouraged them to do it. lots of them do testing for scenarios redhat does not support, and then push those changes both to stream and to the primary upstream developers so that they do not have to keep large patchsets supported out of tree. these patches and extra bug finding are then made available to rhel from either upstream directly, through fedora, centos, or directly as patches to redhat. this is fundamentally how open source works: someone finds a problem, develops a fix, and sends it upstream, and then the downstream users get to use it without necessarily having a need for support. when support is needed, then they find a company who is responsive to their support needs, which redhat increasingly is not. redhat has now become just another entitled proprietary software company who happens to use lots of open source software to try and keep the costs down, while the management has forgotten this fact and decided to stop playing well with others. 
this has already come back to bite them, and will continue to do so.
    1
  252. 1
  253. 1
  254. 1
  255. 1
  256. 1
  257. 1
  258. 1
  259. 1
  260. 1
  261. 1
  262. 1
  263. 1
  264. 1
  265. 1
  266. 1
  267. 1
  268. 1
  269. 1
  270. 1
  271. 1
  272. 1
  273. 1
  274. a lot of the comments made about universal basic income mirror those made about having a minimum wage. it is perfectly possible to have a universal benefit paid to everyone in the qualifying demographic; in fact we already have 2 of them, child benefit paid to the primary carer, and basic state pension paid to anyone over retirement age. in both cases it is a trivial test to see if you fit the demographic, and then the payment continues until you don't. the suggestion is that it might make sense to roll it out to other groups, and the evidence is already in that benefits that are not means tested are spectacularly cheaper to administer, so it makes a lot of sense to look at what other benefits could follow this route. we even have the universal credit single payment system already in place to make payments cheaper to administer. it is also true that it is harder to set up a universal basic income in countries where you have already got a raft of means tested social security benefits, but there has been lots of research done on the practicalities of such systems, and again, the evidence is already there to say that they are almost impossible to run in a way that doesn't result in numerous different types of counterintuitive outcomes, including:
irrecoverable overpayment - for example tax credits to pay for childcare
underpayment - just look at how the disabled are not being paid what they are supposed to get
massive bureaucracy to administer all the paperwork required by means testing
benefits traps - where you earn a little extra, and you lose huge amounts of benefits in response, so you have to work within those limits
ongoing benefits reform - trying to deal with all the other problems of means tested benefits
and lots of other problems. in countries where it has been tried properly, especially those without massive social security systems, it was found that there were a lot of positive outcomes as a direct result of not having those other problems:
the ill and carers saw lots of improvements due to not having to fight poverty, resulting in better physical and mental health
partners in couples who were not the primary breadwinner found it easier to escape from negative relationships, as they had their own independent source of income
people found it easier to start businesses, as they don't have to go from zero to a living wage almost instantly
the elderly could cut back on the hours at jobs they liked, and still be in work, as any shortfall is covered
there are a number of others as well. the stupidest objection here is about the unworthy poor. if you have not been caught as a criminal, there is no legal or moral reason to exclude you from benefits. if you have hard to prove medical conditions like back pain, that doesn't stop them from limiting what work you can do, so they also should not limit your benefits. as to affordability, it can be brought in the same way minimum wage was: start at a pittance, and have above inflation raises until that category of people achieve the appropriate level. you can also bring it in like the right to vote was brought in, extending the franchise gradually as you work out who should be entitled, and at what level. remember, this does not stop you from taxing people. tax credits for the employed have a taper which gets deductions taken from your wages as part of paye according to recorded earnings. people who have to put in a tax return have limits above which they get taxed according to their reported incomes. 
both can be worked out by existing government departments as they are every year, they just have to take the new income into account like every other source of income. the remaining questions are who should be eligible, and at what rate, which can be determined in the same ways benefits reformers already do it. don't throw away the idea just because some idiot comes up with a specific scheme which is overgenerous and which has not yet had any thought as to how to fund it. universal does not have to mean everyone is entitled to the same amount, it just means that everyone in some qualifying and trivial to test category is entitled to some specific amount. basic just means that it is not means tested, and gets paid to you as of right. and of course income in this sense just means that if you fit the criteria (like being a child or a pensioner), you actually get some money. none of this means it has to be non taxable. if i remember rightly, when child benefits go up, they just do not increase your tax limits by the same amount, so that if your family earns enough to pay tax, it is already revenue neutral. same with basic state pension.
    1
  275. 1
  276. 1
  277. 1
  278. 1
  279. 1
  280. 1
  281. 1
  282. 1
  283. 1
  284. 1
  285. 1
  286. 1
  287. 1
  288. 1
  289. 1
  290. 1
  291.  @RawrxDev  i would like to agree with you, but when i ask why they are sceptical, they don't have valid reasons, and just end up saying "because humans are special", and those who claim fake intelligence basically say "because there is something special about it when done by a human". i generally find that those people effectively remove themselves from the conversation, basically amounting to just the level of noise from some random fool. i would love to discuss with genuine sceptics with actual reasons what mechanisms could get in the way of agi and asi, but they don't seem to show up. one example could be that when humans think, something in the underlying machinery does something quantum, but then you have the question of what it is about wetware which makes it the only viable method for getting the result, and anyway, how come these pesky expert systems can also get the same result and show the same chain of reasoning as the expert. i would tend to say that llms and all other statistical ai and black box ai have the issue that they are basically blind alleys for anything but toy problems. there are a whole range of fields where even if they could be shown to produce the right results, their underlying model and the impossibility of fixing wrong answers and security holes just make them unsuitable for the job. agi needs symbolic ai, combined with multiple feedback cycles to figure out not only that the answer given was wrong, but why it was wrong, and what can be done differently to avoid making the same mistake next time. generally i tend to believe that ai will get smarter, using symbolic ai, and that there is no predefined upper limit for how good it can get, but i would like those who turn up with opposing views to actually have some basis for them, and to actually be bothered to voice them so that actual discussion can occur, rather than just saying "because", and thinking that makes them automatically right.
    1
  292. 1
  293. 1
  294. 1
  295. 1
  296. 1
  297. 1
  298. 1
  299. 1
  300. A lot of people are getting the law wrong here. First, copyright is created automatically for anything which does not fall under some very narrow restrictions as to what can be copyrighted. Second, the copyright automatically goes to the author unless you have a naff clause in your employment contract giving it to your boss, or you are allowed to sign a contributor license agreement and do so. Third, when you contribute to a project without a contributor license agreement, you retain your copyright, but license the project to distribute your code under the applicable license at the time you contributed. This cannot be changed without your consent. Fourth, this has been tested in court. In the usa it was found that the author and copyright holder retained copyright, and granted permission to use it under the applicable license. By rejecting the license by trying to change it, you are not complying with the license, and are distributing the code without permission, which is piracy. In a separate case, it was found that when the company tried to enforce its copyright, and included code it did not own without an appropriate license grant, they had unclean hands, and therefore were not allowed to enforce their copyright until after they had cleaned up their own act. This leaves any company not complying with previous licenses with a serious problem, unless all contributions are, and always have been, under a contributor license agreement transferring the copyright to the company, or they track down every contributor and get consent for the license change from every single contributor. If they cannot get that consent for any reason, then they have to remove the code of that contributor in order to distribute the software under the new license.
    1
  301. 1
  302. 1
  303. 1
  304. 1
  305. 1
  306. 1
  307. 1
  308. 1
  309. 1
  310. The reason it says do it more often is that a lot of the pain depends on the size of the change, and thus the difficulty in finding what to fix. By increasing the frequency you reduce the size of the change, and thus find the bit that caused the problem in a much smaller change. Doing version control more often reduces the size of the change, but having long lived feature branches causes merge hell. Integrating at least daily to mainline gets rid of merge hell, but amplifies the problem of lack of regression tests, specifically that you tend to write hard to test code. Adding the regression tests with the new code, and having a ratchet on the code coverage so you cannot commit if it drops the percentage, gradually fixes the regression test problem. This also gives you continuous integration if you run all the tests with every commit. However it increases your vulnerability to worthless and flaky tests and to the test suite being slow. By writing the tests first, you prove that they fail, and then when you add the new code, you prove that they pass, getting rid of worthless tests. Because this speeds up your progress, you have a bigger problem with technical debt. By adding refactoring to the cycle, you deal with the technical debt, which gives you the red, green, refactor cycle of test driven development. When most people start writing regression tests, they start with a codebase with one test: does it compile? Then they tend to work from user interface and end to end tests, having lots of trouble because such legacy code is not designed with testing in mind, and thus the tests are often either hard, fragile, or just plain impossible to add. This leads to a lot of opposition to testing. The solution to this is to teach people what real unit tests are before you add the regression testing requirement.
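The coverage ratchet mentioned above can be sketched very simply; this assumes the current percentage is produced by whatever coverage tool the project already runs and is passed in on the command line, and the baseline file name is invented for the example:
```python
#!/usr/bin/env python3
"""Hypothetical coverage ratchet: fail the commit if coverage drops.

Usage from a pre-commit or CI step:  python ratchet.py 84.2
The percentage comes from whatever coverage tool you already run; the
baseline lives in a small file that is committed alongside the code.
"""
import sys
from pathlib import Path

BASELINE = Path("coverage_baseline.txt")   # made-up file name for the sketch

def main() -> int:
    current = float(sys.argv[1])
    baseline = float(BASELINE.read_text()) if BASELINE.exists() else 0.0

    if current < baseline:
        print(f"coverage fell from {baseline:.1f}% to {current:.1f}% - commit rejected")
        return 1

    if current > baseline:
        # The ratchet only moves up: new tested code raises the bar for everyone.
        BASELINE.write_text(f"{current:.2f}\n")
        print(f"baseline raised to {current:.1f}%")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```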
    1
  311. 1
  312. 1
  313. 1
  314. 1
  315. 1
  316.  @travisSimon365 hurd would not have won if history was different. i was around at the time, and everyone was looking to work from bsd code, but the at&t vs berkeley case had a chilling effect. the other possibility was that lots of people wanted to extend minix to be more useful, but andrew tanenbaum not only wanted to keep it simple as a teaching tool, but also refused to allow others to maintain a set of patches in the same way as was done for the ncsa httpd server, which was how we got apache, literally "a patchy server" due to the sequence of patches on patches on patches which were maintained at the time. also remember that it was not until someone wanted to support a third architecture that the default changed from forking the codebase and then getting it to work on the new architecture, to instead bringing the code needed for each architecture into the mainline kernel, managed with configuration flags. so you had 386bsd being slowed down by at&t, minix expansion being actively opposed by tanenbaum, gnu's kernel being delayed by indecisiveness over how it should work, and multiple commercial unixes just being too expensive for students. then along comes linus, who like a lot of students wanted a unix workalike, and happened to be going to university in a country with a funding model that did not exist anywhere else. he even used the monolithic design, which he thought was worse, for speed. it was not that linus was cleverer, with some grand plan, just that everyone else could not stop shooting themselves in the foot. also, your alternatives at the time were either a primitive version of dos, or cp/m.
    1
  317. 1
  318. 1
  319. 1
  320.  @Xehlwan  the truth has now come out as to what happened. they created a file with a proprietary binary format. they ran it through a validator designed to pass and only to fail known bad versions, then when it passed, immediately pushed it to everyone with no further testing. what should have happened is this: create a readable file in a text format which can be version controlled, test it, and commit it to version control. generate the binary file from the text file, with a text header at the start (like everyone has been doing since windows 3.11), and immediately create a signature file to go with it. have the validator compiled as a command line front end around the code used in the driver, designed to fail unless the file is known to be good. this checks the signature, then looks for the text header (like in a gif file), then uses that header to decide which tests to run on the file, only passing it if all of the tests pass. run the validator as part of your continuous integration system. this tells you the signature matches, the file is good, and all other tests of the file and the driver passed, so it is ready for more testing. build the deliverable, and sign it. this pair of files is what gets sent to the customer. check the signature again as part of continuous delivery, which deploys it to some test machines, which report back a successful full windows start. if it does not report back, it is not releasable. then do a release to your own machines. if it screws up there, you find out before your customers see it and you stop the release. finally, after it passes all tests, release it. when installing on a new machine, ask if it can be hot fixed by local staff. use the answer to split your deployment into two groups. when updating, only let the fixable machines install it first. the updater should again check the signature file. then it should phone home. if any of the machines don't phone home, stop the release. only when enough machines have phoned home does the unfixable list get added, as it is more important that they stay up than that they get the update a few minutes earlier. if any of this had happened, we would not have even heard about it.
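A rough Python sketch of the fail-closed validator described above; the magic header, the plain SHA-256 digest file standing in for a real cryptographic signature, and the content checks are all assumptions for illustration:
```python
import hashlib
import sys

MAGIC = b"EXAMPLEFMT"   # made-up text header, like the magic bytes in a gif file

def check_signature(data: bytes, sig_path: str) -> bool:
    # Stand-in for a real signature check: compare against a published digest.
    expected = open(sig_path, "r", encoding="ascii").read().strip()
    return hashlib.sha256(data).hexdigest() == expected

def check_contents(data: bytes) -> bool:
    # Fail-closed: unless every check passes, the file is treated as bad.
    # A real validator would dispatch different checks per header version.
    if not data.startswith(MAGIC):
        return False
    body = data[len(MAGIC):]
    if len(body) == 0:
        return False
    if b"\x00" * 512 in body:      # a large block of zeros is never valid here
        return False
    return True

def main(path: str, sig_path: str) -> int:
    data = open(path, "rb").read()
    if not check_signature(data, sig_path):
        print("signature mismatch: reject")
        return 1
    if not check_contents(data):
        print("contents failed validation: reject")
        return 1
    print("file passed: hand it on to the rest of the pipeline")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```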
    1
  321. 1
  322. 1
  323. 1
  324. 1
  325. 1
  326. 1
  327. 1
  328. 1
  329. 1
  330. 1
  331. 1
  332. 1
  333. 1
  334. 1
  335. 1
  336. 1
  337. 1
  338. 1
  339. it is not business ethics which require the shift-your-company policy, but the resiliency lessons learned after 9/11 which dictate it. many businesses with what were thought to be good enough plans had them fail dramatically when faced with the loss of the data centers duplicated between the twin towers, the loss of the main telephone exchange covering a large part of the city, and being locked out of their buildings until the area was safe, while their backup diesel generators had their air intake filters clog and thus the generators fail due to the dust. the recovery times for these businesses, for those it did not kill, were often on the order of weeks to get access to their equipment, and months to get back to the levels they were at previously, directly leading to the rise of chaos engineering to identify and test systems for single points of failure and graceful degradation and recovery, as seen with the simian army of tools at netflix. load balancing against multiple suppliers across multiple areas is just a mitigation strategy against single points of failure, and in this case the bad actors at cloudflare were clearly a single point of failure. with a good domain name registrar, you can not only add new nameservers, which i would have done as part of looking for new providers, but you can also shorten the time that other people looking up your domain cache the name server entries to under an hour, which i would have done as soon as potential new hosting was being explored and trialed. as long as your domain registrar is trustworthy, and you practice resiliency, the mitigation could have been really fast. changing the name server ordering could have been done as soon as they received the 24 hour ransom demand, giving time for the caches to move and making the move invisible for most people. not only did they not do that, or have any obvious resiliency policy, but they also built critical infrastructure around products from external suppliers without any plan for what to do if there was a problem. clearly cloudflare's behaviour was dodgy, but the casino shares some of the blame for being an online business with insufficient plans for how to stay online.
    1
  340. 1
  341. 1
  342. 1
  343. 1
  344. 1
  345.  @jamesross3939  by its nature, diverse software stacks have some level of source incompatibility. just look at the problems in making the same program work across multiple versions of windows. as regards multiple distributions, we don't live in a world where everyone has identical needs, so naturally at some point you start to get divergence. this even applies with windows, where you have different releases for servers, desktops, oems, embedded, and others. these divergences naturally make it so that you cannot guarantee that a given program will work the same way, or at all, on the different platforms, and the only way to deal with that is lots of testing. true binary compatibility requires a level of control by the vendor which results in large groups of people being ignored (causing divergence through making new systems which address their needs), or severe levels of bloat (to accommodate needs most users do not have). often it does both. in particular, you would need every variant to use exactly the same versions of every library on exactly the same compiler, all the way down. good luck getting blind people to move to wayland, which currently has no support for them. the best mitigation we have at the moment is flatpaks, which package non interacting code with their needed library versions to produce cross distribution packages of user space applications. most distributions get created because their users have a need not covered by the mainstream ones, and a lot of them are extremely niche. their use case often will never become part of mainstream distributions, and their effect on the ecosystem as a whole is negligible. for others, the minority use case gradually becomes more important, getting mainstream adoption as the work by these niche distributions becomes available in software used outside those distributions, and the niche distribution either becomes irrelevant, or remains as a testbed which feeds back into the wider ecosystem. this is what happened with the real time linux distributions, and as more of their work made it upstream, fewer of their users needed the full real time systems.
    1
  346. 1
  347. 1
  348. 1
  349. 1
  350. 1
  351. 1
  352. 1
  353. 1
  354. 1
  355. 1
  356. 1
  357. 1
  358. 1
  359. 1
  360. the whole area of a.i. generated content is a minefield for everyone involved, especially with the way the current crop of a.i. tools work. at the moment, the best systems work by using absolutely huge amounts of data with no quality filtering to produce large statistical models with enormous numbers of parameters, to do what is in effect the same type of predictive word completion as is done when typing a text message on your mobile phone. there are a number of issues with this. first, it has no model of the underlying information that the sentences are based on, so it cannot use this missing model to check that it is not producing statistically plausible garbage, so to make use of the output, you need to fact check it yourself. second, nobody knows how it came up with the answer, not the user, the programmer, or even the program, so if you get sued, you cannot use how it works in detail to defend yourself against the claims, because for all you know, the system could be using statistical methods to do exactly what is claimed. third, as you narrow down the context, the information it was trained on becomes less of a creative work, and closer and closer to only having one or a few examples upon which to train for that specific output, leading to extremely close approximations to the original text as the hallucinated output. the extreme example of this is github copilot, trained on a massive collection of open source projects, which regularly produces code which is of dubious quality, due to being at no more than the average quality of all the code in all the software put into git by every programmer who ever used github to store the code they used to learn to program. even worse, code is so constrained that when you get it to create the code for you, it will often come up with the exact text that it was trained upon, which you then add to your code with no idea as to whether it had a compatible licence, potentially leaving you liable to infringement claims. even worse, the programmer using it will not have attributed the change to the program, leaving him to have to remember years later if a particular modification he added to the code was written by him, or by some tool, and the more productive the programmer, the more modifications he will have made. even worse, both the amount of data you need to feed in, and the length of time you need to train these models, scale exponentially, meaning that as currently imagined, these systems are interesting, but a dead end as far as continuing to improve the output is concerned. to make significant progress in improving a.i., you need it to actually model the knowledge, not just the form, and be able to tell you what steps it took in order to give you the information it gave, and what steps it took to avoid generating plausible garbage. the current state of a.i. is such that there are some things it can do better than human experts, but we cannot find out how, or which corner cases were not in its training data, and thus where it will get things wrong. imagine a self driving car that will happily drive you off the end of a dock, into a river, or over a cliff, or even into a crowd of people. who then has the legal and financial liability for the consequences?
    1
  361. 1
  362. 1
  363.  @qwesx  sorry, but that is just wrong. the licence gives you the right to modify, distribute, and run the code. what compiling locally does is move the legal liability from the distribution to the individual user, who is usually not worth going after. as regards testing in court, the options are relatively few, and apply the same no matter what the license is. if you claim not to agree to the license, then the code defaults back to proprietary, and you just admitted in court to using proprietary code without a license. if the licences are incompatible, your only choice is to get one side or the other to relicense their code under a compatible license for your usage, which is usually somewhere between unlikely and nearly impossible with most projects, meaning that again you do not have a valid license for the derivative work, and you just admitted to it in court. with zfs, the problem is even worse, as you have oracle, who sued google for having a compatible api, which was eventually resolved to be fair use, but only after costing millions to defend, and taking many years. because of this the linux kernel community will not take code providing any oracle apis without a signed statement from oracle that they will not sue, not because they do not think they will win, but because they cannot afford the problems that will occur if they do sue. individual distributions shipping zfs would face the same potential consequences, which is why most do not ship it. this leaves you back at moving the liability from the kernel, to the distribution, to the end user, where the benefits of suing most of them are just not worth it. as to trying it in court, there are lots of licenses, and lots of people either being too silly to check the licenses properly, or trying clever things to skirt the edges of legality because they think they have found a loophole. there are also lots of places to sue, and as floss is a worldwide effort, you have to consider all of them at once, which is why it is a really bad idea to try and write your own license. in america, people have tried the trick of not accepting the license, and have failed every time. the same is true in germany under a mix of european and german law. this covers the two biggest markets, and can thus be considered settled. what happens in every case is that the license forms a contract for you to use the otherwise proprietary code under more liberal terms, and when you reject it, it reverts back to the proprietary case, where you then have to explain why you are using the software without a license. trying to be clever has also been tried, and while the law is less settled than for rejecting the license, you need every judge in every venue to agree with your interpretation of the license, which normally does not happen, so you are back at being in breach of the license, and hoping to get a friendly judge who does not look to give punitive damages for trying to be too clever. the insurance risk usually is not worth it. the only other option is to try and comply with the license, but when you have multiple incompatible licenses this is not a valid option.
    1
  364. 1
  365.  @lapis.lazuli.  from what little info has leaked out, a number of things can be said about what went wrong. first, the file seems to have had a big block replaced with zeros. if it was in the driver, it would be found with testing on the first machine you tried it on, as lots of tests would just fail, which should block the deployment. if it was a config file or a signature file, lots of very good programmers will write a checker so that a broken file will not even be allowed to be checked in to version control. in either case, basic good practice testing should have caught it, and then stopped it before it even went out of the door. as that did not happen, we can tell that their testing regime was not good. then, they were not running this thing in house. if they were, the release would have been blocked almost immediately. then they did not do canary releasing, and specifically the software did not include smoke tests to ensure it even got to the point of allowing the system to boot. if it had, the system would have disabled the update when the machine rebooted the second time without having set a simple flag to say yes it worked. it could then have also phoned home, flagging up the problem and blocking the deployment. according to some reports, they also made this particular update ignore customer upgrade policies. if so, they deserve everything thrown at them. some reports even go as far as to say that some manager specifically said to ship without bothering to do any tests. in either case, a mandatory automatic update policy for anything, let alone some kernel module, is really stupid.
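The boot smoke test and phone-home flag described above might look something like this minimal sketch; the state file paths and function names are invented, and a real implementation would live inside the update agent itself rather than a standalone script:
```python
from pathlib import Path

# Made-up state files for the sketch; a real agent would keep these in its
# own state directory.
STATE = Path("/var/lib/example-agent")
TRIAL = STATE / "update-on-trial"          # written by the installer
ATTEMPTED = STATE / "trial-boot-attempted"

def should_load_new_update() -> bool:
    """Called early in boot: decide whether to use the newly installed update."""
    if not TRIAL.exists():
        return False                       # nothing on trial, keep current content
    if ATTEMPTED.exists():
        # We already booted with this update once and never reached the
        # "boot finished" point, so quarantine it and fall back.
        TRIAL.rename(STATE / "update-quarantined")
        ATTEMPTED.unlink()
        phone_home(ok=False)               # lets the vendor halt the wider rollout
        return False
    ATTEMPTED.touch()                      # mark that we are about to try it
    return True

def boot_completed() -> None:
    """Called once the machine is demonstrably up (login screen, services, etc.)."""
    if TRIAL.exists():
        # The trial boot worked: promote the update and report success.
        TRIAL.unlink()
        ATTEMPTED.unlink(missing_ok=True)
        phone_home(ok=True)                # success reports gate the next ring

def phone_home(ok: bool) -> None:
    pass  # placeholder: report the result so the deployment can be stopped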
    1
  366. 1
  367. 1
  368. not really, we all know how hard it is to fix bad attitudes in bosses. in the end it comes down to the question of which sort of bad boss you have. if it is someone who makes bad choices because they don't know any better, train them by ramming home the points they are missing at every opportunity until they start to get it. for example, if they want to get a feature out the door quick, point out that by not letting you test, future changes will be slower. if they still don't let you test, point out that now it is done, we need to spend the time to test, and to get it right, or the next change will be slower. if they still did not let you test, when the next change comes along, point out how it will now take longer to do, as you still have to do all the work you were not allowed to do before, to get it into a shape where it is easy to add the new stuff. if after doing that for a while there is still no willingness to let you test, then you have the other sort of bad boss. with that sort of boss, their only interest is climbing the company ladder, and they will do anything to make themselves look good in the short term to get the promotion. the way to deal with this is simply to get a paper trail of every time you advise him of why something is a bad idea, and him forcing you to do it anyway. encourage your colleagues to do the same. eventually one of the inevitable failures will get looked into, and his constantly ignoring advice and trying to shift blame to others will come to light. in the worst case, you won't be able to put up with his crap any more, and will look for another job. when you do, make sure that you put all his behaviour in the resignation letter, and make sure copies go directly to hr and the ceo, who will then wonder what is going on and in a good company will look to find out.
    1
  369. 1
  370. 1
  371. 1
  372. 1
  373.  @h7hj59fh3f  upon doing a little more research, the us and uk maintain your position, but many other countries don't. so putting the grant of public domain statement makes it public domain in all countries which recognise it, and including the cc0 license grants the closest equivalent in those countries which don't. intellectual property rules are country specific, and follow a pattern in how they are introduced. first, they don't exist, as the country has no domestic industries which need them; this allows domestic industries to form, copying the ip from other countries. the most obvious example of this is book publishing, where foreign books are copied on an industrial scale to develop a local consumer base for the ip. second, local ip starts being produced, so rules get introduced to protect the creator and licensee from other local (and later foreign) companies continuing to do what has been standard practice, as the local market needs enough revenue to help the local creators to be able to continue to create. third, they want to sell and license to foreign companies, so they have to sign up to international treaties providing either mutual recognition of each other's rules, or a standard set of working practices. the first is way better, for too many reasons to go into right now. fourth, at some point in this ip recognition process, 2 things happen as the country realises that ip protection needs to be time limited. the idea of public domain ip is accepted, with recognition of what terms cause it to expire, providing massive bonuses to the public from company abuses of old ip content, and they realise that different industries and different forms of ip have different timescales for return on investment, and need different expiry rules, after which the ip returns to the public domain. this protects the companies from other companies. trade dress (does it look like a mcdonalds) needs instant protection, for the duration of existence of the company, to prevent anyone else from pretending to be them. drug manufacturing can take 20 years and a lot of money to get to market, with a lot of products failing before they get there, so it needs relatively long timescales for exclusivity to recoup those expenses. books on the other hand make most of their income in the first few years, and almost never get a second round of popularity after their initial release, so much smaller timescales should be involved. and of course, sometimes creators create something for the public good, and want to put it straight into the public domain. due to the american political system being particularly vulnerable to lobbying, they are still not very far along with the public protection side of this, while being very aggressive with the company protection side. however these two sides need to balance for the good of everyone. some other countries are further along or better balanced than others, due to local circumstances. this difference in speed of evolution of the rules is just the most obvious reason why mutual recognition is better than forcing standard rules, but there are many others.
    1
  374. 1
  375. 1
  376. we now know what should have happened, and what actually happened, and they acted like amateurs. first, they generated the file, which went wrong. then they did the right thing, and ran a home built validator against it, but not as part of ci. then after passing the validation test they built the deliverable. then they shipped it out to 8.5 million mission critical systems with no further testing whatsoever, which is a level of stupid which has to be seen to be believed. this then triggered some really poor code in the driver, crashing windows, and their setting it into boot critical mode caused the whole thing to go into the boot loop. this all could have been stopped before it even left the building. after validating the file, you should then continue on with the other testing, just like for any other change. this would have caught it. having done some tests, and created the deployment script, you could have installed it on test machines. this also would have caught it. finally, you start a canary release process, starting with putting it on the machines in your own company. this also would have caught it. if any of these steps had been done it would never have got out the door, and they would have learned a few things. 1, their driver was rubbish and boot looped if certain things went wrong. this could then have been fixed so it would never boot loop again. 2, their validator was broken. this could then have been fixed. 3, whatever created the file was broken. this could also have been fixed. instead they learned different lessons. 1, they are a bunch of unprofessional amateurs. 2, their release methodology stinks. 3, shipping without testing is really bad, and causes huge reputational damage. 4, that damage makes the share price drop off a cliff. 5, it harms a lot of your customers, some with very big legal departments and a will to sue. some lawsuits are already announced as pending. 6, lawsuits hurt profits. we just don't know how badly yet. 7, hurting profits makes the share price drop even further. not a good day to be crowdstrike. some of those lawsuits could also target microsoft for letting the boot loop disaster happen, as this has happened before, and they still have not fixed it.
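A hedged sketch of the canary release gating described above: ship to your own machines first, wait for them to phone home, and refuse to widen the rollout if too few report back healthy. The ring names, threshold, wait time, and the deploy and reporting functions are all assumptions:
```python
import time

# Illustrative rings, smallest and most expendable first.
RINGS = ["own-company-machines", "opt-in-early-customers", "everyone-else"]

SUCCESS_THRESHOLD = 0.99   # fraction of machines that must phone home healthy
WAIT_SECONDS = 30 * 60     # how long to wait for reports before deciding

def deploy_to(ring: str, build_id: str) -> int:
    """Placeholder: push the signed build to every machine in the ring.
    Returns how many machines were targeted."""
    raise NotImplementedError

def healthy_reports(ring: str, build_id: str) -> int:
    """Placeholder: count machines in the ring that phoned home healthy."""
    raise NotImplementedError

def rollout(build_id: str) -> bool:
    for ring in RINGS:
        targeted = deploy_to(ring, build_id)
        time.sleep(WAIT_SECONDS)                 # give the ring time to report back
        ok = healthy_reports(ring, build_id)
        if targeted == 0 or ok / targeted < SUCCESS_THRESHOLD:
            print(f"ring '{ring}': only {ok}/{targeted} healthy - halting rollout")
            return False                         # nobody further out ever sees it
        print(f"ring '{ring}': {ok}/{targeted} healthy - widening rollout")
    return True
```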
    1
  377. 1
  378. 1
  379. 1
  380. 1
  381.  @marcogenovesi8570  if you mean the rust developers expecting the maintainers to go and hunt up a lot of extra semantic information not needed in c just to comply with rust's expensive type system, and calling it documentation, that is one aspect of it. when you choose to work in a different language, which has tighter requirements, you make building the language api bindings harder. that is fine, but then you have to be prepared to do the work to find that extra information, and only after you think you have got it right do you get to call a request for confirmation "documentation". this happened with the ssl package in debian, where the person who initially wrote the code was not the person who provided the clarification, resulting in a major security hole, but the patch developers did the work and asked "is it case a or case b", and got the wrong answer back, because the answer is not always obvious. this is why the c maintainers push back at the claims that it is just documenting the api, and that it is cheap, when it is neither. like with kent, and some of the systemd developers, the issue is not the language the code is being developed in, but the semantic mismatch between the information needed by the existing work, and potential ambiguities relating to how people want to use the existing apis in a different way to how they are currently being used, which might require disambiguation, which might require digging around in the code base and mailing lists to see if a close enough use case came up in potentially thousands of posts in the discussion to clarify the new semantics for the previously unconsidered use case. the time for them to do this is at merge time if there is an issue, not all upfront because "it is just documentation". the general principle in most code bases is that if you want to create new code, go for it, but when you want to merge it with the existing mainline code base, do it in small separate chunks and be prepared to do the extra work not just to get it working, but to move it to a shape that is compatible with the main code base, and if it is more than a drive by bug fix, expect to stick around and be prepared to do a lot of the maintenance yourself. this goes double if you are working in a different language than the maintainers. otherwise, it eventually gets treated as unmaintained code, and deprecated prior to removal. again, it comes down to respecting the existing development process, being willing to work within it, and if you need minor changes to that process to make the work easier for both sides, working within the existing change process to gradually move the standard way of doing things in the desired direction, while bearing in mind that in the kernel there are 15000 other people whose direction does not necessarily match yours. kent does not get this, so i see his code getting booted unless someone who does steps up to maintain it. the systemd guys did not get it either, which is why kdbus went nowhere, after getting a lot of push back from lots of kernel maintainers. a significant subset of the rust in the kernel developers don't seem to get it either, harming their case and making things harder for their codevelopers. this problem is not confined to the kernel. 
middleware developers like systemd, gtk, wayland, and others seem to forget that it is not just their pet project, and that in the case of middleware they not only have the same problems as the kernel, with new developers having to play nice with their rules, but, sitting in the middle with other communities involved, they also need to play nice with those below them, and not cause too many problems for those above them in the stack.
    1
  382.  @alexisdumas84  i am not suggesting that every rust dev wants the maintainers to do everything, only that those who don't are conspicuously absent when it comes to voicing dissenting opinions, or are failing to see how their additional semantic requirements to get the type system to work cause a mismatch between what information is needed to do the work, and when it is needed. for c, it comes when the patch is finished and you try to upstream it, at which time any such problems result in considerable rework to get from working code to compatible code. this is why the real time patch set took nearly 20 years to get fully integrated into the mainline. for rust, all this work seems to need to be done upfront to get the type system to work in the first place. this is a major mismatch, and the language is too new and unstable for the true costs of this to be well known and understood. rust might indeed be as great as the early adopters think, with minimal costs for doing everything through the type system as some suggest, but there is an element of jumping the gun in the claims due to how new the language is. python 3 did not become good enough for a lot of people until the .4 release, and for others until the .6 release. as you maintain your out of tree rust kernel, with any c patches needed to make it work, have fun, just make sure that when it comes time to upstream it, the maintainers are able to turn on a fresh install of whatever distro they use, do the equivalent of apt get install kernel tools, and then just build the kernel with your patches applied. it is not there yet, and thus some code will stay in your out of tree branch until it is.
    1
  383. 1
  384. 1
  385. 1
  386.  @robertasm20  the unix wars were not about minor incompatibilities between vendors, but about lots of mutually incompatible extensions to "differentiate" the products, to the extent that it made it impossible to write programs on one system which would even compile properly on other vendors' products. on linux the situation is completely different. most programs you can download and compile on any distribution, and be confident you will get working software. while we do have some issues with different basic tooling around packaging, and the extent to which not every distribution is in thrall to systemd, most of that can be engineered around with the use of tools like ansible. this is because most of the tools in question were trying to solve near identical problems, and thus came up with very similar solutions, but with different tool names and flag names. this makes translation between them fairly easy. most of the desktop level issues are resolvable using flatpak packages. most of the remaining issues are due to library providers not doing versioning properly, and to things like the timings and support for switching over from one major subsystem to its replacement, where some distributions ship it before it is ready, and others are forced to wait too long, as the maintainers of the new subsystem cannot seem to understand that while it is done for them, it is somewhere between unusable and dev complete for whole other categories of users. i don't really have a good solution to either problem, but that does not make the issue go away.
    1
  387.  @robertasm20  The only system where the measure of compatibility is binary compatibility is windows, as that was often the only choice. even there, you have the issue that backward compatibility is often only supported for a few years, requiring the programmer to do a significant rewrite to get a new version working on the next version of windows, which, because it needs to use the new framework, often won't work on the previous version of the os. pretty much every other operating system in modern times has been based upon source code compatibility, often requiring just copying the code across and recompiling. on these systems, binary files and packages are usually either about efficiency, or caching. in any event, this is usually done by your distribution maintainers, and is thus not an issue unless you want something unusual, in which case you probably have to compile it yourself anyway. as to the size of flatpaks, the issue there is that not every program is packaged by either the developer or the distributor for every specific distribution. flatpaks, snaps, and appimages were designed to overcome this problem until the distributions could get their act together and make things mostly work for most packages. The usual approach for all of these systems is to bundle the specific library versions needed in with the applications. for most people, neither the bandwidth nor the disk space are in short enough supply to force the distribution maintainers to fix the library versioning problem, so you just have to put up with it. this has been the standard for apple binaries for decades, storing every library in the same directory as the application. windows ships the same way, but ends up dumping the libraries in a common windows library directory, with mandatory library versioning, so that you can have version 1 for the programs that need it, and later add version 2 for programs that need that, without breaking all the programs which need version 1. unfortunately, most library programmers on linux have not got that message, so lots of programs break when semantic versioning is broken, with major versions keeping the same name, so you have to choose which programs to dump. until you fix that, the windows solution is not an option, so you have to fall back to the apple solution.
    1
  388. 1
  389. 1
  390. 1
  391. 1
  392. 1
  393. 1
  394. 1
  395. 1
  396. 1
  397. 1
  398. 1
  399.  @vallejomach6721  yes they have multi party systems, and with them they have weak coalition governments which are apt to fall apart at any time if someone raises the wrong issue. The UK has a 2 party first past the post system, which is vulnerable to weak oppositions, and really sidelines the votes of any group trying to become the third party, and if it successfully transitioned to a three party system, it would change to have the above mentioned problems of coalition governments. Benign dictatorships tend to be the most stable, with strong government, right up until they are not, and all dictatorships tend to get really unstable at transition time, due to the lack of a good system for choosing the next dictator. This applies to pure monarchies as well. Constitutional monarchies tend to have evolved from pure monarchies, with a gradual devolution of power, but have the advantage, when done well, of preventing the slide into becoming a banana republic. As I say, each system has its flaws and its strengths, but currently we don't have the evidence to determine which is best, only some evidence from history which clearly states that some systems, especially dictatorships, are much worse than others. Yes, America currently has a part of its population trying to subvert the system, but all systems are subject to this in one form or another, and it will either get a lot better or a lot worse in the next decade or so, but it is too early to tell which way it will go. What we do know is that the actions of trump and maga have made it a lot weaker than it used to be. I am not particularly promoting or defending any particular system, and am mainly only criticising dictatorships for their catastrophic failure modes. This alone I think makes them one of the weakest and least suitable systems out there, but for the rest it is swings and roundabouts.
    1
  400. 1
  401. 1
  402. 1
  403. 1
  404. Yet another video rehashing someone else's misunderstanding of what universal basic income is, but with a nice section dealing with the problems of advanced automation. Every video and speech dealing with ubi covers the silly idea of replacing all social security benefits with one single payment made to every adult at a large value, then fails to deal with how you would pay for it. Unfortunately, while this simplistic model makes it easy to talk about, it does so at the price of ignoring what ubi actually is, and just discredits the whole idea by not thinking about how to fund it. What ubi actually is, is an alternative to means tested benefits in the social security system. Means tested benefits can be complicated to work out, difficult to comply with, and subject to lots of variability in personal circumstances, resulting in an expensive to run system with significant levels of over and under payment, vilification of the poor who make innocent mistakes, and heavy handed efforts to claw back any overpayment. In contrast, a basic state pension is easy to work out. Have you reached retirement age, and are you still alive? If yes, then pay the money. It doesn't need rechecking every few weeks like means tested benefits do, only when the circumstances change, ie you reach retirement age or you die, so it is cheap to administer. If you don't like paying it to the rich, use the money you save on administration costs to do a one time evaluation of how much you would have to move the starting points of the tax brackets to make it revenue neutral, and make it taxable. If you then decide to pay another ubi, like child benefit, it also has simple qualifying criteria. Are you between the age of zero and whatever age you decide entitlement ceases, alive, and resident in the country? If yes, then pay the custodial parent. Also make it taxable and adjust the tax levels by the amount needed to make it revenue neutral, like we already do in the uk. However, nothing says you have to pay the child the same figure as the pensioner. If you then decide that you want to pay everyone a fixed amount on top of other benefits, you can make it payable either to every resident or every citizen, and bring it in like we brought in the minimum wage in the uk: start it at pathetically low amounts, and then give above inflation increments until it reaches the level you can afford. Again, make it taxable and revenue neutral, and set it to be an increasing percentage of the total tax take as and when you can afford it. So you could take the usa population of roughly 330 million, and give them all a social security number from birth. Then each individual gets a government bank account which the benefit is paid into, and you pay them 1 dollar per year at a cost of roughly 330 million taxable dollars per year. Every other benefit, whether ubi or means tested, can then be paid into the account keyed to the social security number, saving administration costs for other benefits. If the individual is vulnerable, then their registered primary carer can draw on it for that person. Now you have your model implemented, for what is basically a rounding error on your multi trillion dollar usa budget. Going back to child benefit, you can reduce it by the general ubi amount, and once it reaches zero, just delete it. Same with basic state pension, or unemployment benefit, or disability benefits.
You get an even bigger benefit with means tested benefits, as every time you increase the general ubi amount, you move more people out of needing them, and more importantly it becomes easier to find a generic qualifying criterion for those who are left, so they can be moved to another, cheaper ubi, saving another fortune in admin costs.
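As a rough back of the envelope check on the rounding error claim (the population and federal spending figures here are approximations I am assuming, not numbers from the video):

      population = 330_000_000              # approximate us population
      payment_per_year = 1                  # dollars: the token starting amount above
      cost = population * payment_per_year
      federal_spending = 6_000_000_000_000  # very rough annual us federal spending, dollars
      print(cost)                           # 330,000,000 dollars per year
      print(round(100 * cost / federal_spending, 4))  # about 0.0055 percent of spending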
    1
  405.  @FutureAzA I just rewatched the video, and your funding proposal is basically to either live in a country with a massive resource dividend, which is not most countries, or use some combination of cost reduction through automation and some form of robot tax. Even worse, you expect a government which is, by your own statements, so dysfunctional that it took ages to decide to implement something as easy as a do not call list, to suddenly get its act together and organise a pump priming robot innovation scheme and a corresponding bot tax system. You asked in the video what you got wrong, and asked us to tell you in the comments, so that is what I did. First, like everyone else doing videos about ubi, you had a fundamental misunderstanding as to what ubi actually is, and confused it with the proposals for a generalised ubi. I therefore clarified what it is, and why the general understanding of ubi used in videos is not the right way to talk about it. You then proposed a funding model for this misunderstood version of ubi which was not politically practical, so I proposed an alternative way of doing it which is viable, and how to implement it. You also overestimated both the competence and the speed of rollout of artificial intelligence, which is at the heart of your "let bots do it" model. The speed of innovation needed for your model just is not there. We have been working on A.I. since the 50s, and while the progress is finally getting worth talking about, it is nowhere near the level needed to implement your bots plan, and that is just on the technical level. Using copilot as an example, technically it is little more than a slightly more advanced version of the autocorrect seen on mobile phones. Even worse, the model was made with a data set which did not include any tagging as to how good or bad the code was; it just slurped up the entirety of all open source software, ignoring that a lot of that code was written by people learning the languages and the tools, so the quality of what is produced is generally pretty dreadful. Finally, you have its biggest problem: it did not take into account the fact that all of the code used to train it belongs to other people, so what it outputs is usually in breach of both the software license and the copyright of the originating author, so anyone using it to write code is just asking to get sued back to the stone age. That is just the problems with generative A.I.; if you then move on to safety critical A.I. you get the additional problem of when it gets it wrong and someone gets injured or dies, who is liable for paying compensation? Using your self driving car example, if it kills a kid, is it the driver for not overriding the software, the car maker for including the software, the software company for it not avoiding the kid, or someone else? All of those legal questions matter, and it will take decades of lawsuits to get to a position where the answers are matters of settled law, and it will be settled one expensive lawsuit at a time, because as you pointed out, the politicians have neither the competence to figure it out nor the will to even look at it. So yes, the previous post was long, because your approach provided a simplistic and impractical solution to a complex problem, and was riddled with bad assumptions, and that always requires complex answers. But in my defense, you did ask us to tell you if you got anything wrong, so I did.
I also targeted you, the original author of the video, as the intended audience, while trying to keep the explanations simple enough that other interested individuals can still understand them. As to whether I expect people to read a long post: it is social media, which means that because the audience of all social media skews below average, you cannot expect people to read anything more complex than clickbait on simple subjects, and you have to have even lower expectations as to their levels of understanding of complex problems. Having said that, the only solution to that problem is to try to provide them with the tools and information so that they can get better with time, and who knows, the person reading it might actually be able to understand it and get some value from it. As to the specific post, I only expect one person to read it: the person who asked to be educated as to any mistakes he made. I also expect that as someone deciding to provide an opinion on the subject, you will have done enough background research to at least be able to comprehend the feedback at a level that is slightly higher than your average netizen. However, I do not expect you to accept that I am any more right in my views than you are, only that you consider the possibility that some of what I said might be useful when you decide to do your next video on the subject, and then only because you asked for corrections of any mistakes. If anyone else finds it useful, informative, and possibly thought provoking, that is just a bonus.
    1
  406. 1
  407. 1
  408. 1
  409. 1
  410. 1
  411. 1
  412. 1
  413. 1
  414. 1
  415. the main problem here is that prime and his followers are responding to the wrong video. this video is aimed at people who already understand 10+ textbooks worth of stuff with lots of agreed upon terminology, and is aimed at explaining to them why the tdd haters don't get it, most of which comes down to the fact that the multiple fields involved build on top of each other, and the haters don't actually share the same definitions for many of the terms, or of the processes involved. in fact in a lot of cases, especially within this thread, the definitions the commentators use directly contradict the standard usage within the field. in the field of testing, testing is split into lots of different types, including unit testing, integration testing, acceptance testing, regression testing, exploratory testing, and lots of others. if you read any textbook on testing, a unit test is very small, blindingly fast, does not usually include io in any form, and does not usually include state across calls or long involved setup and teardown stages. typically a unit test will only address one line of code, and will be a single assert that when given a particular input, it will respond with the same output every time. everything else is usually an integration test. you will then have a set of unit tests that provide complete coverage for a function. this set of unit tests is then used as regression tests to determine if the latest change to the codebase has broken the function, by asserting as a group that the change has not changed the behaviour of the function. pretty much all of the available research says that the only way to scale this is to automate it. tdd uses this understanding by asserting that the regression test for the next line of code should be written before you write that line of code, and because the tests are very simple and very fast, you can run them against the file at every change and still work fast. because you keep them around, and they are fast, you can quickly determine if a change in behaviour in one place broke behaviour somewhere else, as soon as you make the change. this makes debugging trivial, as you know exactly what you just changed, and because you gave your tests meaningful names, you know exactly what that broke. continuous integration reruns the tests on every change, and runs both unit tests and integration tests to show that the code continues to do what it did before, nothing more. this is designed to run fast, and fail faster. when all the tests pass, the build is described as being green. when you add the new test, but not the code, you now have a failing test, and the entire build fails, showing that the system as a whole is not ready to release, nothing more. the build is then described as being red. this is where the red-green terminology comes from, and it is used to show that the green build is ready to check in to version control, which is an integral part of continuous integration. this combination of unit and integration tests is used to show that the system does what the programmer believes the code should do. if this is all you do, you still accumulate technical debt, so tdd adds the refactoring step to manage and reduce technical debt.
refactoring is defined as changing the code in such a way that the functional requirements do not change, and this is tested by rerunning the regression tests to demonstrate that the changes have indeed improved the structure without changing the functional behaviour of the code. this can be deleting dead code, merging duplicate code so you only need to maintain it in one place, or one of hundreds of other behaviour preserving changes which improve the code. during the refactoring step, no functional changes to the code are allowed. adding a test for a bug, or to make the code do something more, happens at the start of the next cycle. continuous delivery then builds on top of this by adding acceptance tests which confirm that the code does what the customer thinks it should be doing. continuous deployment builds on top of continuous delivery to make it so that the whole system can be deployed with a single push of a button, and this is what is used by netflix for software, hp for printer development, tesla and spacex for their assembly lines, and lots of other companies for lots of things. the people in this thread have conflated unit tests, integration tests and acceptance tests all under the heading of unit tests, which is not how the wider testing community uses the term. they have also advocated for the deletion of all regression tests based on unit tests. a lot of the talk about needing to know the requirements in advance is based upon this idea that a unit test is a massive, slow, complex thing with large setup and teardown, but that is not how it is used in tdd. there you are only required to understand the needs of the next line of code well enough that you can write a unit test for that line which will act as a regression test. this appears to be where a lot of the confusion is coming from. in short, in tdd you have three steps: 1, understand the needs of the next line of code well enough that you can write a regression test for it, write the test, and confirm that it fails. 2, write enough of that line to make the test pass. 3, use behaviour preserving refactorings to improve the organisation of the codebase. then go around the loop again. if during stages 2 and 3 you think of any other changes to make to the code, add them to a todo list, and then you can pick one to do on the next cycle. this expanding todo list is what causes the tests to drive the design. you do something extra for flakey tests, but that is outside the scope of tdd, and is part of continuous integration. it should be pointed out that android and chromeos both use the ideas of continuous integration with extremely high levels of unit testing. tdd fits naturally into this process, which is why so many companies using ci also use tdd, and why so many users of tdd do not want to go back to the old methods.
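For concreteness, here is a minimal sketch in python of the kind of unit test being described: a pure function, no io, no shared state, one assert per test, fast enough to run on every change. the function and test names are made up for illustration; in tdd you would write each test first, watch it fail, and only then write the line that makes it pass.

      import unittest

      def is_leap_year(year):
          # the small piece of production code under test
          return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

      class LeapYearTest(unittest.TestCase):
          # each test is a single fast assert with a meaningful name,
          # so a red build tells you exactly which behaviour broke
          def test_year_divisible_by_four_is_a_leap_year(self):
              self.assertTrue(is_leap_year(2024))

          def test_century_not_divisible_by_400_is_not_a_leap_year(self):
              self.assertFalse(is_leap_year(1900))

      if __name__ == "__main__":
          unittest.main()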
    1
  416. 1
  417. 1
  418. 1
  419. 1
  420. 1
  421. 1
  422. 1
  423. 1
  424. 1
  425. 1
  426. 1
  427. 1
  428. 1
  429. 1
  430. 1
  431.  @scifirealism5943  as opposed to means tested benefits, which create massive problems for the poor even when they want to work. stories abound about disabled people who cannot try going back to work to see if they can cope with it, because they would immediately lose their benefits, and would then have to fight to get them back if they could not cope. similar stories exist about parents who get an extra couple of hours of work and lose their help with childcare, meaning that they would have to pay for their childcare in full, more than offsetting any benefit they would gain from doing the extra work. even worse, often it is up to the person to notice that they have gone over the arbitrary limit in a timely fashion, which can result in thousands of pounds being clawed back from people who have already spent it on childcare, and thus do not have it to repay, and who then might face a fraud investigation for a simple mistake. similar issues exist with most means tested benefits, and the only way to mitigate this is to further complicate the system with taper reliefs, which need even more micromanaging of the benefit requirements, often on a monthly or even weekly basis, which further raises the administration costs. this is why most developing countries are looking at means testing as a last resort when it comes to implementing social security systems. are you suggesting that we should deliberately disempower the poor, even when they are trying to do the right thing and raise their standard of living? we have tried that, and it is called the social security system, and it comes under constant criticism for not helping those it was designed to help, and for massive levels of overpayment which are not split up between deliberate fraud and innocent mistakes, causing massive misrepresentation of the levels of fraud in the system. due to this it is then much more expensive to run, and looks like it is being milked by fraudsters, causing demands for almost continuous reform, which then needs additional costs and complexity to not make it look like they are trying to punish the existing claimants. as to the problem of looking like it rewards the lazy, most people do something constructive when they are no longer under time and financial pressure. fathers might be able to see their young children more, rather than having to spend all their time at work. the disabled might be able to try helping in a charity shop for a few hours a week to see how much they can do, or actually have the free time and money to take better care of their health. others use the free time to learn, either hobbies or skills which make them more employable. at the same time you are not spending lots of money on administering the benefits, or doing fraud investigations for innocent mistakes. also, most of the discussion seems to revolve around the idea that you would implement it as a single universal benefit, but we do not do that with means testing, so why would we for benefits which remove means testing?
    1
  432. 1
  433. 1
  434. 1
  435. 1
  436. 1
  437. 1
  438. 1
  439. 1
  440. just do it right the first time is the best approach, but only when you have precise knowledge of the inputs, the outputs, the transforms you need to get from one to the other, and whatever other constraints like speed and memory usage apply. unfortunately, while this may work for things like low level drivers, over 60% of projects don't know this stuff until you try to write it, and find out it was exactly what the customer did not want. tdd and ci work by having the developer write functional unit and integration regression tests to determine if the developer wrote what they thought was needed. acceptance tests are typically written when it is presented to the customer, to confirm that the developer actually had a clue what the customer requirements were, and as the customer does not know this on 60% of projects, that cannot be done at tdd and ci time. instead, they are written when the customer is available, and then executed as part of the deployment pipeline, which checks that it provides both the functional and non functional aspects needed by the customer. a few of these tests can be added to the ci system, but mostly they do a different job. tdd creates relatively clean, testable code with tests that you know can fail. ci takes these tests and highlights any code which no longer meets the executable specification provided by the tests, and thus does not do what was needed. cd and acceptance tests deal with how it works: is it fast enough, does it use a small enough set of resources, anything which should stop you deploying the code. monitoring and chaos engineering check the things that can only be discovered in production, ie how it scales under load.
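A minimal sketch of that layering, using hypothetical placeholder checks rather than any particular ci tool: cheap developer-facing checks run first, and the more expensive customer-facing ones only run if everything before them is green.

      def run_stage(name, checks):
          # run every check in a stage; any failure stops the pipeline here
          for check in checks:
              if not check():
                  print(f"{name} stage failed")
                  return False
          print(f"{name} stage passed")
          return True

      # hypothetical placeholder checks standing in for real test suites
      unit_tests        = [lambda: True]   # fast tdd-style regression tests
      integration_tests = [lambda: True]
      acceptance_tests  = [lambda: True]   # does it do what the customer wanted?
      non_functional    = [lambda: True]   # fast enough, small enough, deployable?

      def pipeline():
          stages = [("unit", unit_tests), ("integration", integration_tests),
                    ("acceptance", acceptance_tests), ("non functional", non_functional)]
          for name, checks in stages:
              if not run_stage(name, checks):
                  return "not releasable"
          return "release candidate"

      print(pipeline())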
    1
  441. 1
  442. 1
  443. yes, your typical symbolic ai will have a fragile system, because it is only using shallow knowledge in the rules. as pointed out in this interview, statistical systems like chatgpt have an even bigger problem with this issue, as they only know what is statistically plausible, with nothing constraining that, and as also pointed out in the interview, when looking at the statistical data in the hospital example, most of it was coincidence, and thus noise, which is why such systems produce hallucinations so often. the shallow knowledge problem was the reason the cyc project was set up, as lenat kept encountering problems where, after the initial success, you needed to be able to drill down to the deeper underlying reasons for something to make further progress, and that extra, deeper knowledge was not just lying around ready to be dumped into the system, so he decided to start collecting it. current ai, especially black box statistical ai, excels in areas where good enough most of the time is beneficial, and total garbage the rest of the time does not really matter. for literally every other type of ai problem you need layer on layer of feedback telling the lower levels that the answer they contributed was wrong, and preferably what the right answer was, so that they can get it right next time. this requires white box symbolic ai, as do various legal issues like copilot being an automated copyright infringement machine, or the question of who is legally liable when the ai kills someone.
    1
  444. Every branch is essentially a fork of the entire codebase for the project, with all of the negative connotations implied by that statement. In distributed version control systems, this fork is moved from being implicit, as in centralized version control, to being explicit. When two forks exist (for simplicity call them upstream and branch), there are only two ways to avoid having them become permanently incompatible. Either you slow everything down and make it so that nothing moves from the branch to upstream until it is perfect, which results in long lived branches with big patches, or you speed things up by merging every change as soon as it does something useful, which leads to continuous integration. When doing the fast approach, you need a way to show that you have not broken anything with your new small patch. The way this is done is with small, fast unit tests which act as regression tests against the new code; you write them before you commit the code for the new patch and commit them at the same time, which is why people using continuous integration end up with a codebase which has extremely high levels of code coverage. What happens next is you run all the tests, and when they pass, you know it is safe to commit the change. This can then be rebased and pushed upstream, which then runs all the new tests against any new changes, and you end up producing a testing candidate which could be deployed, and it becomes the new master. When you want to make the next change, as you have already rebased before pushing upstream, you can trivially rebase again before you start, and make new changes. This makes the cycle very fast, ensures that everyone stays in sync, and works even at the scale of the Linux kernel, which has new changes upstreamed every 30 seconds. In contrast, the slow version works not by having small changes guarded by tests, but by having nothing moved to upstream until it is both complete and as perfect as can be detected. As it is not guarded by tests, it is not designed with testing in mind, which makes any testing slow and fragile, further discouraging testing, and is why followers of the slow method dislike testing. It also leads to merge hell, as features without tests get delivered as a big code dump all in one go, which may then cause problems for those on other branches which have incompatible changes. You then have to spend a lot of time finding which part of this large patch with no tests broke your branch. This is avoided with the fast approach, as all of the changes are small. Even worse, all of the code in all of the long lived branches is invisible to anyone taking upstream and trying to do refactoring to reduce technical debt, adding another source of breaking your branch on the next rebase. Pull requests with peer review add yet another source of delay, as you cannot submit your change upstream until someone else approves your changes, which can take tens to hundreds of minutes depending on the size of your patch. The fast approach replaces manual peer review with comprehensive automated regression testing, which is both faster and more reliable. In return you get to spend a lot less time bug hunting. The unit tests and integration tests in continuous integration get you to a point where you have a release candidate which does all of the functions the programmer understood were wanted.
This does not require all of the features to be enabled by default, only that the code is in the main codebase, and this is usually done by replacing the long lived feature branch with short lived branches (short lived in the sense of time between merges) whose code is shipped but hidden behind feature flags, which also allows the people on other branches to reuse the code from your branch rather than having to duplicate it in their own. Continuous delivery goes one step further: it takes the release candidate output from continuous integration, runs all of the non functional tests to demonstrate a lack of regressions in performance, memory usage, etc, and then adds on top of this a set of acceptance tests that confirm that what the programmer understood matches what the user wanted. The output from this is a deployable set of code which has already been packaged and deployed to testing, and can thus be deployed to production. Continuous deployment goes one step further still and automatically deploys it to your oldest load sharing server, then uses the ideas of chaos engineering and canary deployments to gradually increase the load taken by this server while reducing the load on the next oldest server, until either all of the load has moved from the oldest code to the newest, or a previously unspotted problem is observed and the rollout is reversed. Basically, though, all of this starts with replacing the slow, long lived feature branches with short lived branches, which causes the continuous integration build to almost always have lots of regression tests passing, something which by definition cannot be done against code hidden away on a long lived feature branch that does not get committed until the entire feature is finished.
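A minimal sketch of the feature flag idea, with made up names (FLAGS, old_checkout, new_checkout) rather than any real flagging library: the new code is merged and shipped early so other branches can see and reuse it, but it stays dormant until the flag is switched on.

      FLAGS = {"new_checkout_flow": False}    # flipped to True once the feature is ready

      def old_checkout(cart):
          return sum(cart)                    # current behaviour stays the default

      def new_checkout(cart):
          return round(sum(cart) * 0.95, 2)   # e.g. a discount rule still being finished

      def checkout(cart):
          # the flag decides at runtime which path runs; both paths live in main
          if FLAGS["new_checkout_flow"]:
              return new_checkout(cart)
          return old_checkout(cart)

      print(checkout([10, 20]))               # 30: flag off, so the old path runs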
    1
  445. 1
  446. 1
  447. 1
  448. 1
  449. 1
  450. 1
  451. The reason the disabled are sceptical is because we have seen it before, and it always goes the same way. They start off declaring a war on fraud and benefit scroungers, when the combined fraud and error overpayment is only a couple of percent (currently 3.6% combined). They then do nothing to identify the 1.8% of fraud and error underpayment, or the 3.6% overpayment, and instead just start looking for excuses to slash the benefits budget, resulting in lots of people who need help getting thrown under the bus. Then, because these people who genuinely need help end up struggling, they either get forced into illegality or end up with mental health problems. Because of the increase in these problems, and in the scandals coming out about stopping benefits for people in genuine need, they have to try to patch up the system, but because they never have joined up thinking, they just make the problems worse. Finally they end up having to move back towards something similar to what they had before, but vastly inferior, causing more distress and suffering. Then they wait a couple of decades and start it all over again. You already have quadriplegics being thrown off benefits because they supposedly don't need any help, and double amputees being told they don't have any problem walking, even while they are on the waiting list for prosthetic legs. And it will get worse before it gets better. You do not fix a system with problems due to error by adding extra causes of error on top of them. You fix it by identifying the causes of the errors, engineering improvements to the system to spot the errors early rather than punishing people for genuine mistakes, and only then using the information gathered during this process to improve the process to catch fraud earlier. Adding people with lower levels of training to override health care specialists does not stop people being ill. Also, this country does suck at retraining the unemployed.
    1
  452. 1
  453. 1
  454. The big mistake I see in all ubi videos is that they don't actually understand the idea, and constrain it to mean a benefit paid to everyone in the country equally, which, while being one definition of ubi, is not the only one. If you look at the UK, you will see enough examples of variation to blow most of the criticism out of the water. If you take the idea of a national minimum wage as an example, all the same arguments were made about how to bring it in, but it was done. They did it by reframing the definitions to ones that make sense. First, they realized that you do not have to make it the same for everyone, so young people, who would be better served by getting higher education, are paid a lower rate. The other thing they did was to bring it in at poverty pay levels, and then have above inflation increases until it helped make work pay. This gave employers time to react. Ubi has the same misunderstandings. You could have a national income paid equally to everyone, and linking it to gdp seems like a good way of managing it, but it is not the only way to do it. This is due to a basic misinterpretation of the word universal. It means applying to every qualifying individual, not that every individual needs to qualify equally. In the UK we have a number of benefits which qualify under this definition. Child benefit is paid for qualifying children, to the custodial parent. Basic state pension is paid to every elderly person. But these benefits are paid at different rates. The things they have in common are that the criteria are easy to assess, are not rapidly changing like with means tested benefits and thus only need checking occasionally, and they pay the same amount to everyone who qualifies. If you split your ubi this way you get all of the benefits mentioned, but without the major downsides covered in these videos.
    1
  455. 1
  456. 1
  457. it clearly stated that the first email said there was a problem affecting the network, and when they turned up it was a meeting with a completely different department, sales, and there was no problem. there was also no mention of the enterprise offering being mandatory. at that point i would return to my company and start putting resiliency measures in place, with the intent to minimise exposure to cloudflare and prepare to migrate, while keeping the option to stay if they were not complete dicks. the second contact was about potential issues with multiple national domains, with a clear response that this is due to differing national regulations requiring it. the only other issue mentioned was a potential tos violation which they refused to name, and an immediate attempt to force a contract with a 120k price tag with only 24 hours notice and a threat to kill your websites if you did not comply. at that point i would have immediately triggered the move. on the legal view, they are obviously trying to force a contract, which others have said is illegal in the us, where cloudflare has its hardware based, so it is subject to those laws. by only giving 24 hours from the time the customer was informed it was mandatory, they are clearly guilty of trying to force the contract, and a claim against them is thus likely to win. if it can win on that, then their threat to pull the plug on the business at short notice in pursuit of an illegal act also probably makes them guilty of tortious interference, for which the customer would definitely get actual damages, which would cover loss of business earnings, probably reputational damages, probably all the costs of having to migrate to new providers, and legal costs. if i sued them, i would also go after not only cloudflare but the entire board individually, seeking to make them jointly and severally liable, so that when they tried to delay payment, you could go after them personally. the lesson is clear: for resiliency, always have a second supplier in the wings which you can move to on short notice, and have that move be a simple yes or no decision that can be acted upon immediately. by the same logic, don't get overly reliant on external tools, so that the business can keep working and mitigate the disaster if it happens. also keep onsite backups of any business critical information. most importantly, make sure you test the backups. at least one major business i know of did everything right, including testing the backup recovery process, but kept the only copy of the recovery key file on the desktop of one machine in one office, with the only backup of that key being inside the encrypted backups. this killed the business.
    1
  458. 1
  459. 1
  460. 1
  461. 1
  462. 1
  463. 1
  464. 1
  465. 1
  466. 1
  467. 1
  468. 1
  469. 1
  470. 1
  471. 1
  472. 1
  473. 1
  474. 1
  475.  @ansismaleckis1296  the problem with branching is that when you take more than a day between merges, it becomes very hard to keep the build green, and it pushes you towards merge hell. the problem with code review and pull requests is that when you issue the pull request and then have to wait for code review before the merge, it slows everything down. this in turn makes it more likely that the patches will get bigger, which take longer to review, making the process slower and harder, and thus more likely to miss your 1 day merge window. the whole problem comes from the question of what the purpose of version control is, and it is to do a continuous backup of every change. however, this soon turned out to be of less use than expected, because most backups ended up in a broken state, sometimes going months between releasable builds. this made most of the backups of very little value. the solution to this turned out to be smaller patches merged more often, but pre merge manual review was found not to scale well, so a different solution was needed, which turned out to be automated regression tests against the public api, which guard against the next change breaking existing code. this is what continuous integration is: running all those tests to make sure nothing broke. the best time to write the tests turned out to be before you wrote the code, as then you have checked that the test can both fail and pass. this tells us that the code does what the developer intended it to do. tdd adds refactoring into the cycle, which further checks the test to make sure it does not depend on the implementation. the problem with not merging often enough is that it breaks refactoring: either you cannot do it, or the merge for the huge patch needs to manually apply the refactoring to the unmerged code. continuous delivery takes the output from continuous integration, which is all the deployable items, and runs every other sort of test against it, trying to prove it unfit for release. if it fails to find any issues, then it can be deployed. the deployment can then be done using canary releasing, with chaos engineering being used to test the resilience of the system, performing a rollback if needed. it looks too good to be true, but it is what is actually done by most of the top companies in the dora state of devops report.
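A minimal sketch of a regression test written against the public api rather than the implementation (the total_price function and its test are made up for illustration): because the test only touches public behaviour, the internals can be refactored and the same test keeps guarding the code.

      def total_price(prices):
          # earlier implementation: an explicit loop accumulating a total
          # total = 0
          # for p in prices:
          #     total += p
          # return total
          return sum(prices)    # refactored implementation, identical public behaviour

      def test_total_price_adds_all_items():
          # the regression test only knows the public api, not the internals
          assert total_price([2, 3, 5]) == 10

      test_total_price_adds_all_items()
      print("still green after the refactor")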
    1
  476. 1
  477. alpha, beta, and releasable date back to the old days of pressed physical media, and their meanings have changed somewhat in the modern world of online updates. originally, alpha software was not feature complete, and was also buggy as hell, and thus was only usable for testing which parts worked and which parts didn't. beta software occurred when your alpha software became feature complete, and the emphasis moved from adding features to bug fixing and optimisation, but it was usable for non business critical purposes. when beta software was optimised enough, with few enough bugs, it was then deemed releasable, and sent out for pressing in the expensive factory. later, as more bugs were found by users and more optimisations were done, you might get service packs. this is how windows 95 was developed, and it shipped with 4 known bugs, which hit bill gates at the product announcement demo to the press, after the release had already been printed. after customers got their hands on it, the number of known bugs in the original release eventually went up to 15,000. now that online updates are a thing, especially when you do continuous delivery, the meanings are completely different. alpha software on its initial release is the same as it ever was, but now the code is updated using semantic versioning. after the initial release, both the separate features and the project as a whole have the software levels mentioned above. on the second release, the completed features of version 1 have already moved into a beta software mode, with ongoing bug fixes and optimisations. the software as a whole remains in alpha state until it is feature complete, and the previous recommendations still apply, with one exception. if you write code yourself that runs on top of it, you can make sure you don't use any alpha level features. if someone else is writing the code, there is no guarantee that the next update to their code will not depend on a feature that is not yet mature, or even implemented, if the code is a compatibility library being reimplemented. as you continue to update the software, you get more features, and your minor version number goes up. bug fixes don't increase the minor number, only the patch number. in general, the project is moving closer to being feature complete, but in the meantime, the underlying code moves from alpha to beta, to maintenance mode, where it only needs bug fixes as new bugs are found. thus you can end up with things like reactos, which takes the stable wine code, removes 5 core libraries which are os specific and which it implements itself, and produces something which can run a lot of older windows programs at least as well as current wine, and current windows. however, it is still alpha software because it does not fully implement the total current windows api. wine, on the other hand, is regarded as stable, as can be seen from the fact that its proton variant used by steam can run thousands of games, including some that are quite new. this is because those 5 core os specific libraries do not need to implement those features, only translate them from the windows call to the underlying os calls. the software is released as soon as each feature is complete, so releasable now does not mean ready for an expensive release process, but instead means that it does not have any major regressions as found by your ci and cd processes.
the software as a whole can remain alpha until it is feature complete, which can take a while, or if you are writing something new, it can move to beta as soon as you decide that enough of it is good enough, and when those features enter maintenance mode, it can be given a major version increment. this is how most projects now reach their next major version, unless they are a compatibility library. so now the code is split into 2 branches, stable and experimental, with code moved to stable when ci is run, but not turned on until it is good enough, so you are releasing the code at every release, but not enabling every feature. so now a project is alpha (which can suddenly crash or lose data), beta (which should not crash but might be slow and buggy), or stable (where it should not be slow, should not crash, and should have as few bugs as possible). with the new way of working, alpha software is often stable as long as you don't do something unusual, in which case it might lose data or crash. beta software now does not usually crash, but can still be buggy, and the stable parts are ok for non business critical use, and stable software should not crash, lose data, or otherwise misbehave, and should have as few known bugs as possible, thus making it usable for business critical use. a different way of working, with very different results.
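A minimal sketch of the semantic versioning rules referred to above (the bump function is made up for illustration, not any real packaging tool): breaking changes move the major number, new backwards compatible features move the minor number, and bug fixes only move the patch number.

      def bump(version, change):
          # version is a (major, minor, patch) tuple;
          # change is one of "breaking", "feature", or "bugfix"
          major, minor, patch = version
          if change == "breaking":          # incompatible api change
              return (major + 1, 0, 0)
          if change == "feature":           # backwards compatible new feature
              return (major, minor + 1, 0)
          return (major, minor, patch + 1)  # bug fix only

      v = (1, 4, 2)
      print(bump(v, "bugfix"))    # (1, 4, 3)
      print(bump(v, "feature"))   # (1, 5, 0)
      print(bump(v, "breaking"))  # (2, 0, 0)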
    1
  478. 1
  479. 1
  480. 1
  481. 1
  482. 1
  483. 1
  484. 1
  485. There are a lot of unrelated issues being mixed up in this thread, some of which have simple but politically unpopular solutions, and some of which are complex but the politicians want simplistic solutions, which don't work. First is migration. You can solve this in an easy but unpopular way: just let everyone into the country, except those on criminal databases shared with other countries. At that point you have got rid of all of the visa bureaucracy and dealt with most of the illegals, who will now be legal. Totally unpopular, but proven to work as part of a broader system. Then you have legal migrants, of which you have many more. If they want to come for a holiday for six months or a year and can afford it, they can already do that now. You just make them give details of their country of origin when entering the country, and bounce them if they cannot, just as you do now. At this point you have a database of every legal migrant in the country, and where they came from. If they commit crimes and get convicted, they get deported, the same as now, but more easily, as you know where they came from. This deals with the foreign crooks. At this point your database has a list of foreign nationals in good standing, and they can start earning entitlements. You start with emergency care for all people in the country, which gets rid of the need to ask silly questions about their insurance status while you are trying to save their life. It doesn't work anyway. If they are just being tourists, they pay vat, tax on fuel either for a hire car or indirectly through the use of taxis, and other services, so they are contributing. If they decide they want to work in the service industry for extra spending money to help pay for luxuries, let them, but force their boss to register them for a national insurance number, and let them have one. They will start on the emergency rate, which is high, and gradually migrate to a rate correct for the work they are doing, and in the meantime they will be making national insurance contributions to entitle them to gp care. Their contributions can be used to decide which services they can get free and which they need to pay for, and can go towards their right to become a citizen. After ten years of work, you should have built up enough good standing that you can just become a citizen by passing the language requirements and a few other necessary things. At this point you have converted all the migrants, both legal and illegal, into productive members of the community and deported the crooks who got caught, and most crooks get caught eventually. As for people already living here, you can give them points for spending years here as a child, which will then entitle them to services. If you make it clear in their national insurance communications that they are on a temporary foreign national insurance number, and let them apply to convert to citizenship at any time after they have built up enough points, you automatically eliminate scandals like the windrush deportations, where people just did not know they were not citizens. If you are foreign and become out of work, and do not have enough points, you have to go home, and in the meantime you can deduct money from their emergency national insurance contributions to build up a bond to pay for them to be sent home even if they run out of money. If they go home by themselves, it reverts to a fund to top up the bonds of those who have not yet been here long enough. This also means that crooks with a job end up paying for their own deportation.
As to health tourists, you can get them to go home as soon as they are well enough to travel. At this point there are no illegals, any illegals are crooks by definition, but they can convert to residents in good standing simply by getting a job, and can become citizens simply by having a job for long enough. You can deal with foreign wives by changing national insurance from the employee to the couple, with both partners' contributions paying for both partners' entitlements. This also fixes the problem of high paid husbands divorcing their wives near retirement and leaving the wife with no entitlements. Now you get to those people who become ill. Let doctors make the decisions, as they have to in order to provide treatment, but deal with it differently. At the moment, if I come down with something which affects my ability to drive, the doctor has to notify the dvla, who then make a choice based upon the real data. Just extend that to cover conditions which affect your ability to work, and notify the dwp. This gets rid of firms like atos milking the system so the government can punish the ill. You can then implement a system like working tax credit, but done properly, where you get the points as your circumstances change, and then they pay you the right amount. This gets rid of the 1.8% of underpayments due to fraud and error. It can also get rid of overpayments if you require the person on benefit to notify a change in circumstances, as you have to with most benefits now. The overpayments are at 3.6%, due to fraud and error, and as you have half that amount getting underpaid, you can assume that half of the overpayment is due to error, as most people will not commit fraud to get less money. At this point you are getting most of your benefits without means testing, which is provably a lot cheaper. At this point you have implemented universal basic income as a side effect of dealing with the mess that migration and benefits are currently in, and it is affordable, as you can make it taxable like basic state pension and child benefit, and just adjust the allowances to make it revenue neutral, just as was the case with working tax credit. By this point, people accrue entitlements by working or being in education, married couples end up with fair contributions to their pensions, sickness benefits are decided fairly by medical need and awarded automatically, and you have entitlements accruing gradually based on contributions. Of course you do have to fund health care properly, so that people get treated in a timely fashion and don't just keep getting sicker, and you have to fund retraining for meaningful local skills shortages for the unemployed and those being released from prison, but the whole thing will work better than the current system, without the stigma, and mostly for less money.
    1
  486. 1
  487. 1
  488. 1
  489. 1
  490. 1
  491. 1
  492. 1
  493. 1
  494. 1
  495. 1
  496. 1
  497.  @NoEgg4u  it was replying to your comment. there were a few of the 8.5 million boxes where the operator could go into safe mode, do the trivial fix, and restart the machine. after the machine was restarted, standard recovery options would work, and even crowdstrike would autoupdate with the correct fix, once they started shipping it after 90 minutes. when there is no operator, because your arrivals board is 20 feet in the air, the system has to spot the buggy driver, disable it, and reboot without it. windows generally does not do this. to fix it, you have to get someone there to fix it, and they have to be capable and trustworthy enough to undo the security which is there for a reason, do the trivial fix, and then put all the security back. this is why the disaster took so long to fix. this patch went out as a live patch to the configuration files of every version of this software, with minimal testing, which then did not work. this means running 1 or 2 releases behind won't stop it. the only way to stop it is for os vendors to put something in place to catch and isolate buggy kernel drivers, so the machine won't boot loop, and for corporate customers to insist on the vendor not doing mandatory live pushes of configuration updates, often over the complaints of their insurance companies. the lesson you drew in your post was that the machines were down because they had no backups, but this was not the issue. they were down because the update crashed the kernel, and the machines then got stuck in a boot loop. the crash was down to crowdstrike. the boot loop was down to microsoft not doing something to prevent it, even after having seen it happen in the wild multiple times.
    1
  498. 1
  499. 1
  500.  @NoEgg4u  again, and you do not seem to be hearing this, these were locked down and or inaccessible machines. their defense plan, often at the insistence of their insurance companies, was to have a competent company use best practice procedures to do antivirus style protection of their machines without shipping garbage. a lot of them even kept these machines running the previous version of the driver until testing and live operations proved the new driver was not rubbish. what crowdstrike did was ship a binary signature file with no effective testing done prior to release, as a live patch to all versions, which then did not get tested either, thereby subverting all the mitigation policies the companies had in place. as this is contrary to even standard practice in the 1990s, nobody expected a company operating at kernel level to do this. hence the outrage. when machines have to be either locked down or inaccessible, for which there are multiple use cases, the problem is getting out of the boot loop. after 90 minutes, the corrected patch was available which fixed it, if you could get the machine to boot. they discovered how to manually fix it a while later, but that only worked on standard desktops and servers, where "just boot into safe mode" was an option. your preferred choice of restore from a backup has two problems. 1, it ignores the fact that booting to safe mode just does not work for these types of use cases. 2, it is massive overkill which does not address any of the issues arising out of failing to address point 1 for hours, like united airlines planes and pilots not being where they needed to be for the next scheduled flights due to the cancelled flights. these machines are locked down for a reason. the amount of wilful blindness and basic lack of care needed at crowdstrike to even get to this point probably meets the level of gross negligence, let alone the lower standard of negligence needed to make the weasel words in their terms of service invalid. there is a case that as this was not the first instance of a boot loop problem reported to microsoft, and they did not fix it then, they too could be on the hook for damages. the comparison for your stance is like going to a restaurant and expecting the customers to check that the chef did not pee on the fish just before handing it to the waiter. customers at this level of service and at the prices involved have a reasonable expectation of a minimum standard of care prior to shipping a patch. crowdstrike, by their own statements, took less care than someone shipping a free web app, which is a level that leaves everyone with even a minor claim to competence saying what the ####.
    1
  501.  @NoEgg4u  so you are saying that machines like the airport flight information screens high in the air so that people can see them, and the card payment readers like those in my local pub, should need to have a keyboard and mouse to hand so that some random person can take them into safe mode. what about the e-passport machines at the airport, the car parking machines in the street, the hole in the wall cash machines, all the embedded windows devices at the hospital, etc? for all of these there are valid reasons why you won't or cannot plug in a keyboard, video and mouse (where would you even do it on a card reader), or why you want to limit access (health and safety for the flight information screens, security for the hole in the wall cash machines, and where do you even start for embedded medical devices). saying that they should just be a globally accessible windows desktop just does not work in those cases (who is going to walk around with a windows desktop under their arm for 24 hours to monitor their heart condition for diagnostics). the examples i have given are just some of the cases i have personally seen information about being hit with this problem, but there are many more equally valid cases if you go looking for them. while i question the suitability of windows for some of these applications, given its well documented list of broken updates, the fact is that it is what the manufacturers of those devices have decided to provide. then in a lot of cases the insurance company has mandated the use of software like crowdstrike due to the sector they are working in. for a lot of the companies involved, the entire reason for using someone like crowdstrike is so that these machines do not go down. they also run the previous driver version on these machines as a means of implementing an n-1 or n-2 update policy. by pushing a completely untested mandatory live update which crashed the kernel, crowdstrike subverted and bypassed many of the defense policies these companies use to prevent exactly such problems. running the previous driver so you don't get killed by a bad update, not deploying the next driver until it has gone through testing and been run by others for a while, only doing updates at times and in places where it makes sense to do so: all these policies serve to maximise resilience, and by pushing an untested, broken, mandatory live update, crowdstrike broke all of them. this is why there will be lots of lawsuits incoming, and given what they have already said publicly, i am sure that they will lose a lot of them for gross negligence, which their terms of service cannot exempt them from. as for your idea that you should just keep around enough skilled and vetted engineers doing nothing, just in case someone decides to act as stupidly as crowdstrike did, taking down 40,000 machine networks that are designed to not go down, that is just not economically viable. no business can afford to keep around more than a few percent of expensive experts who are in limited supply, on the off chance that you might need them for a day or two once every few years. you have to manage your business for the average day, planning to minimise the need for extra services. lots of these companies did, and crowdstrike invalidated those plans with their mandatory broken live update.
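A minimal sketch of the n-1 / n-2 update policy mentioned above (the release list and function are hypothetical, not crowdstrike's actual mechanism): the fleet only ever runs a release that is at least one or two versions behind the newest, so a brand new, unproven release cannot take everything down at once.

      RELEASES = ["7.14", "7.15", "7.16"]   # known releases, oldest to newest

      def allowed_version(releases, lag=1):
          # return the newest release that respects an n-minus-lag policy,
          # or None if nothing is old enough to be trusted yet
          if len(releases) <= lag:
              return None
          return releases[-(lag + 1)]

      print(allowed_version(RELEASES, lag=1))  # 7.15 under an n-1 policy
      print(allowed_version(RELEASES, lag=2))  # 7.14 under an n-2 policy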
    1
  502. 1
  503. 1
  504. 1
  505. 1
  506. 1
  507. 1
  508. 1
  509. 1
  510. 1
  511. 1
  512. 1
  513. 1
  514. 1
  515. 1
  516. 1
  517. labour did not win anything; everyone else had their vote catastrophically collapse. any other viewpoint from labour will just get them kicked out when everyone else gets their act together. labour heartlands are cities and areas normally controlled by nationalist parties, and their failure to address this, combined with toxic policies and politicians, is what gave us 14 years of tory government. nationalist parties have over promised and under delivered, combined with some corruption scandals, leading them to unelectability. similar problems doomed the tories in this election. changing the electoral system won't fix anything, as all governmental systems suffer from major issues and occasional perverse results, which occur when the situation is exactly wrong. you just change which problem you prefer to tolerate. first past the post gives strong government, but under represents minority views, and is weak when the second party is weak. proportional representation better reflects the popular vote, but in practice produces short termist coalition governments which often collapse before the term ends. single transferable vote makes every vote count, but tends to give power to the least disliked. everything else breaks the link with the constituencies, giving power to party lists, and making politicians even more removed from the electorate. that does not even cover the problem of getting enough people to agree on what to move to, as opposed to just agreeing on what they dislike about the current system.
    1
  518. 1
  519. 1
  520. 1
  521.  @noblebearaw  it used all the points in all the images to come up with a set of weighted values which together enabled a curve to be drawn, with all the images in one set on one side of the curve, and all the images in the other set on the other side. that is the nature of statistical ai: it does not care about why it comes to the answer, only that the answer fits the training data. the problem with this approach is that you are creating a problem space with as many dimensions as you have free variables, and then trying to draw a curve in that space, but there are many curves that fit the historical data, and you only find out which is the right one when you provide additional data which varies from the training data. symbolic ai works in a completely different way. because it is a white box system, it can still use the same statistical techniques to determine the category which the image falls into, but this acts as the starting point. you then use this classification as a basis to start looking for why it is in that category, wrapping the statistical ai inside another process which takes the images fed into it, and uses humans to spot where it got it wrong, and to look for patterns of wrong answers which help identify features within that multi dimensional problem space which are likely to match one side of the line or the other. this builds up a knowledge graph analogous to the structure of the statistical ai, but as each feature is recognised, named, and added to the model, it adds new data points to the model, with the difference being that you can drill down from the result to query which features are important, and why. this also provides chances for extra feedback loops not found in statistical ai. if we look at compiled computer programs as an example, using c and makefiles to keep it simple, you would start off by feeding the statistical ai the code and the makefile, and feed it the result of the ci / cd pipeline, determining whether the change just made was releasable or not. eventually, it might get good at predicting the answer, but you would not know why. the code contains additional data, implicit within it, which provides more useful answers. each step in the process gives usable additional data which can be queried later. was it a change in the makefile which stopped it building correctly? did it build ok, but segfault when it was run? how good is the code coverage of the tests on the code which was changed? does some test fail, and is it well enough named that it tells you why it failed? and so on. also, a lot of these failures will give you line numbers and positions within specific files as part of the error message. if you are using version control, you also know what the code was before and after the change, and if the error report is not good enough, you can feed the difference into a tool to improve the tests so that it can identify not only where the error is, but how to spot it next time. basically, you are using a human to encode information from the tools into an explicit knowledge graph, which ends up detecting that the code got it wrong because the change on line 75 of query.c returns the wrong answer from a specific function when passed specific data, because a branch which should have been taken to return the right answer was not taken, because the test on that line had one less = sign than was needed at position 12, making it an assignment rather than a comparison, so the test could never pass.
it could then also suggest replacing the = with == in the new code, thus fixing the problem. none of that information could be obtained from the statistical ai, as any features in the code used to find the problem are implicit in the internal model, but it contains none of the feedback loops needed to do more than identify that there is a problem. going back to the tank example, the symbolic ai would not only be able to identify that there was a camouflaged tank, but point out where it was hiding, using the fact that trees don't have straight edges, and then push the identified parts of the tank through a classification system to try and recognise the make and model of the tank, thus providing you with the capabilities and limitations of the identified vehicle as well as its presence and location. often when it gets stuck, it resorts to the fallback option of presenting the data to the human and saying "what do you know in this case which i don't", adding that information explicitly into the knowledge graph, and trying again to see if it altered the result.
    1
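a minimal sketch in c of the kind of defect described above; the file name, function and values are hypothetical stand-ins for the query.c example, and most compilers would in fact warn about the assignment-in-condition:

```c
/* hypothetical stand-in for the query.c defect described above:
   an assignment where a comparison was intended, so the branch that
   should return the right answer is never taken. */
#include <stdio.h>

/* buggy version: "=" assigns 0 to status, which is false, so the
   success branch can never be reached */
static int check_buggy(int status) {
    if (status = 0) {          /* should be: status == 0 */
        return 1;              /* the "right answer" branch, never taken */
    }
    return 0;
}

/* fixed version: "==" actually tests the value */
static int check_fixed(int status) {
    if (status == 0) {
        return 1;
    }
    return 0;
}

int main(void) {
    printf("buggy(0)=%d fixed(0)=%d\n", check_buggy(0), check_fixed(0));
    return 0;
}
```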
  522. 1
  523. the problem with the idea of using statistical ai for refactoring is that the entire method is about producing plausible hallucinations that conform to very superficial correlations. to automate refactoring, you need to understand why the current code is wrong in this context. this is fundamentally outside the scope of how these systems are designed to work, and no minor tweaking can remove the lack of understanding from the underlying technology. the only way around this is to use symbolic ai, like expert systems or the cyc project, but that is not where the current money is going. given the current known problems with llm generated code, lots of projects are banning it completely. these issues include: exact copies of the training data right down to the comments, leaving you open to copyright infringement; code with massive security bugs, due to the training data not being written to be security aware; hard to test code, due to the training data not being written with testing in mind; and suggested code that is identical to code under a different license, leaving you open to infringement claims. when the code is identified as generated, it is not copyrightable, but if you don't flag it up, the liability for infringement moves to the programmer. the only way to fix generating bad code is to completely retrain from scratch, which does not guarantee fixing the problem and risks introducing more errors. these are just some of the issues with statistical methods; there are many more.
    1
  524. 1
  525. 1
  526. There is some confusion about branches. Every branch is essentially a fork of the entire codebase from upstream. In centralized version control, upstream is the main branch, and everyone working on different features has their own branch which eventually merges back into the main branch. In decentralized version control, which repository counts as the main branch is a matter of convention, not a feature of the tool, but the process works the same. When you clone upstream, you still get a copy of the entire codebase, but you do not have to bother creating a name for your branch, so people work in the local copy of master. They then write their next small commit, add tests, run them, rebase, and assuming the tests pass, push to an online copy of their local repository and generate a pull request. If the merge succeeds, when they next rebase the local copy will match upstream, which will have all of their completed work in it. At this point you have no unsynchronized code in your branch, so you can delete the named branch, or if distributed, the entire local copy, and you don't have to worry about it. If later you need to make new changes you can either respawn the branch from main / upstream, or clone from upstream, and you are ready to go with every upstream change. If you leave the branch inactive for a while, you have to remember to do a rebase before you start your new work to get to the same position. It is having lots of unsynchronized code living for a long time in the branch which causes all of the problems, because by definition anything living in a branch is not integrated and so does not enjoy the benefits granted by being merged. This includes not having multiple branches making incompatible changes, and not finding out late that things broke because someone did a refactoring and your code was not covered, leaving you to fix that problem.
    1
  527. 1
  528. 1
  529.  @ContinuousDelivery  this is exactly the correct analogy to use. In science what you are doing is crowd sourcing the tests based upon existing theories and data, and using the results to create new tests, data and theories. Peer review is then the equivalent of running the same test suite on different machines with different operating systems and library versions to see what breaks due to unspecified assumptions and sensitivity to initial conditions. This then demonstrates that the testing is robust, and any new data can be fed back into improving the theory. And as with science, the goal is falsifiability of the initial assumptions. Of course the other problem is that there is a big difference between writing code and explaining it, and people are crap at explaining things they are perfectly good at doing. Testing is just explaining it with tests, and the worst code to learn the skill on is legacy code with no tests. So people come along and try to fit tests to legacy code, only to find that the tests can only be implemented as flaky and fragile tests due to the code under test not being designed for testability, which just convinces them that testing is not worth it. What they actually need is to take some tdd project which evolved as bugs were found, delete the tests, and compare how many and what types of bugs they find as they step through the commit history. If someone was being really nasty they could delete the code, reimplement it with a bug for every test until they got code with zero passes, and then see what percentage of bugs they found when they implemented their own test suite.
    1
  530.  @sarthakdash3798 it might be, but the point it makes comes straight from the 1970s ibm research on how most cisc chips contain instructions which are often both slow and buggy, and how optimising the compiler to generate fewer of these instructions, and thereby use only a smaller part of the instruction set, actually produced better, faster and less buggy code. cisc came about because we did not have either the knowledge to build, or the resources to run, such advanced compilers. risc came about because cisc is a nightmare from so many different angles that people thought it a good idea to try a different approach, and it worked. the gpu issue is different. both cisc and risc use a single stream of instructions working on a single stream of data, sisd for short. gpus still use a single stream of instructions, but every point has different data, or simd, which has advantages for some workloads. then you have the third case, multiple instruction streams with multiple data streams, which was researched by danny hillis and others in the 1980s. this is basically multicore with advanced inter core communications, and cisc is really bad at it compared to risc just due to the extra size and power needs per core, which is why things like threadrippers need something that sounds like a jet engine on top to stop them overheating. again, smp works well for some workloads, not so well for others, which is why cisc designers are making chips with a mixture of slow efficient cores and fast power hungry ones, an approach not needed with risc.
    1
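as a rough sketch of the difference, here is the same loop written in the three styles in c; this assumes a compiler with openmp support (for example gcc with -fopenmp), and the function names are purely illustrative:

```c
/* illustrative sketch of sisd vs simd vs mimd on the same workload,
   assuming a compiler with openmp support */
#include <stddef.h>

/* sisd: one instruction stream, one data element per step */
void scale_sisd(float *a, float s, size_t n) {
    for (size_t i = 0; i < n; i++)
        a[i] *= s;
}

/* simd: one instruction stream applied to many data elements per step,
   as on a gpu or a vector unit */
void scale_simd(float *a, float s, size_t n) {
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        a[i] *= s;
}

/* mimd: several cores, each with its own instruction and data stream,
   which is what multicore smp gives you */
void scale_mimd(float *a, float s, size_t n) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        a[i] *= s;
}
```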
  531. 1
  532. 1
  533. 1
  534. 1
  535. 1
  536. 1
  537. Tdd comes with a number of costs and benefits, and so does not doing tdd or continuous integration. The cost of doing tdd is that you move your regression tests to the front of the process and refactor as you go, which can cost up to 35 percent extra in time to market. What you get back is an executable specification anyone can run to reimplement the code in the form of tests, a set of code designed to be testable with very few bugs, and a combination that is optimized for doing continuous integration. You also spend very little time on bug hunting. It also helps with areas that are heavily regulated, as you can demonstrate on an ongoing basis that the code meets the regulations. All of this helps with getting customers to come back for support, and for repeat business. Not doing tdd also comes with benefits and costs. The benefit is mainly that your initial code dump comes fast, giving a fast time to market. The costs are significant. As you are not doing incremental testing, the code tends to be hard to test and modify. It also tends to be riddled with bugs which take a long time to find and fix. Because it is hard to modify, it is also hard to extend, and if someone else has to fix it, it can sometimes be quicker to just reimplement the whole thing from scratch. This tends to work against getting support work and repeat business. As for the snowflake code no one will touch, it will eventually break, at which point you end up having to do the same work anyway, but on an emergency basis with all the costs that implies. Testing is like planting a tree: the best time to do it is a number of years ago, the second best time is now. The evidence for incremental development with testing is in, in the dora reports. Not testing is a disaster. Test after gives some advantages initially, while costing more, but rapidly plateaus. Test first costs a little more than comprehensive test after, but as more code is covered you get an ever accelerating speed of improvements and ease of implementation of those improvements, and it is very easy for others to come along and maintain and expand the code, assuming they don't ask you to do the maintenance and extensions.
    1
  538. 1
  539. 1
  540. 1
  541. 1
  542. 1
  543. 1
  544. 1
  545.  @coversine479  no, he meant both. the idea of doing the study was fine, but there are a number of ethical and technical steps that should be taken prior to starting it which they completely failed to even consider. the first of which is: should we even do it, and if so, what rules should we set up? the standard way to do this is for the university to look at the size of the project, and see if it is big enough to absorb any potential harm caused by the study, and to document the potential harm prior to beginning the study so as to minimise it when setting the rules of engagement. they did not do this. as this was a code study, the next step should have been to find someone connected to the project who did not do code review who could be a point of contact and potentially keep a full audit trail of all the submissions. they did not take either step, as far as i have been able to discern. this is what pissed off the devs, because having discovered someone looking like a bad actor, and traced them back to the university, it was then impossible for a while to determine if it was student or faculty, and if this was a one off or systematic. this is what caused the fallout. yes, they blocked the gmail account, but they should then have been able to ask the developer what was going on, and got a reply of: here is what we were doing, these people knew about it, and here is every patch involved. they could not do any of that, so the university got blocked until that information could be independently created and confirmed, at which time the university got unblocked. they implemented the study protocols so badly that they were not only technically bad and ethically questionable, but due to hacking being illegal to some extent in most countries, their behaviour skirted around being criminal. all of these problems would have been caught if a proper review had been done by the university legal and ethics board prior to starting the project. not doing so not only slimed themselves, but brought the university into disrepute for allowing it to happen.
    1
  546. 1
  547. 1
  548. I doubt it, but you do not need them. If you look at history you can see multitudes of examples of new tech disrupting industries, and project that onto what effect real ai will have. Specialisation led us away from being serfs, automation removed horses as primary power sources and changed us from working near 18 hour days seven days per week towards the current 40 hour 5 day standard. Mechanisation also stopped us using 98 percent of the population for agriculture, moving most of them to easier, lower hour, better paying work. This led to more office work, where word processors and then computers killed both the typing pool and the secretarial pool, as bosses became empowered to do work that used to have to be devolved to secretaries. As computers have become more capable they have spawned multiple new industries with higher creative input, and that trend will continue, with both ai and additive manufacturing only speeding up the process. The tricky bit is not having the industrial and work background change, but having the social, legal and ethical background move fast enough to keep up. When my grandfather was born, the majority of people still worked on the land with horses, we did not have powered flight, and the control systems for complex mechanical systems were cam shafts and simple feedback systems. When I was born, we had just stepped on the moon, computers had less power than a modern scientific calculator app on your smartphone, and everyone was trained at school on the assumption of a job for life. By the time I left school, it became obvious that the job for life assumption had been on its way out since the early seventies, and we needed to train people in school for lifelong learning instead, which a lot of countries still do not do. By the year 2000, it became clear that low wage low skilled work was not something to map your career around, and that you needed to continually work to upgrade your skills so that when you had to change career after less than 20 years, you had options for other, higher skilled and thus higher paid employment. Current ai is hamstrung by the fact that the companies developing it are so pleased by the quantity of available data to train on that they ignore all other considerations, and so the output is absolutely dreadful. If you take the grammarly app or plug-in, it can be very good at spotting when you have typed in something which is garbage, but it can be hilariously bad at suggesting valid alternatives which don't mangle the meaning. It is also rubbish at the task given to schoolchildren of determining things like whether you should use which or witch, or their, there or they're. Copilot makes even worse mistakes, as you use it wanting quality code, but the codebases it was trained upon are written by programmers with less than 5 years experience, due to the exponential growth of programming giving a doubling of the number of programmers every 5 years. It also does nothing to determine the license the code was released under, thereby encouraging piracy and similar legal problems, and even if you could get away with claiming that it was generated by copilot and approved by you, it is not usually committed to version control that way, leaving you without an audit trail to defend yourself. To the extent you do commit it that way, it is not copyrightable in the us, so your company's lawyers should be screaming at you not to use it for legal reasons.
Because no attempt was made as a first step to create an ai to quantify how bad the code was, the output is typically at the level of the average inexperienced programmer, so again, it should not be accepted uncritically, as you would not do so from a new hire, so why let the ai contribute equally bad code? The potential of ai is enormous, but the current commercial methodology would get your project laughed out of any genuinely peer reviewed journal as anything but a proof of concept, and until they start using better methods with their ai projects there are a lot of good reasons to not let them near anything you care about in anything but a trivial manner. Also, as long as a significant percentage of lawmakers are as incompetent as your typical maga republican representative, we have no chance of producing a legal framework which has any relationship to the needs of the industry, pushing development to less regulated and less desirable locations, just as currently happens with alternative nuclear power innovations.
    1
  549. 1
  550. 1
  551. 1
  552. 1
  553. 1
  554. 1
  555. like any country, they have the right to make their own decisions based on their own values, along with the need to take responsibility for the consequences of those choices. you want to lock people up for 6 years for refusing to deny their sexual orientation? that is fine, but then you have to cope with the more developed countries warning their own citizens that it is not safe to go there. on top of that you have to cope with the fact that at least 10 percent of the population in the parts of the world where it is safe to talk about it are open about it. these 10 percent will then not come to your country to be doctors, nurses, teachers, engineers, or to do any of the jobs that you want them to do to develop your country, and neither will their family members, who might face 6 years in jail for not denying the status of a family member. this will affect your speed of development, which is the price for your choice. it might not be the priority of your politicians now to deal with the problems caused for those from abroad who have expertise you need, but how long will it be before the side effects of these choices come back to bite you in your wallets? how much slower than your neighbours can you afford to be, and for how long, before you are forced by need to be a little less strict about those born abroad? if you need convincing of the price of not playing well with others, just look at the financial impact on russia of trying to say we can continue perfectly well without everyone else. look at the cost to china because their courts bend over backwards to favour locals over foreign businesses. we live in an interconnected world, and it is no longer possible to ignore this fact.
    1
  556. 1
  557. 1
  558. 1
  559. 1
  560. 1
  561. 1
  562. 1
  563. 1
  564. 1
  565. 1
  566. 1
  567. 1
  568.  @Me__Myself__and__I  no, i was not using it to show my ignorance, but to give a clear example of how the black box nature of the system leaves you vulnerable to the problem that you cannot know how it got the result, and that functional equivalents of the same issue are inherent to the black box nature of the solution. almost by definition, llms specifically, and black box ai more generally, have the issue that literally the only way to handle the system getting wrong answers is to surround it with another system designed to recognise previous wrong answers and return the result it should have returned in the first place, thereby bypassing the whole system for known queries with bad answers, but removing all mechanisms to update the system to get smarter so as to not only avoid the known bad, but reduce the number of unknown bad. it also has an issue of the results being poisoned by bad training data, but my point is that the difficulty of detecting when this has happened, combined with the inability to fix the issues, fundamentally compromises the usefulness of such systems for any problems which really matter, as for those problems you typically need to know not only that it is right, but why it is right, and you need to know it fast enough for it to make a difference. while i am a fan of ai done well, too often it is not. not only do you need the right type of ai for the right problem, but for non trivial problems it needs to be able to give and receive feedback about what worked and what did not. black box ai leaves you with the only answer to "why" being "because the authority in the form of the ai said so". i don't think that is a good enough answer for most problems, and it really is not for any number of jobs where you might later need to justify not only what you did, but why.
    1
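a minimal sketch of the wrap-and-override workaround described above; blackbox_answer() is a hypothetical stand-in for the model, and nothing in the wrapper makes the underlying system any smarter:

```c
/* sketch of wrapping a black box model with a table of queries it is
   known to get wrong, returning the corrected answer instead */
#include <string.h>

/* stand-in for the opaque model call */
static const char *blackbox_answer(const char *query) {
    (void)query;
    return "plausible but unverified answer";
}

struct override {
    const char *query;      /* query previously seen to go wrong */
    const char *answer;     /* the answer it should have given */
};

/* grows every time a human spots a new bad answer */
static const struct override known_bad[] = {
    { "capital of australia", "canberra" },
    { "2 + 2",                "4" },
};

const char *wrapped_answer(const char *query) {
    for (size_t i = 0; i < sizeof known_bad / sizeof known_bad[0]; i++)
        if (strcmp(query, known_bad[i].query) == 0)
            return known_bad[i].answer;   /* bypass the model entirely */
    return blackbox_answer(query);        /* otherwise trust the model */
}
```

note that a near-identical query still falls through to the model and can still be wrong, which is exactly the limitation being described above.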
  569. 1
  570. 1
  571. 1
  572. 1
  573. 1
  574. 1
  575. 1
  576. 1
  577. 1
  578. the problem is not with the learning. if it used the same input to generate rules for a symbolic ai like an expert system, then used the rules to write code, that would be fine. that is not how it works. with statistical ai, it creates plausible generated code, and as your specification gets specific enough, the training set approximates towards a single sample. this results in a 1 to 1 copy. if you think this is a spurious argument, multiple book authors are suing for exactly this case. the problem with violation is that it applies everywhere, and the ai has no audit trail to prove it is not guilty. this leaves both the user and the ai owner with potentially huge liabilities which they cannot defend, where they could be sued anywhere. the only significant defense for software is the obviousness defense, where near identical code implements basically the same function, but it is not collecting that data either. in the end, the ai copyright infringement issue will not generally be solved with software, but with books, audio, and video, and then the licensing issue will be an addition on top of all that. think of it like how microsoft got away with blatant monopoly abuse in the us, but then had to mitigate their behaviour expensively in the eu, because the eu did not implement rules as silly as the ones in the us. also, remember that the movie alien nearly could not be released due to the script being almost an exact copy of one of the stories in a. e. van vogt's voyage of the space beagle. it was only able to be released because the author liked the way the director made the film, and both sides were willing to talk in good faith.
    1
  579. 1
  580. 1
  581. 1
  582. 1
  583. 1
  584. 1
  585. 1
  586. 1
  587.  @mandy2tomtube  true, life started out with no language, no models of the environment, and really rubbish decision making. which is all irrelevant. black box ai has a number of fatal flaws in the basic design, which fundamentally cap the level to which it can go, and the roles in which it can be applied. this is due to the fact that it has no model of the problem space it is working on, and thus gets minimal feedback, and the fact that for man rated systems, you need to be able to ask not just if it got it wrong, but how it got it wrong, so you can determine how to fix it, and apply the patch. at the moment we cannot know how; we can only wrap the system in a conventional program, spot examples it has got wrong in the past, and return the right answer. unfortunately this does not stop it getting nearly identical cases wrong. you also have no method with which to fix it, which is especially important as the latest research has found the majority of the models to be full of security holes. the only way to resolve that is to stop using statistical ai as anything but a learning accelerator, and move to white box symbolic ai instead, which is what cyc does. we don't limit the options for flight to man powered flight, nor transport in general to how fast your horse can run, so how we got here does not matter much; it is how we get from just before here to just after here that matters, and statistical ai is just not up to the job. for anything else, you need models, which are either mathematical, or expressed in language.
    1
  588. 1
  589. 1
  590.  @lucashowell7653 the tests in tdd are unit tests and integration tests that assert that the code does what it did the last time the test was run. These are called regression tests, but unless they have high coverage and are run automatically with every commit, you have large areas of code where you don't know when something broke. If the code was written before the tests, especially if the author isn't good at testing, it is hard to retrofit regression tests, and to the extent you succeed they tend to be flakier and more fragile. This is why it is better to write them first. Assuming that the code was written by someone who understands how to write testable code, you could use A.I. to create tests automatically, but then you probably would not have tests where you could easily understand what a failure meant, due to poor naming. When you get as far as doing continuous integration the problem is even worse, as the point of the tests is to prove that the code still does what the programmer understood was needed, and to document this, but software cannot understand this yet. If you go on to continuous delivery, you have additional acceptance tests whose purpose is to prove that the programmer has the same understanding of what is needed as the customer, which requires an even higher level of understanding of the problem space, and software just does not understand either the customer or the programmer that well, either now or in the near future. This means that to do the job well, the tests need to be written by humans to be easily understood, and the time which makes this easiest is to write one test, followed by the code to pass the test. For acceptance tests the easiest time is as soon as the code is ready for the customer to test, adding tests where the current version does not match customer needs. Remember, customers don't even know what they need over 60% of the time.
    1
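as a minimal sketch of what such a regression test looks like in c, using a hypothetical clamp() function and plain assert() rather than any particular test framework; the point is that well named tests say what behaviour is expected:

```c
/* minimal regression tests pinning down current behaviour; names are
   illustrative, not taken from any real project */
#include <assert.h>
#include <stdio.h>

/* code under test: clamps a value into an inclusive range */
static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* a well named test tells you *why* it failed, not just that it did */
static void test_clamp_leaves_in_range_values_unchanged(void) {
    assert(clamp(5, 1, 9) == 5);
}

static void test_clamp_pins_out_of_range_values_to_the_nearest_bound(void) {
    assert(clamp(-3, 1, 9) == 1);
    assert(clamp(42, 1, 9) == 9);
}

int main(void) {
    test_clamp_leaves_in_range_values_unchanged();
    test_clamp_pins_out_of_range_values_to_the_nearest_bound();
    puts("all regression tests passed");
    return 0;
}
```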
  591. 1
  592. 1
  593. 1
  594. 1
  595. 1
  596. 1
  597. 1
  598. 1
  599. 1
  600. 1
  601. 1
  602. 1
  603. 1
  604.  @echorises  i agree version control is too important and useful to only be used for programming. i would much rather have a repository of useful txt files handled with version control, instead of having microsoft word trying to mishandle multiple copies of a binary word document which has been modified by multiple people. git is just the best version control client we have. unfortunately, higher education has little to do with generating new knowledge. it is mostly a certificate mill used to generate enough income to pay for teachers and administrators to have a job. even worse, in higher level education a certain amount of teaching is forced upon post doctoral students without them being given any teacher training, while professors are jumping through hoops trying to get external funding to pay for a very limited amount of research, with most of their time being used up by students and funding hunts. worse still, until you get tenure, and thus don't need to worry about having a job next year, your actual research will be constrained by the university to those non controversial bits of the subject that will help you get tenure. only after getting tenure are you free, within the funding constraints, to actually do any research you want in what little free time you are given. with the possible exception of japan, no country has yet produced a system where there is a part of the university which takes the pure research, funds getting it to the point where it is usable by industry, and then licenses the technology to industry to generate revenue to fund the part which takes the pure research and develops it. at that point, your tenured professors would actually be being paid to do pure research combined with developing existing research into stuff usable by industry, while the untenured ones could use the university development fund to find research which would be funded by the university, would help towards tenure, and would be passing knowledge to students. the post doctoral students would still split their time between work which the professors had got funded and teaching. i would say it should not be possible to get your degree without getting a teaching qualification as part of it, as so much of the time of professors and post docs is forced to be spent on teaching. as to producing students fit for industry, that has never been part of the goals of universities. with the exception of germany, no country has a system of general education which is not designed with the intent of filtering out those not fit for an academic career, and basically throwing away the rest. germany does actually have a second path, dealing with some vocational qualifications. however most education is designed to take those unsuitable for academia and turn them into nice quiet sheeple, which we just cannot afford any longer.
    1
  605. 1
  606. 1
  607. 1
  608. 1
  609. 1
  610. 1
  611. 1
  612. 1
  613.  @amyshoneye5455  i am not claiming that the immigration levels, and this country's failure to address them properly, are not causing issues; they are. most notably, the tendency of some minorities to self ghettoise by only working for, shopping at, renting property from, and getting entertainment from people of the same community. we actually ended up with riots due to that, and the cause was people moving here for money, and then trying to live exactly like they did in the old country. migration only works well if you integrate with the people in the new country, whether that is the windrush generation moving to britain, or brits moving to spain. that is a different problem from whether we are full. as regards that, there are a number of western countries with higher population densities than ours, so it is not down to density, but down to distribution and provisioning. provisioning is a major problem. for a lot of services, the level of provisioning is determined by the returned figures from the uk census, only taken every 10 years, and widely recognised as not being fit for this purpose. there are a number of things that could be done to replace this, like requiring you to submit the essential information in order to register at doctors, dentists, and schools, but no government has bothered to do it. the other major problem is housing. we have not been building enough houses, and definitely not the right sort. we have known this since the falklands war, and no government has bothered to change the planning rules enough to make a difference, which needs fixing. the other issue is where the migrants go, and how they are treated. skilled economic migrants have to go where they are needed, but hit the provisioning problems mentioned above. this group always ends up being of benefit to the country, and the local economy, but too many politicians treat them like illegal migrants so they can score short term political points, which helps no-one, not even their own party. low skilled economic migrants are largely getting excluded now that eu free movement does not apply, and we move to a more australian points based system, so this is a problem which will be self correcting over time, but mostly they also are of value, just not as much as the skilled economic migrants, and rule changes are helping here a lot. genuine asylum seekers eventually end up going into one of the two above categories, but the rules and the people treat them all like illegals, which seriously hurts their interactions with everyone else, and encourages the sort of ghetto forming mentioned above due to nobody else giving a damn to help them. lastly, you have the illegal asylum seekers and other illegals. the only policy we have seems to be "illegals bad, boo", and basically we need to come up with some policies which actually look at the problems surrounding asylum seekers and illegals, and address those problems. at the moment, we don't, and until we do, this will continue to cause massive problems. first we need to deal with the provisioning issue. i gave some possibilities earlier for how to address that. then we need to stop requiring asylum seekers to cluster at the point of arrival, and allow them to work. this will spread out their populations, and remove most of the artificially induced stress points for these communities, easing integration. it will also ease their integration into the legal migrant groups during the long time it takes to deal with their eventual status.
as for the true illegals, they need to live somewhere, and work somewhere, so we need to improve how the rules make landlords and employers interact with them, while being flexible enough to accept that nobody is perfect and sometimes they will genuinely get it wrong. the rules need to be about compliance, and the penalties need to be about persistent non compliance. this will not solve all the problems, but it will make a major difference, and the remaining problems will become clearer, and can also be addressed.
    1
  614. 1
  615. 1
  616. 1
  617. 1
  618. 1
  619. 1
  620. 1
  621. 1
  622. 1
  623. 1
  624. 1
  625. 1
  626. 1
  627. 1
  628. 1
  629. 1
  630. 1
  631. 1
  632. 1
  633. 1
  634. 1
  635. 1
  636. 1
  637.  @georganatoly6646 this is where the ci and cd distinction comes in useful. using c for illustrative purposes, you decide to write mylibrary. this gives you mylibrary.h, which contains your public api, and mylibrary.c, which contains the code providing an implementation of that public api. to the extent your tests break this separation, they become fragile and implementation dependent. this is usually very bad. by implementing your unit and integration tests against the public api in mylibrary.h, you gain a number of benefits, including: 1, you can throw away mylibrary.c and replace it with a total rewrite, and the tests still work. to the extent they do not, you have either broken that separation, or you have not written the code to pass the test that failed. 2, you provide an executable specification of what you think the code should be doing. if a test then breaks, your change to mylibrary.c changed the behaviour of the code, breaking the specification. this lets you be the first one to find out if you do something wrong. 3, your suite of tests gives lots of useful examples of how to use the public api. this makes it easier for your users to figure out how to use the code, and provides you with detailed examples for when you write documentation. finally, you use the code in myprogram.c, and you have only added the functions you need to the library (until someone else starts using it in theirprogram.c, where the two programs might each have extra functions the other does not need, which should be pushed down into the library when it becomes obvious that the code should be there instead of in the program). you then use ci to compile and test the program, at which point you know that the code behaves as you understand it should. this is then passed to cd, where further acceptance tests are run, which determine whether what you understood the behaviour to be matches what your customer understood it to be. if a mismatch is found, you add more acceptance tests until it is well enough documented, and go back and fix the code until it passes the acceptance tests as well. at this point not only do you know that the code does what you expect it to do, but also that this matches what the customer expected it to do, in a way that immediately complains if you get a regression which causes any of the tests to fail. in your example, you failed because you did not have a program being implemented to use the code, so it was only at the acceptance test point that it was determined that there were undocumented requirements.
    1
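a minimal sketch of that separation, reusing the mylibrary.h / mylibrary.c names from the comment; the add_checked function is hypothetical, and the three files are shown together for brevity:

```c
/* mylibrary.h -- the public api the tests are written against */
#ifndef MYLIBRARY_H
#define MYLIBRARY_H
#include <stdbool.h>
/* returns true and writes *sum on success, false on overflow */
bool add_checked(int a, int b, int *sum);
#endif

/* mylibrary.c -- one implementation of that api; it can be thrown away
   and rewritten, and the tests below still apply unchanged */
#include <limits.h>
bool add_checked(int a, int b, int *sum) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;
    *sum = a + b;
    return true;
}

/* test_mylibrary.c -- the tests use only the public api in mylibrary.h,
   never the internals, so they double as an executable specification
   and as usage examples for the documentation */
#include <assert.h>
int main(void) {
    int sum;
    assert(add_checked(2, 3, &sum) && sum == 5);
    assert(!add_checked(INT_MAX, 1, &sum));   /* overflow is reported */
    return 0;
}
```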
  638.  @deanschulze3129 there are reasons behind the answers to some of your questions, and I will try to address them here. First, the reason tdd followers take automated regression testing seriously is that a lot of the early advocates came from experience with large teams writing complex software which needed long development times. in that context, regression tests are not optional, as lots of people are making lots of changes to different parts of the code that they don't know very well. This led to the development of continuous integration, where code coverage for regression testing was essential. Tdd came along after the development of continuous integration, with the added awareness of technical debt, to add refactoring to the continuous integration cycle. You don't seem to understand just how recent the understanding of how to do regression testing is. Even the idea of what a unit test is was not present in the 2012 version of the book "the art of software testing", but it forms the base of the testing pyramid at the heart of regression testing. Also, automated regression testing cannot work unless you get management buy in to the idea that code needs tests, and that broken tests are the most important code to fix, which is even harder to get quickly, but all of the tech giants do exactly that. You cannot do continuous integration without it. Even worse, you cannot learn good test practices trying to fit tests to code written without testing in mind. The resulting tests tend to depend on implementation details and are often flaky and fragile, further pushing against the adoption of regression testing. As to statistics, the dora metrics produced from the annual state of devops report clearly indicate that no testing produces the worst results, test after initially provides better results than no testing, but only up to a certain point due to the previously mentioned problems with retrofitting regression tests to code not designed for it, and test first produces ever faster delivery of code of higher quality than either of the other two. The methodology behind the report is given in detail in the accelerate book, written by the authors of the state of devops report because they got fed up of having to explain it in detail to every new reader they encountered. Bear in mind, the number of programmers doubles every five years, so by definition most programmers have less than five years experience in any software development methodology, let alone advanced techniques. Those techniques are often not covered in training courses for new programmers, and sometimes are not even well covered in all degree level courses.
    1
  639. 1
  640. 1
  641.  @trignals  not really. the history of programming has been a migration away from hard to understand, untestable, clever code which almost nobody can understand, towards code which better models the problem space and the design goals needed to do the job well, and which is easier to maintain, as the costs have moved away from the hardware, then away from the initial construction, until most of the cost is now in the multi year maintenance phase. there are lots of people in lots of threads on lots of videos about the subject who seem to buy the hype that you can just throw statistical ai at legacy code, it will suddenly create massive amounts of easy to understand tests, and you can then throw those at another ai which can trivially create wonderful code to replace that big ball of mud with optimum code behind optimum tests, so that the whole system is basically ai generated tests and code, but built by systems which fundamentally can never reach the point of knowing the problem space and the design options, as they fundamentally do not work that way. as some of those problems are analogous to the halting problem, i am fundamentally sceptical of the hype, which goes on to suggest that if there is not enough data to create enough superficial correlations, then we can just go ahead and use ai to fake up some more data to improve the training of the other ai systems. as you can guess, a lot of these assumptions just do not make sense. a system which cannot model the software cannot use the model it does not have to make high level design choices to make the code testable. it cannot then go on to use the analysis of the code it does not do to figure out the difference between bad code and good code, or to figure out how to partition the problem. finally, it cannot use that understanding it does not have to decide how big a chunk to regenerate, or whether the new code is better than the old code. for green field projects, it is barely plausible that you might be able to figure out the right tests to give it to get it to generate something which does not totally stink, but i have my doubts. for legacy code, everything depends on understanding what is already there, and figuring out how to make it better, which is something these systems basically are not designed to be able to do.
    1
  642. 1
  643. 1
  644. 1
  645. 1
  646. there is nothing you can do to stop a bad driver from causing the kernel to crash. there are lots of things you can do to stop the boot loop, which is what might leave microsoft on the hook as well. first you have windows write a flag to storage as soon as it is able, to say it started booting. then you have it write over that flag which driver it is starting. then when it finishes booting, you write over the flag that it finished booting. then the kernel crashes and the system reboots. the windows system then knows that it crashed, because the flag does not say it completed. it also knows which driver broke it, and can disable it. it can also treat the boot start flag as a request, and have an internal table of the few drivers, like the filesystem, which can't be disabled. after the crash it can downgrade the boot start flag internally so that when it crashes again, the driver can be disabled. if the driver recovers, it can be re-enabled on the next boot. this gives the driver the chance to recover on reboot. they can automatically add drivers to the internal essential drivers list during certification by simply replacing the driver with a return statement and seeing if the system fails to boot. if it does, that driver cannot be blocked and is added to the list. they can then disable the broken driver on reboot, or on the second reboot if it is boot start, and put a huge warning on the screen that the broken driver was disabled, causing the customer to question why the broken driver was released. this could have been done by microsoft or any other os vendor after any of the previous high profile boot loop issues, but they did not. and the eu thing is just more microsoft misinformation.
    1
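a hypothetical sketch of that boot progress flag, with the persistent storage faked by a file; this is not how windows actually works, just the mechanism described above:

```c
/* hypothetical sketch of a boot progress flag: record that boot
   started and which driver is loading, mark completion, and on the
   next boot disable the suspect driver if the last boot never
   completed. storage is faked with a file for illustration. */
#include <stdio.h>
#include <string.h>

#define FLAG_FILE "boot_progress.txt"

/* record that boot started and which driver is currently loading */
static void mark_loading(const char *driver) {
    FILE *f = fopen(FLAG_FILE, "w");
    if (!f) return;
    fprintf(f, "LOADING %s\n", driver);
    fclose(f);
}

/* record that the whole boot completed successfully */
static void mark_boot_complete(void) {
    FILE *f = fopen(FLAG_FILE, "w");
    if (!f) return;
    fputs("COMPLETE\n", f);
    fclose(f);
}

/* drivers that can never be disabled, like the filesystem */
static int is_essential(const char *driver) {
    static const char *essential[] = { "filesystem.sys" };  /* illustrative */
    for (size_t i = 0; i < sizeof essential / sizeof essential[0]; i++)
        if (strcmp(driver, essential[i]) == 0)
            return 1;
    return 0;
}

/* if the previous boot never completed, the recorded driver is the
   suspect; disable it unless it is on the essential list */
static void check_previous_boot(void) {
    char state[16] = "", driver[128] = "";
    FILE *f = fopen(FLAG_FILE, "r");
    if (!f) return;                       /* first boot, nothing recorded */
    if (fscanf(f, "%15s %127s", state, driver) >= 1 &&
        strcmp(state, "LOADING") == 0 && !is_essential(driver))
        printf("previous boot crashed while loading %s: disabling it\n",
               driver);
    fclose(f);
}

int main(void) {
    check_previous_boot();                 /* decide about the last boot */
    mark_loading("thirdparty_sensor.sys"); /* illustrative driver name */
    /* ... driver loads, rest of boot runs ... */
    mark_boot_complete();
    return 0;
}
```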
  647. 1
  648.  @tma2001  crowdstrike have made a number of dubious statements, some of which are obvious lies, or the person saying them is clueless. take your statement about the update file. crowdstrike said it basically had nothing to do with the issue, but if you remove it, the problem goes away. both cannot be true. then there is the issue of it not containing all zeros, but lots of IT guys looked at the contents before deleting it and found it only had zeros. giving them the benefit of the doubt which their own statements say they don't deserve, even if the file contained a header, they obviously were not bothering to have the updater validate it prior to putting the file in place, nor having the kernel driver do so before blindly trying to read it. both are standard practice. similarly, their own statements make it clear that their only filter before shipping was running it against an obviously badly designed validator, and then skipping any other testing. for something running in kernel mode, every change should go through the entire test suite every time, and shipping it how they did should not even have been possible. even their public statement of what they intend to do to make it less likely in the future basically left people asking why they were shipping at all if they were not doing those things already. nothing about the information coming from crowdstrike makes them look good, from a single developer being able to live patch 8.5 million machines without testing, to a validator which is designed to pass everything unless it recognises specific things as broken, to a minimal testing environment for the full driver, to not doing canary releasing. none of it makes them look good, and then their idea of compensation for causing millions in damages was a generic 10 dollar uber eats gift voucher, which promptly got cancelled because it looked like fraud, because they did not talk to uber eats first. it just makes you ask how much longer until they do anything right.
    1
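a hedged sketch of the kind of validation being talked about, which both the updater and the kernel driver could run before trusting the file; the magic value and minimum size are made up, since the real channel file format is not public:

```c
/* sanity checks a content update file should pass before anything
   tries to parse it; the header and size constants are invented */
#include <stdint.h>
#include <string.h>

#define UPDATE_MAGIC   "CFG1"   /* hypothetical 4 byte header */
#define UPDATE_MIN_LEN 64       /* hypothetical minimum file size */

/* returns 1 only if the blob looks like a plausible update file */
int update_file_is_sane(const uint8_t *data, size_t len) {
    if (data == NULL || len < UPDATE_MIN_LEN)
        return 0;                   /* missing or truncated */
    if (memcmp(data, UPDATE_MAGIC, 4) != 0)
        return 0;                   /* bad header: an all-zero file
                                       like the one admins reported
                                       fails here */
    for (size_t i = 4; i < len; i++)
        if (data[i] != 0)
            return 1;               /* body contains real content */
    return 0;                       /* valid header but empty body */
}
```

a driver that refuses to parse anything failing checks like these turns a boot loop into a rejected update.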
  649. 1
  650. 1
  651. 1
  652. 1
  653. 1
  654. 1
  655. 1
  656. 1
  657. 1
  658. 1
  659. 1
  660. 1
  661. 1
  662. 1
  663. 1
  664. 1
  665. 1