Comments by "" (@grokitall) on the "Theo - t3․gg" channel.
-
so is yours.
in a mature process, the driver is signed, then put through a validator designed to fail it as part of the ci process leading to a release, which includes installing it on test machines.
the signature says that the file the validator checked is the one you are about to ship. they can't have done that, as the all-zero file would not have matched the digest recorded in the signature file, and the client side update program would not have installed it if it did not match.
then your validator should be designed to fail unless everything is found to be ok. then a new template like the one used could not even have been put where it could be used until the validator could pass it. this validator was designed to pass unless it spotted a known failure, which is not how you write them.
then their custom binary file format did not contain a signature block at the start, despite this being standard practice since windows 3.11, and before the first web browser.
then having passed the broken validator, they bypassed all other testing and shipped straight to everyone. obviously nothing can go wrong, which misses the entire point of testing, which exists not to prove you got it right, but to catch you when you get it wrong.
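the client side check described above fits in a few lines. this is only a sketch: the function name, manifest format, and install callback are made up for illustration, not any vendor's actual updater api.

```python
import hashlib

def verify_and_install(file_bytes, expected_sha256, install):
    """refuse to install an update whose hash does not match the
    digest recorded in the signed manifest at release time."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest != expected_sha256:
        # an all-zero file can never match the digest of the file
        # the validator actually checked before release
        raise ValueError("update rejected: hash does not match signed manifest")
    install(file_bytes)
```

with a check like this in the update client, a corrupted or zeroed file is rejected before it ever reaches the kernel driver.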
-
@sarthakdash3798 it might be, but the point it makes comes straight from the 1970s ibm research on how most cisc chips contain instructions which are often both slow and buggy, and how optimising the compiler to generate fewer of these instructions and thereby only use a smaller part of the instruction set actually produced better, faster and less buggy code.
cisc came about because we did not have either the knowledge to build, or the resources to run such advanced compilers. risc came about because cisc is a nightmare from so many different angles that people thought it a good idea to try a different approach, and it worked.
the gpu issue is different. both cisc and risc use a single stream of instructions working on a single stream of data, sisd for short. gpus still use a single stream of instructions, but every execution unit works on different data, or simd, which has advantages for some workloads.
then you have the third case, multiple instruction streams with multiple data streams (mimd), which was researched by danny hillis and others in the 1980s. this is basically multicore with advanced inter core communications, and cisc is really bad at it compared to risc just due to the extra size and power needs per core, which is why things like threadrippers need something that sounds like a jet engine on top to stop them overheating.
again, smp works well for some workloads, not so well for others, which is why cisc designers are making chips with a mixture of slow efficient cores and fast power hungry ones, an approach not needed with risc.
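the sisd/simd distinction above can be emulated in plain code. this is only a conceptual sketch: real simd happens in hardware lanes in one clock, not in a python loop, and the lane width of 4 is arbitrary.

```python
def sisd_add(xs, ys):
    """sisd: one instruction stream, one data stream.
    each addition is a separate instruction on one pair of values."""
    out = []
    for x, y in zip(xs, ys):
        out.append(x + y)
    return out

def simd_add(xs, ys, width=4):
    """simd (emulated): one 'instruction' applied to a whole group
    of lanes at once, each lane holding different data."""
    out = []
    for i in range(0, len(xs), width):
        # conceptually this inner operation is a single vector instruction
        out.extend(a + b for a, b in zip(xs[i:i + width], ys[i:i + width]))
    return out
```

both produce the same result, but the simd version issues one operation per group of lanes instead of one per element, which is where the advantage for data-parallel workloads comes from.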
-
the problem is not with the learning. if it was using the same input to generate rules for a symbolic ai like an expert system, then used the rules to write code that would be fine.
that is not how it works. with statistical ai, it creates plausible generated code, and as your specification gets specific enough, the set of matching training samples shrinks towards a single example. this results in a 1 to 1 copy.
if you think this is a spurious argument, multiple book authors are suing for exactly this case.
the problem with infringement is that it applies everywhere, and the ai has no audit trail to prove it is not guilty. this leaves both the user and the ai owner with potentially huge liabilities which they cannot defend against, and they could be sued anywhere.
the only significant defense for software is the obviousness defense, where near identical code implements basically the same function, but the ai is not collecting the data needed to mount that defense either.
in the end, the ai copyright infringement issue will not generally be settled over software, but over books, audio, and video, and then the licensing issue will be an addition on top of all that.
think of it like how microsoft got away with blatant monopoly abuse in the us, but then had to mitigate their behaviour expensively in the eu, because the eu did not implement rules as silly as the ones in the us.
also, remember that the movie alien nearly could not be released due to the script being almost an exact copy of one of the stories in a. e. van vogt's voyage of the space beagle. it was only able to be released because the author liked the way the director made the film, and both sides were willing to talk in good faith.
-
@echorises i agree version control is too important and useful to only be used for programming. i would much rather have a repository of useful txt files handled with version control than have microsoft word trying to mishandle multiple copies of a binary word document which has been modified by multiple people. git is just the best version control client we have.
unfortunately, higher education has little to do with generating new knowledge. it is mostly a certificate mill used to generate enough income to pay for teachers and administrators to have a job. even worse, in higher level education a certain amount of teaching is forced upon post doctoral students without them being given any teacher training, while professors are jumping through hoops trying to get external funding to pay for a very limited amount of research, with most of their time being used on students and funding hunts. worse still, until you get tenure, and thus don't need to worry about having a job next year, your actual research will be constrained by the university to those non controversial bits of the subject that will help you get tenure.
only after getting tenure are you free, within the funding constraints, to actually do any research you want in what little free time you are given. with the possible exception of japan, no country has yet produced a system where part of the university takes the pure research, funds getting it to the point where it is usable by industry, and then licenses the technology to industry to generate revenue to fund further development of pure research.
at that point, your tenured professors would actually be paid to do pure research combined with developing existing research into stuff usable by industry, while the untenured ones could use the university development fund to pursue research which would be funded by the university, would help towards tenure, and would be passing knowledge to students. the post doctoral students would still split their time between work which the professors had got funded and teaching.
i would say it should not be possible to get your degree without getting a teaching qualification as part of it, as so much of the time of professors and post docs is forced to be spent on teaching.
as to producing students fit for industry, that has never been part of the goals of universities. with the exception of germany, no country has a system of general education which is not designed with the intent of filtering out those not fit for an academic career, and basically throwing away the rest. germany does actually have a second path, dealing with some vocational qualifications.
however most education is designed to take those unsuitable for academia and turn them into nice quiet sheeple, which we just cannot afford any longer.
-
there is nothing you can do to stop a bad driver from causing the kernel to crash.
there are lots of things you can do to stop the boot loop, which is what might leave microsoft on the hook as well.
first you have windows write a flag to storage as soon as it is able to say it started booting.
then you have it write over that flag which driver it is starting.
then when it finishes booting, you write over the flag that it finished booting.
then the kernel crashes and the system reboots.
the windows system then knows that it crashed because the flag does not say it completed.
it also knows which driver broke it, and can disable it.
it can also treat the boot start flag as a request, and have an internal table of the few drivers like the filesystem which can't be disabled.
after the crash it can downgrade the boot start flag internally, so that if the driver crashes the system again it can be disabled, and if the driver recovers it can be re-enabled on the next boot. this gives the driver one chance to recover on reboot.
they can automatically add drivers to the internal essential drivers list during certification by simply replacing the driver with a return statement, and seeing if it fails to boot. if it does, it cannot be blocked and is added to the list.
they can then disable the driver on reboot, or on the second reboot if it is boot start, and put a huge warning on the screen that the broken driver was disabled, leading the customer to question why the broken driver was released.
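the flag scheme in the steps above can be sketched as ordinary code. this is purely illustrative: all the names (the flag file path, the driver names, the essential list) are made up, and the real thing would live in the boot loader and kernel, not in userspace python.

```python
import json
import os
import tempfile

# hypothetical location of the boot flag on persistent storage
FLAG = os.path.join(tempfile.gettempdir(), "bootflag.json")

# drivers like the filesystem that can never be disabled (built during
# certification by stubbing each driver out and seeing if boot fails)
ESSENTIAL = {"filesystem.sys"}

def _write(state, driver=None):
    with open(FLAG, "w") as f:
        json.dump({"state": state, "driver": driver}, f)

def start_boot():
    _write("booting")            # flag: we started booting

def starting_driver(name):
    _write("booting", name)      # flag: this driver is loading right now

def finish_boot():
    _write("completed")          # flag: boot finished cleanly

def check_previous_boot(disabled):
    """run first on every boot: if the previous flag never reached
    'completed', the last driver named is the suspect, and gets
    disabled unless it is on the essential list."""
    if not os.path.exists(FLAG):
        return None
    with open(FLAG) as f:
        flag = json.load(f)
    suspect = flag.get("driver")
    if flag.get("state") != "completed" and suspect and suspect not in ESSENTIAL:
        disabled.add(suspect)
        return suspect
    return None
```

because the flag is overwritten as each driver starts, a crash leaves behind exactly the name of the driver that was loading, which is all the next boot needs to break the loop.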
this could have been done by microsoft or any other os vendor after any of the previous high profile boot loop issues, but they did not.
and the eu thing is just more microsoft misinformation.
-
@tma2001 crowdstrike have made a number of dubious statements, some of which are either obvious lies or proof that the person saying them is clueless.
take your statement about the update file.
crowdstrike said it basically had nothing to do with the issue, but if you remove it, the problem goes away. both cannot be true.
then there is their claim that it did not contain all zeros, but lots of IT guys looked at the contents before deleting it and found it contained only zeros.
giving them the benefit of the doubt which their own statements say they don't deserve, even if the file contained a header, they obviously were not bothering to have the updater validate it prior to putting the file in place, nor having the kernel driver do so before blindly trying to read it. both are standard practice.
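the standard practice being skipped there, checking a binary file's header before trusting its contents, looks roughly like this. the magic value b"C345" and the minimum length are invented for illustration; the real format and its header are not public.

```python
MAGIC = b"C345"      # invented magic value standing in for a real format tag
MIN_SIZE = 8         # assumed minimum size of a well-formed file

def validate_update(data: bytes) -> bytes:
    """reject an update file before the kernel driver ever parses it:
    both the updater and the driver should run a check like this."""
    if not data or all(b == 0 for b in data):
        raise ValueError("update rejected: file is empty or all zeros")
    if len(data) < MIN_SIZE or data[:len(MAGIC)] != MAGIC:
        raise ValueError("update rejected: bad header magic")
    return data
```

either check alone would have stopped an all-zero file from being blindly read in kernel mode, which is the whole point of putting a signature block at the start of a binary format.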
similarly, their own statements make it clear that their only filter to shipping was running it against an obviously badly designed validator, and then skipping any other testing. for something running in kernel mode, every change should go through the entire test suite every time, and shipping it how they did should not even have been possible.
even their public statement of what they intend to do to make it less likely in the future basically left people saying why were you shipping at all if you were not doing those things already.
nothing about the information coming from crowdstrike makes them look good: a single developer being able to live patch 8.5 million machines without testing, a validator which is designed to pass everything unless it recognises specific things as broken, a minimal testing environment for the full driver, and no canary releasing. none of it makes them look good. then their idea of compensation for causing millions in damages was a generic 10 dollar uber eats gift voucher, which promptly got cancelled because it looked like fraud, because they did not talk to uber eats first. it just makes you ask how much longer until they do anything right.