Comments by @grokitall on the "Theo - t3․gg" channel.
-
@silberwolfSR71 i wouldn't disagree in general, but it tends to coexist with lots of other red flags. for example, there are a large number of "qualified" programmers out there who not only cannot use any vcs, but also could not do fizzbuzz without an ide, and have no idea whether the build system the ide generated can be used without the ide.
i would suggest that a good programmer should be able to write a small program with only a text editor, create the automated build file, store it in a version control system, and then have the interviewer check it out and build it on another machine. for something small and simple like fizzbuzz, this should only take about 15 minutes before the interview, and for most interviews you are waiting longer than that anyway.
think of the israeli passenger security vetting system as a comparison. anybody wanting to go airside at any airport goes through security vetting. each level is designed either to determine you are not a risk, or to move you up another level for stricter screening. by the time you are stopped at the airport for questioning you have already raised yourself to about level 15, and they are primarily asking questions to clear up points they could not dismiss without your active involvement. if you pass, you can get on the plane.
i once had to help fill a post, and we got over 100 applicants, with full training given. most were completely unsuitable, but the top few still needed filtering, and things like attitude and prior experience go into that filtering. as theo said, if it comes down to an industry veteran with experience versus a new college grad with no knowledge of version control, you can guess who is higher up the list.
-
the author of that video is an idiot.
he claims being adware makes it a scam, when adware was a common way to fund freely available software at the time.
he claims nagware prompting you to upgrade to the full version makes it a scam.
he uses his ignorance of the well known problems with the windows registry to claim that a registry cleaner is a scam by definition, when crud left behind in the registry causes a raft of problems.
he claims that a common shareware antivirus was a scam, just because it was adware.
he admits that dave was running a shareware marketplace, but then treats all of the shareware as if it was by dave, and uses his previously misidentified scams to call dave a scammer, despite that not being how a marketplace works.
finally, dave got a nuisance lawsuit, and like most companies found it cheaper to settle than to fight. the author claims that by settling dave admitted to everything in the claim, even though the settlement agreement which ended the lawsuit says no such thing, and then claims that when dave's autobiography calls the lawsuit mostly meritless, which agrees with the settlement agreement, he is trying to hide being a scammer.
i am not saying dave could not be a scammer, but literally none of the claims in the video pass basic fact checking.
-
@John_Smith__ i largely agree, but it is not just mobile where risc wins on power, it is the data center too. cisc chips draw a lot of power and produce a lot of heat, so moving to a larger number of slower cores that use less power overall and run far cooler saves the data center a fortune. as most of these workloads are platform neutral, and you can just recompile linux for arm on the server, there is no windows lock-in.
even worse, as more devices go risc, the software moves to apps and websites, further killing the need for windows. and for a lot of tasks the linux desktop is already good enough, so it is a triple threat.
i cannot wait to see companies panic as they realise that people outside the west cannot upgrade to 11 and 12 because the costs are too high, and scramble to find some other solution.
-
A lot of people are getting the law wrong here.
First, copyright is created automatically for anything which does not fall under some very narrow restrictions on what can be copyrighted.
Second, the copyright automatically goes to the author unless you have a naff clause in your employment contract giving it to your boss, or you are allowed to sign a contributor license agreement and do so.
Third, when you contribute to a project without a contributor license agreement, you retain your copyright, but license the project to distribute your code under the license applicable at the time you contributed. This cannot be changed without your consent.
Fourth, this has been tested in court. In the usa it was found that the author and copyright holder retained copyright, and granted permission to use the code under the applicable license. By trying to change that license, the project rejects it, is no longer complying with it, and is distributing the code without permission, which is piracy.
In a separate case, it was found that when a company tried to enforce its copyright while including code it did not own without an appropriate license grant, it had unclean hands, and therefore was not allowed to enforce its copyright until it had cleaned up its own act.
This leaves any company not complying with previous licenses with a serious problem unless all contributions are, and always have been, under a contributor license agreement transferring the copyright to the company. Otherwise it has to track down every contributor and get consent for the license change from every single one. If it cannot get that consent for any reason, it has to remove that contributor's code in order to distribute the software under the new license.
-
@Xehlwan the truth has now come out as to what happened. they created a file in a proprietary binary format, ran it through a validator designed to pass everything and fail only known-bad versions, and when it passed, immediately pushed it to everyone with no further testing.
what should have happened is this:
create a readable file in a text format which can be version controlled, test it, and commit it to version control.
generate the binary file from the text file, with a text header at the start (like everyone has been doing since windows 3.11), and immediately create a signature file to go with it.
have the validator compiled as a command line front end around the code used in the driver, designed to fail unless the file is known to be good. this checks the signature, then looks for the text header (like in a gif file), then uses that header to decide which tests to run on the file, only passing it if all of the tests pass.
run the validator as part of your continuous integration system. this tells you the signature matches, the file is good, and all other tests of the file and the driver passed, so it is ready for more testing.
build the deliverable, and sign it. this pair of files is what gets sent to the customer.
check the signature again, as part of continuous delivery, which deploys it to some test machines, which report back a successful full windows start. if it does not report back, it is not releasable.
then do a release to your own machines. if it screws up there, you find out before your customers see it, and you stop the release.
finally, after it passes all tests, release it.
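a fail-closed validator of the kind described above could look roughly like this. this is a minimal sketch, not anyone's actual code: the hmac key, the `MAGIC_HEADER` value, and the content test are all made up for illustration, and a real pipeline would use asymmetric signatures rather than a shared key.

```python
import hashlib
import hmac

# illustrative signing key; a real pipeline would use asymmetric signatures
SIGNING_KEY = b"build-server-secret"

# hypothetical text header at the start of the binary, like 'GIF89a' in a gif
MAGIC_HEADER = b"CHANNELFILEv1"

def sign(payload: bytes) -> bytes:
    # produce the signature file that ships alongside the binary
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def validate(payload: bytes, signature: bytes) -> bool:
    """fail closed: reject unless every check passes."""
    # 1. signature must match, or the file was corrupted or tampered with
    if not hmac.compare_digest(sign(payload), signature):
        return False
    # 2. the text header must be present and recognised
    if not payload.startswith(MAGIC_HEADER):
        return False
    # 3. the header decides which content tests to run; all must pass
    #    (here, one made-up test: reject empty bodies and runs of null bytes)
    body = payload[len(MAGIC_HEADER):]
    return len(body) > 0 and b"\x00" * 16 not in body

good = MAGIC_HEADER + b"rule: block known-bad.sys"
assert validate(good, sign(good))            # known-good file passes
assert not validate(good, bytes(32))         # wrong signature fails closed
assert not validate(b"junk", sign(b"junk"))  # missing header fails closed
```

the point of the design is that every branch defaults to rejection, which is the opposite of a validator built to pass everything except known-bad inputs.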
when installing on a new machine, ask if it can be hot fixed by local staff. use the answer to split your deployment into two groups.
when updating only let the fixable machines install it. the updater should again check the signature file. then it should phone home.
if any of the machines don't phone home, stop the release.
only when enough machines have phoned home does the unfixable list get added, as it is more important they stay up than that they get the update a few minutes earlier.
if any of this had happened, we would not have even heard about it.
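the staged rollout with a phone-home gate can be sketched like this. all names (`hotfixable`, the 95% threshold) are illustrative assumptions, not anything from the incident.

```python
# a rough sketch of a staged rollout gated on phone-home reports
def staged_rollout(machines, install, phoned_home, threshold=0.95):
    """install on hot-fixable machines first; gate the rest on phone-home."""
    fixable = [m for m in machines if m["hotfixable"]]
    critical = [m for m in machines if not m["hotfixable"]]

    # first wave: machines that local staff can recover if the update is bad
    for m in fixable:
        install(m)

    # stop the release unless enough machines report a successful start
    reported = sum(1 for m in fixable if phoned_home(m))
    if reported < threshold * len(fixable):
        return "release stopped"

    # second wave: machines where staying up matters more than updating early
    for m in critical:
        install(m)
    return "release complete"

machines = [{"id": i, "hotfixable": i % 2 == 0} for i in range(10)]
installed = []
print(staged_rollout(machines, installed.append, lambda m: True))
```

the key property is that the critical machines can only ever receive an update that has already started successfully on the recoverable fleet.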
-
@BenLewisE i am sure someone will make that argument, but the real tradeoff today is between more, faster cache, and more cores. due to the relatively huge die sizes for cisc, they have to optimise for cache, whereas risc designs also get the option of having more cores and less cache.
as this option is only available on risc, we need to wait and see which will be better in practice, but risc has a lot of other advantages, so in the long term, risc is going to win, the same way x86-64 beat x86.
-
@CodingAbroad tdd does indeed test first, for the simple reason that if you don't do regression testing that way, you never see your test fail, so you don't know that it can.
then when you write your code, you know that the test will pass, so the test has been validated for both cases. you also know that the code you wrote was testable, which is way more important.
then when you refactor, you test that it is actually a regression test. if it breaks, you are testing the implementation, not the stable public api, so it is not a regression test.
as to code coverage, you can get near 100% when testing api's as long as you write testable code.
the purpose of the regression tests is so that you spot you broke the api before its users do, which is why it matters.
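the red/green cycle described above can be sketched in a few lines. this is a minimal illustration with made-up names, using fizzbuzz as the example api.

```python
# step 1 (red): write the regression test first. running it before the code
# exists fails with a NameError, which proves the test is able to fail.
def test_fizzbuzz():
    assert fizzbuzz(3) == "fizz"
    assert fizzbuzz(5) == "buzz"
    assert fizzbuzz(15) == "fizzbuzz"
    assert fizzbuzz(7) == "7"

# step 2 (green): write just enough code against the public api to pass.
def fizzbuzz(n: int) -> str:
    out = ("fizz" if n % 3 == 0 else "") + ("buzz" if n % 5 == 0 else "")
    return out or str(n)

test_fizzbuzz()  # now passes: the test has been seen both to fail and to pass

# step 3 (refactor): change the implementation freely. because the test only
# exercises the stable public api, it keeps passing, i.e. it is a true
# regression test rather than a test of implementation details.
```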
-
@h7hj59fh3f upon doing a little more research, the us and uk maintain your position, but many other countries don't. so putting the grant of public domain statement makes it public domain in all countries which recognise it, and including the cc0 license grants the closest equivalent in those countries which don't.
intellectual property rules are country specific, and follow a pattern in how they are introduced.
first, they don't exist, as the country has no domestic industries which need them. this allows domestic industries to form by copying the ip of other countries. the most obvious example is book publishing, where foreign books are copied on an industrial scale to develop a local consumer base for the ip.
second, local ip starts being produced, so rules get introduced to stop other local (and later foreign) companies from continuing to do what had been standard practice, as the local market needs enough revenue for local creators to be able to keep creating.
third, they want to sell and license to foreign companies, so they have to sign up to international treaties providing either mutual recognition of each other's rules, or a standard set of working practices. the first is way better, for too many reasons to go into right now.
fourth, at some point in this process, two things happen as the country realises that ip protection needs to be time limited. the idea of public domain ip is accepted, with recognition of which terms cause protection to expire, giving the public massive benefits and limiting company abuses of old ip content. and it is realised that different industries and different forms of ip have different timescales for return on investment, and need different expiry rules, after which the ip returns to the public domain. this protects companies from other companies.
trade dress (does it look like a mcdonalds?) needs instant protection, for the lifetime of the company, to prevent anyone else from pretending to be them.
drug manufacturing can take 20 years and a lot of money to get to market, with a lot of products failing before it gets here, so it needs relatively long timescales for exclusivity to recoup those expenses.
books on the other hand make most of their income in the first few years, and almost never get a second round of popularity after their initial release, so much smaller timescales should be involved.
and of course, sometimes creators create something for the public good, and want to put it straight into the public domain.
due to the american political system being particularly vulnerable to lobbying, the usa is still not very far along with the public protection side of this, while being very aggressive on the company protection side. however, these two sides need to balance for the good of everyone. some other countries are further along or better balanced, due to local circumstances.
this difference in speed of evolution of the rules is just the most obvious reason why mutual recognition is better than forcing standard rules, but there are many others.