Comments by "Mikko Rantalainen" (@MikkoRantalainen) on "ThePrimeTime"
channel.
-
The C/C++ short, int and long are always integers that have a defined minimum size; the actual size is whatever the hardware can support with maximum performance. If some hardware can process 64-bit integers faster than 16-bit or 32-bit integers, then short, int and long could all be 64-bit integers. That was the theory, anyway. In practice, due to historical reasons, compilers must use different sizes, as explained in the article.
The reason we have so many function calling conventions is also performance. For example, the x86-64 SysV calling convention is different from the x86-64 MSVC calling convention; the Microsoft convention has somewhat worse performance because it cannot pass as much data in registers.
And because we need to keep backwards compatibility as an option, practically every compiler must support every calling convention ever made, no matter how stupid the convention was from a technical viewpoint.
It would be trivial to declare that you only use packed structures with little-endian signed 64-bit numbers, but that wouldn't result in the highest possible performance.
And C/C++ is always about the highest possible performance. Always.
That said, it seems obvious in hindsight that the only sensible way is to use types such as i32, i64 and u128 and call it a day. Even if you have intmax_t or time_t, somebody somewhere will depend on it being 64 bit, and you can never change the type to be anything other than 64 bit. It makes much more sense to just define that the argument or return value is i64, and create another API if that ever turns out to be a bad decision.
The cases where you can randomly recompile a big C/C++ program and it just works even if short, int, long, time_t and intmax_t change sizes are so rare that it's not worth making everything a lot more complex. The gurus who were able to make it all work with objects that change size depending on the underlying hardware will be able to make it work with a single type definition file that encodes the optimal size for every type they actually want to use.
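To illustrate the fixed-width argument, here is a minimal C sketch (not from the original comment) contrasting the platform-dependent short/int/long with the fixed-width types from <stdint.h>; the sizes mentioned in the comments assume typical LP64 (Linux) and LLP64 (Windows) ABIs.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Sizes of the "natural" types depend on the platform and ABI:
     * long is 8 bytes on x86-64 Linux (LP64) but 4 bytes on
     * 64-bit Windows (LLP64). */
    printf("short: %zu, int: %zu, long: %zu bytes\n",
           sizeof(short), sizeof(int), sizeof(long));

    /* The <stdint.h> types have the same size on every platform that
     * provides them, which is what the i32/i64-style naming argues for. */
    printf("int32_t: %zu, int64_t: %zu bytes\n",
           sizeof(int32_t), sizeof(int64_t));

    /* If an API promises a 64-bit value, saying so directly avoids
     * anyone depending on time_t or intmax_t "happening" to be 64 bit. */
    int64_t timestamp = 1700000000;
    printf("timestamp: %lld\n", (long long)timestamp);
    return 0;
}
```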
-
@RobBCactive Sure, the only way in the long run is to have an accurate API definition in machine-readable form. Currently, if you use the C API, you "just have to know" that it's your responsibility to do X and Y if you ever call function Z. Unless we have a machine-readable definition (be it in Rust or any other markup), there's no way to automate verification that the code is written correctly.
It seems pretty clear that many kernel developers have taken the stance that they will not accept machine-readable definitions in Rust syntax. If so, they need to be willing to provide the required definitions in at least some syntax. As things currently stand, there are no definitions for lots of stuff, and other developers are left guessing whether a given part of the existing implementation is "the specification" or just a bug.
If C developers actually want the C implementation to literally be the specification, that is, with the bugs being part of the current specification too, they just need to say that aloud.
Then we can discuss whether that idea is worth keeping in the long run.
Note that if we had a machine-readable specification in whatever syntax, the C API and the Rust API could be automatically generated from that specification. If that couldn't be done, then the specification is not accurate enough. (And note that such a specification would only define the API, not the implementation. But such an API definition would need to define the responsibilities about doing X or Y after calling Z, which C syntax cannot express.)
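As an illustration of the point about responsibilities that C syntax cannot express, here is a hypothetical C header (widget_get/widget_put are made-up names, not a real kernel API) where the contract lives only in a prose comment that no tool can check:

```c
/* Hypothetical example API, not real kernel code. */

struct widget;

/* Returns a new widget, or NULL on allocation failure.
 *
 * Caller responsibilities that the C declaration cannot express:
 *  - the caller MUST eventually call widget_put() exactly once,
 *  - the caller MUST NOT touch the widget after widget_put(),
 *  - widget_get() MUST NOT be called while holding the widget lock.
 *
 * Nothing verifies any of this; you "just have to know" it.
 */
struct widget *widget_get(void);

void widget_put(struct widget *w);
```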
-
10:20 I think this viewpoint is simply false. Since good IDEs can show last commit that modified each line, you can nowadays have line accurate description of why each line exists in the source code without having human written comments in the source code!
However, if you fail to write proper commit messages (documenting why the code is needed), you can never achieve this level. And if you write proper atomic commits with proper commit messages, you always rebase and never merge your own code and everything will be fine. And if you're pulling remote branch and it can be merged conflict free you can do real merge if you really want. If there's a conflict, do not even try to make a merge but tell the submitter to rebase, test again and send another pull request.
The single biggest issue remaining after Git is handling huge binary blobs. If you want to have all the offline capabilities that Git has, you cannot do anything better but just copy all the binary blobs to every repository and if you have lots of binary blobs, you'll soon run out of storage. If you opt to having binary blobs on server only, you cannot access those in offline situations or when the network is too slow to be practical for given binary blob.
12:20 This wouldn't be a source control system, it's just a fancy backup system. The problem discussed here is total skill issue only. I personally use Git with feature branches for even single developer hobby projects and I spend maybe 10–20 seconds extra per branch total.
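For reference, the feature-branch plus rebase workflow described above, sketched as plain Git commands (branch and remote names are just examples); this is roughly all the extra bookkeeping a branch costs:

```
# start a feature branch for the change
git switch -c feature/fix-parser main

# work, then record atomic commits whose messages explain WHY
git add -p
git commit

# before integrating, replay the branch on top of current main
# instead of creating a merge commit
git fetch origin
git rebase origin/main

# re-test, then fast-forward main to the rebased branch
git switch main
git merge --ff-only feature/fix-parser
```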