Comments by "Mikko Rantalainen" (@MikkoRantalainen) on the "ThePrimeTime" channel.

  1. 225
  2. 49
  3. 39
  4. 34
  5. 33
  6. 24
  7. 22
  8. 18
  9. 17
  10. 16
  11. 11
  12. 11
  13. 10
  14. 10
  15. 9
  16. 8
  17. 7
  18. 7
  19. 7
  20. 6
  21. 6
  22. 6
  23. 6
  24. 6
  25. 5
  26. 5
  27. 5
  28. 5
  29. 4
  30. 43:10 The days or even weeks I spend in the state of "thinking more than writing code" are when there is no good solution given the existing infrastructure and the task at hand, only multiple options to proceed, each with obvious downsides. In practice, that "thinking" means going through existing programs (searching for something similar and weighing the pros and cons each solution had), implementations (typically reading the code of open source libraries to understand how they handle the problematic edge cases), research papers, writing some test code, etc. That's the "research" in R&D. I have trouble imagining a coder who just sits there meditating and then comes up with the good solution they finally write down. Some call this maintaining a legacy system, but I think it also covers making any complex change to any big system, no matter how old or new the code is. Legacy systems are just typically bigger than newly created (toy?) projects. And you get old, hairy legacy systems as a result if you repeatedly skip the thinking and research part and always go for the simplest solution you can think of without considering the downsides. Basically: how much technical debt is your next change adding to the whole system? If you ignore the debt, making changes is faster, but it will bite your ass later for sure. On the other hand, you don't want to waste time trying to create a perfect solution either, because perfect is the enemy of good and perfect solutions require insane amounts of time.
    4
  31. 4
  32. 4
  33. 3
  34. 3
  35. 3
  36. 3
  37. 3
  38. 3
  39. 3
  40. 3
  41. 3
  42. 3
  43. 3
  44. 2
  45. 2
  46. 2
  47. 2
  48. 2
  49. 2
  50. 2
  51. 2
  52. 2
  53. 2
  54. 2
  55. 2
  56. 2
  57. 2
  58. 2
  59. 2
  60. 2
  61. 2
  62. 2
  63. 2
  64. 2
  65. 2
  66. 2
  67. 2
  68. 2
  69. 2
  70. 2
  71. 2
  72. 2
  73. 2
  74. 2
  75. 2
  76. 2
  77. 1
  78. 1
  79. 1
  80. 1
  81. 1
  82. 1
  83. 1
  84. 1
  85. 1
  86. 1
  87. 1
  88. 1
  89. 1
  90. 1
  91. 1
  92. 1
  93. 1
  94. 1
  95. 1
  96. 1
  97. 1
  98. 1
  99. 1
  100. 1
  101. 1
  102. 1
  103. 1
  104. 1
  105. 1
  106. 1
  107. 1
  108. 1
  109. 1
  110. 1
  111. 1
  112. 1
  113. 1
  114. 1
  115. 1
  116. 1
  117. 1
  118. 1
  119. 1
  120. 1
  121. 1
  122. 1
  123. 1
  124. 1
  125. 1
  126. 1
  127. 1
  128. The C/C++ short, int and long are integers with only a defined minimum size; the actual size is whatever the hardware can support with maximum performance. If some hardware could process 64-bit integers faster than 16-bit or 32-bit integers, short, int and long could all be 64-bit. That was the theory anyway; in practice, for historical reasons, compilers must use the different sizes explained in the article. The reason we have so many function calling conventions is also performance. For example, the x86-64 SysV calling convention differs from the x86-64 MSVC calling convention, and the Microsoft one has slightly worse performance because it cannot pass as much data in registers. And because backwards compatibility has to remain an option, practically every compiler must support every calling convention ever made, no matter how stupid the convention was from a technical viewpoint. It would be trivial to declare that you only use packed structures with little-endian signed 64-bit numbers, but that wouldn't give the highest possible performance, and C/C++ is always about the highest possible performance. Always. That said, it seems obvious in hindsight that the only sensible way is to use types such as i32, i64 and u128 and call it a day (see the sketch after this entry). Even if you have intmax_t or time_t, somebody somewhere will depend on it being 64-bit and you can never change the type to anything other than 64-bit. It makes much more sense to just define that the argument or return value is i64 and create another API if that ever turns out to be a bad decision. The cases where you can re-compile a big C/C++ program and it just works even though short, int, long, time_t and intmax_t change sizes are so rare that they're not worth making everything a lot more complex. The gurus who could make it all work with objects that change size depending on the underlying hardware will also be able to make it work with a single type definition file that encodes the optimal size for every type they actually want to use.
    1
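
A minimal sketch of the "just use i32, i64, u128" point above, written in Rust simply because those are literally its type names. The widths below are fixed by the language, not by the target hardware; the only deliberately platform-sized integer is usize. The file_size_bytes function is a made-up example of an API that commits to a concrete width, not anything from the article.

```rust
use std::mem::size_of;

// In Rust the integer width is part of the type name and never varies with
// the target: i32 is always 4 bytes, i64 always 8 and u128 always 16.
// These compile-time assertions document that fact; they can never fail.
const _: () = assert!(size_of::<i32>() == 4);
const _: () = assert!(size_of::<i64>() == 8);
const _: () = assert!(size_of::<u128>() == 16);

// Hypothetical API that commits to a concrete width up front. If i64 ever
// turns out to be the wrong choice, the fix is a new function (a new API),
// not a silent change in what the old type means on some platforms.
fn file_size_bytes(blocks: i64, block_size: i64) -> i64 {
    blocks * block_size
}

fn main() {
    // usize is the one deliberately platform-sized integer (pointer width),
    // so it is the only size printed here that can differ between targets.
    println!("usize on this target: {} bytes", size_of::<usize>());
    println!("8 blocks of 4096 bytes: {} bytes", file_size_bytes(8, 4096));
}
```

The trade-off is the one described in the comment: you give up the (rare) ability to retarget the same source to hardware with different "natural" integer sizes, and in exchange every reader knows exactly how wide every value is.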
  129. 1
  130. 1
  131. 1
  132. 1
  133. 1
  134. 1
  135. 1
  136. 1
  137. 1
  138. 1
  139. 1
  140. 1
  141. 1
  142. 1
  143. 1
  144. 1
  145. 1
  146. 1
  147. 1
  148. 1
  149. 1
  150. Coding is already ultimately just prompt engineering. The current "AI" systems we have that actually create the software are typically called compilers, and the prompt is called source code. And because existing systems are so primitive, prompting them to output usable software is really hard, hence the need for professional software developers. Future AI-based compilers may be able to understand instructions at or near the level of average human communication. And if such a future AI can generate the resulting software rapidly and cheaply, it doesn't even matter if normal people fail to communicate their needs at first, because rewriting pieces of software will be so cheap that misunderstandings, and software that gets thrown away right after it has been made, won't matter either. The reason great human software developers work so hard to truly understand the needs of the end user before writing the code is that they want to avoid wasting work. If work is next to free, normal people can just iterate on the full software and generate the spec by telling the AI to replace the incorrectly guessed parts until the result is deemed good enough for them. It all boils down to communication. The party with the money is trying to communicate what they want, and the current way of creating software is definitely a compromise because software development is so expensive right now. And most software ever made is broken in every imaginable way and just barely works well enough to be usable. Before AI, I was thinking that there would always be programming work available because we can never truly fix even all the existing software.
    1
  151. 1
  152. 1
  153. 1
  154. 1
  155. 1
  156. 1
  157. 1
  158. 1
  159. 1
  160. 1
  161. 1
  162. I do write answers on StackOverflow and I expect new users to RTFM. However, I rarely downvote bad questions unless they're spam or something else obviously malicious; I simply ignore them. If I downvote a question, I always write a situation-specific explanation to help the human being who asked the bad question understand the problem. Linking to a generic "wrong type of question" page is just bad in my books unless the content is obviously malicious, and even in that case the correct action is to flag the question for admins, not to downvote it. That said, if you cannot be bothered to read the documentation provided to you before you ask your first question, do not expect other people to bother interacting with you either. The instructions clearly explain that you should tell what you've already done and how it has failed. The intent of the site is not to do stuff for you – you have to demonstrate your work first and then ask others to help with the mistake you fail to see. If you don't want to bother reading or working, do not expect free support. I participate in StackOverflow because I believe in teaching things to other people and it makes me a better communicator in general. I feel that I can explain things better to my colleagues once I have learned to teach other people on StackOverflow. My reputation is around 15k, which should give some idea of how much I've used it. And I do think StackOverflow has too strict rules about what kinds of questions are acceptable. I've had real questions downvoted as too opinion-based. Most programming tasks are some kind of compromise, and it would be valuable to explain what each developer would choose and why, even if the why is an opinion rather than a perfect statistical fact.
    1
  163. 1
  164. 1
  165. 1
  166. 1
  167. 1
  168. 1
  169. 1
  170. 1
  171. 1
  172. 1
  173. 1
  174. 1
  175. 1
  176. 1
  177. 1
  178. 1
  179. 1
  180. 1
  181. 1
  182. 1
  183. 1
  184. 1
  185. 1
  186. 1
  187. 1
  188. 1
  189. 1
  190. 1
  191. 1
  192. 1
  193. 1
  194. 1
  195. 1
  196. 1
  197. 1
  198. 1
  199. 1
  200. @RobBCactive Sure, the only way in the long run is to have an accurate API definition in machine-readable form. Currently, if you use the C API, you "just have to know" that it's your responsibility to do X and Y if you ever call function Z. Unless we have a machine-readable definition (be it in Rust or any other markup), there's no way to automate verifying that the code is written correctly. It seems pretty clear that many kernel developers have taken the stance that they will not accept machine-readable definitions in Rust syntax. If so, they need to be willing to provide the required definitions in at least some syntax. As things currently stand, there are no definitions for lots of stuff, and other developers are left guessing whether a given part of the existing implementation is "the specification" or just a bug. If the C developers actually want the C implementation to literally be the specification, that is, for the bugs to be part of the current specification too, they just need to say that out loud. Then we can discuss whether that idea is worth keeping in the long run. Note that if we had a machine-readable specification in whatever syntax, the C API and the Rust API could be automatically generated from it; if they couldn't, the specification is not accurate enough. (And note that such a specification would only define the API, not the implementation. But the API definition would need to define the responsibilities about doing X or Y after calling Z, which C syntax cannot express.)
    1
  201. @RobBCactive Do you agree that if we have a function like iget_locked() and after calling that function you MUST do something with the data or the kernel will enter a corrupted state, this behavior is part of the kernel API? Now, do you agree that C cannot represent this requirement? If you agree with both previous points, then you must also agree that there cannot, even in theory, be a compiler that catches a programming error where the programmer fails to follow this API (assuming we only have C code as machine-readable input). The Rust people are trying to say that the quality of the kernel would improve if we had a compiler that can catch errors of this kind, and the Rust compiler can already do this if you encode the API information in Rust syntax, using types to represent the API behavior (see the sketch after this entry). It seems that Linus agrees, and that's why Rust was accepted into the kernel despite the fact that Rust syntax has a much steeper learning curve than C. The C developers who want to downplay Rust are basically arguing either that (1) there's no need to catch programming errors automatically, or that (2) having to write down the exact requirements would be too expensive and isn't worth the effort. Which camp do you belong to? I'm personally definitely against (1), because I see kernel-level security vulnerabilities and driver crashes way too often. About (2) I'm not that sure; I think it's worth trying in order to improve the quality of the kernel. And the reason to try it on filesystem interfaces first is that filesystems are so critical for data safety. If your GPU crashes every now and then, that's non-optimal. If your filesystem randomly corrupts data because threads access shared data incorrectly or some piece of code does a double free, that's a really bad day and hopefully you had backups.
    1
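
To make the "compiler that catches this kind of error" claim concrete, here is a minimal Rust sketch of how a rule like "after calling Z you MUST do X or Y" can be pushed into the type system. This is not the real kernel API: Inode, LockedInode, unlock_new, discard and the Iget enum are hypothetical stand-ins for iget_locked() and the follow-up calls it requires.

```rust
// A hypothetical, heavily simplified model of an iget_locked()-style
// contract; NOT the real Linux kernel API. It only shows how "after
// calling Z you MUST do X or Y" can be encoded so the compiler, not a
// code review, rejects the wrong paths.

/// An inode that is fully initialised and safe to use.
pub struct Inode {
    pub ino: u64,
}

/// A freshly created inode that is still locked. The caller must either
/// finish initialising it (`unlock_new`) or give up on it (`discard`);
/// there is no other way to turn it into a usable `Inode`.
#[must_use = "a locked inode must be passed to unlock_new() or discard()"]
pub struct LockedInode {
    ino: u64,
}

impl LockedInode {
    /// Finish initialisation. Takes `self` by value, so the locked state
    /// is consumed: unlocking twice or touching the value afterwards is a
    /// move error at compile time.
    pub fn unlock_new(self) -> Inode {
        Inode { ino: self.ino }
    }

    /// Give up on the new inode (stand-in for an iget_failed()-style call).
    pub fn discard(self) {
        // tear-down would happen here
    }
}

/// Stand-in for iget_locked(): returns either an inode that was already
/// usable or a new one the caller still has to deal with.
pub enum Iget {
    Cached(Inode),
    New(LockedInode),
}

pub fn iget_locked(ino: u64) -> Iget {
    if ino % 2 == 0 {
        Iget::Cached(Inode { ino })
    } else {
        Iget::New(LockedInode { ino })
    }
}

fn main() {
    match iget_locked(3) {
        Iget::Cached(inode) => println!("cached inode {}", inode.ino),
        Iget::New(locked) => {
            // The only ways forward are the two methods above; there is no
            // way to get a usable `Inode` out of `locked` without calling
            // one of them, and after the call `locked` cannot be reused.
            let inode = locked.unlock_new();
            println!("new inode {}", inode.ino);
        }
    }

    // Backing out is also an explicit, compiler-visible step.
    if let Iget::New(locked) = iget_locked(5) {
        locked.discard();
    }
}
```

The compiler cannot force you to eventually act on a value you deliberately keep around, but it closes off the wrong paths: there is no way to obtain a usable inode except through one of the allowed calls, and the locked value cannot be unlocked twice or used after it has been consumed.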
  202. 1
  203. 1
  204. 1
  205. 1
  206. 1
  207. 1
  208. 1
  209. 1
  210. 1
  211. 1
  212. 1
  213. 1
  214. 1
  215. 1
  216. 1
  217. 1
  218. 1
  219. 10:20 I think this viewpoint is simply false. Since good IDEs can show the last commit that modified each line, you can nowadays have a line-accurate description of why each line exists in the source code without having human-written comments in the source code! However, if you fail to write proper commit messages (documenting why the code is needed), you can never reach this level. If you write proper atomic commits with proper commit messages, always rebase and never merge your own code, everything will be fine. And if you're pulling a remote branch and it can be merged conflict-free, you can do a real merge if you really want. If there's a conflict, do not even try to merge; tell the submitter to rebase, test again and send another pull request. The single biggest issue remaining after Git is handling huge binary blobs. If you want all the offline capabilities Git has, you cannot do anything better than copy all the binary blobs into every repository, and if you have lots of binary blobs, you'll soon run out of storage. If you opt to keep the binary blobs on the server only, you cannot access them offline or when the network is too slow to be practical for a given blob. 12:20 This wouldn't be a source control system, it's just a fancy backup system. The problem discussed here is purely a skill issue. I personally use Git with feature branches even for single-developer hobby projects and I spend maybe 10–20 seconds extra per branch in total.
    1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230. 1
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. 1
  240. 1
  241. 1
  242. 1
  243. 1
  244. 1
  245. 1
  246. 1
  247. 1
  248. 1
  249. 1
  250. I think gamification is good for figuring out which accounts are real users and which are bots or spammers. The "reputation" (which I consider karma, really) on StackOverflow gets you more admin tools once you demonstrate enough sensible behavior. For example, with my current reputation I could go around the site modifying the descriptions of all the tags and mess things up seriously badly, so I understand why those actions are not available to any random spammer. But I think it's a really good idea to give more admin-like powers to users of the site as long as they demonstrate behavior that aligns with the site's objectives. Other than the ability to do things that are not allowed for newly created users or users with bad karma (e.g. spammers or bots), I don't really care how much reputation I have. If some future employer were ever interested in that kind of statistic, sure, it would be nice to be able to show that I have this much reputation on StackOverflow. I still feel the reputation comes from sensible behavior, not from gaming the system. Gamification does result in some users primarily answering easy questions that get lots of traffic via Google. I don't like that, but I understand it's good for the site, because Google gives more value to StackOverflow pages when lots of users looking for simple answers click the StackOverflow result. So even though I don't personally think those answers are worth a lot, I understand why even that kind of gamification benefits the whole site.
    1