Comments by @diadetediotedio6918 on the "ThePrimeTime" channel.

  1. 331
  2. 247
  3. 67
  4. 56
  5. 54
  6. 47
  7. 34
  8. 33
  9. 33
  10. 33
  11. 32
  12. 25
  13. 21
  14. 18
  15. 18
  16. 17
  17. 17
  18. 16
  19. 15
  20. 15
  21. 13
  22. 13
  23. 12
  24. 12
  25. My sincere view, as someone who has been programming since I was 12, is that hard work pays off, but only if it is aimed at something you actually want. I don't mean something you necessarily like, but something bigger than yourself that you work hard to achieve; the other alternative is to work at something you consider your calling. Every day I code 9-13 hours, sometimes more (it used to be more until my employers told me to stop, for some reason), and when I'm not coding I'm reading about coding. I don't do that because I'm necessarily chasing perfection (though of course I'm always trying to be a better person than I was the day before), but because it has become almost natural for me: programming interests me deeply, it is part of my life. I don't treat work as something external to me, as if there were some mystical barrier between my personal and professional life; programming is part of me the same way craftsmanship is part of the craftsman, or carpentry part of the carpenter. That doesn't mean never doing anything different, or focusing only on that, but that doing things related to your craft is not a sacrifice but, very often, a pleasure, and it is definitely a pleasure for me. I can clearly feel the effect all these years of work have had on me; it's obvious to me that I'm better than before, not just as a programmer but in many ways as a human being. So I definitely think that, beyond hard work, it is mainly the feeling of being integrated with what you work on that is the essence of a complete life. I'm not saying you should, or need to, spend 15 hours a day on it, but if you want to put more effort into what you do and improve yourself through hard work, that comes with downsides, yet it will also most likely yield the benefits you expect. At such times, think about the nature of the craft, and remember that man is "a medium-sized creature prone to great ambition."
    11
  26. 11
  27. 11
  28. 10
  29. 10
  30. 10
  31. 10
  32. 10
  33. 10
  34. 9
  35. 8
  36. 8
  37. 8
  38. 7
  39. 7
  40. 7
  41. 7
  42. 7
  43. 7
  44. 7
  45. 7
  46. 7
  47. 6
  48. 6
  49. 6
  50. 6
  51. 6
  52. 6
  53. 6
  54. 6
  55. 6
  56. 6
  57. 6
  58. 6
  59. 5
  60. 5
  61. 5
  62. 5
  63. 5
  64. 5
  65. 5
  66. 5
  67. Also, I think the "one language for a specific purpose" idea is both a good take and, on some level, bullshit (relating to the title of your video as well). It is good because specialization tends to produce tools better fitted to their specific purposes, it helps with organization, and it allows more conciseness in what you are trying to express with code. And it is also bullshit because learning more languages does not imply a loss; it expands your command of all the languages you've already learned by generalizing the knowledge. Having competition is also extremely good, and one of the most common reasons I hear from people is that they "don't want to have to learn so much" (which is laziness; you also don't need to learn everything, because competition exists and thus you can work with whatever you want most of the time). Also, the more specialized you are, the more you lose context about the world of other things, and the more you need that 'recurrence' and fragmentation inside one workload. You can see this with people using JSON but still inventing more and more protocols around it, or with alternative solutions to protobuf that try to cover logic or some other bs, or even with Lua, where there are dozens of versions of it trying to generalize it for more cases or for performance-oriented tasks (like LuaJIT, or Luau [the Roblox version of Lua with types and other features]). I'm also not saying this is bad, but specialization can be a good or a bad thing, and it is generally harder to know the exact domain of the problems you are trying to solve (the problems you are trying to find in the real world to specialize in) than to make a general-purpose language that can be used in certain contexts more than others. I think we should have even MORE languages, more and more and more of them, because no single one will ever fulfill all the needs of all programmers. This is one of the reasons why I think AIs can hurt the developer environment much more than aid it: they are good at specific things they have tons of material to train on, and their general tendency is not to innovate but to homogenize everything (the wet dream of the "we already have many languages" person).
    5
  68. 5
  69. 5
  70. 5
  71. 5
  72.  @ThePrimeTimeagen  I can also see why it sucks, but at the same time part of me understands why they exist. Fundamentally, asynchronous functions are different from synchronous functions. When you write synchronous code you are writing something that will be processed linearly and directly by the processor: you can trust the memory on the stack, and you can trust that nothing in the program will happen out of your control for that specific context (assuming we're not using threads, of course); there are a number of assumptions you get to make. When a function is async, however, we're dealing with something that is essentially constantly moving around and will need to be paused and resumed: you can't rely on your stack memory (unless it's copied entirely, which incurs other costs; the different solutions to this are what lead to Pin in Rust), you can't count on the consistency of direct execution, you won't know for sure which thread will execute your code (if we're dealing with async in a multithreaded environment like C#), and you won't even know when (since that's the purpose of async in the first place). There are a lot of considerations that need to be made when using it (and I also understand that this is part of the tediousness of writing asynchronous code). That said, I've suffered a lot with function colors; nothing is more annoying than realizing that you want some "lazy" code in a corner and that to do so you need to mark 300 functions above it (hyperbole). In that sense, C# at least manages to partially solve this with the possibility of blocking until you get a result. It wouldn't make a difference in terms of usability if, for example, the entire C# core library were asynchronous, because you can always use .Result and block until the result is available (not the most performant or safest approach, of course, but sometimes it has its purpose of unifying the two worlds); see the sketch after this entry.
    4
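A minimal Rust sketch of the "block until the result is ready" escape hatch described above for C#'s .Result, shown here with the futures crate (an assumption; any executor's block_on would do). The fetch_greeting function is made up for illustration.
```
// Cargo.toml (assumption): futures = "0.3"

// A made-up async ("colored") function.
async fn fetch_greeting(name: &str) -> String {
    // Imagine this awaited a network call before producing the value.
    format!("hello, {name}")
}

fn main() {
    // A synchronous caller bridges the two "colors" by blocking until the
    // future completes, analogous to reading task.Result in C#.
    let greeting = futures::executor::block_on(fetch_greeting("prime"));
    println!("{greeting}");
}
```
As with .Result, this is an escape hatch rather than a default: blocking a thread that the future itself needs in order to make progress can deadlock.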
  73. 4
  74. 4
  75. 4
  76. 4
  77. 4
  78. 4
  79. 4
  80. 4
  81. 4
  82. 4
  83. 4
  84. 4
  85. 4
  86. 4
  87. 4
  88. 3
  89. 3
  90. 3
  91. 3
  92. 3
  93. 3
  94. 3
  95. 3
  96. 3
  97. 3
  98. 3
  99. 3
  100. 3
  101. 3
  102. 3
  103. 3
  104. 3
  105. 3
  106. 3
  107. 3
  108. 3
  109. 3
  110. 3
  111. 3
  112. 3
  113. 3
  114. 3
  115. 3
  116. 3
  117. 3
  118. 3
  119. 3
  120. 3
  121. 3
  122. 3
  123. 2
  124. 2
  125. 2
  126. 2
  127. 2
  128. 2
  129. 2
  130. 2
  131. 2
  132. 2
  133. 2
  134. 2
  135. 2
  136.  @isodoubIet
    > Of course it's a bad thing. It inhibits code reuse
    It really depends on what you call "code reuse"; I'd have to disagree with you on this one unless you show some concrete real-world examples of it.
    > loosens the system modeling as you're now encouraged to report and handle errors even if there's no possibility of such
    This is a sign of bad API design, not a problem with having errors as values. If you return a "maybe error" from a function, then it <may be> an error; it is a clear decision to make.
    > increases coupling between unrelated parts of the code
    Not really; you can always flatten errors or discard them easily in an errors-as-values model (see the sketch after this entry).
    > and can force refactorings of arbitrarily large amounts of code if even one call site is modified.
    Again, this is true for any kind of function coloring, including <type systems> and <checked exceptions> (like Java has). Well-designed code should be resilient to this kind of problem most of the time.
    > You can say "this is a tradeoff I'm willing to make". That is fine. You cannot say this isn't a bad thing.
    I absolutely can say it is not a bad thing. It is not a bad thing. See? I don't think function coloring is necessarily bad, so I would not agree upfront that this is a bad thing. I think being explicit about what code does and the side effects it can trigger is a good thing; an annoying thing sometimes, I'll concede, but I cannot call it a "bad thing" in itself, only the parts that are actually annoying (and the same goes for when you don't have this kind of coloring and it then blows up in your face: that is the "bad thing", not the lack of coloring itself).
    2
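A minimal Rust sketch of "flattening or discarding errors" in an errors-as-values model, as mentioned above; the CacheError type and read_cached_price function are invented for illustration.
```
#[derive(Debug)]
enum CacheError {
    Miss,
    Corrupted,
}

// A fallible lookup whose error the caller may simply not care about.
fn read_cached_price(key: &str) -> Result<u64, CacheError> {
    if key == "apple" { Ok(42) } else { Err(CacheError::Miss) }
}

fn main() {
    // Discard the error entirely: Result<T, E> -> Option<T>.
    let maybe_price: Option<u64> = read_cached_price("banana").ok();

    // Flatten to a default when the error carries no useful information here.
    let price = read_cached_price("banana").unwrap_or(0);

    // Or collapse a detailed error into a coarser one at a module boundary.
    let result: Result<u64, &'static str> =
        read_cached_price("apple").map_err(|_| "cache unavailable");

    println!("{maybe_price:?} {price} {result:?}");
}
```
The point being made in the comment: the caller decides, at each call site, how much of the error structure it actually wants to couple to.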
  137. 2
  138. 2
  139. 2
  140. 2
  141. 2
  142. 2
  143. 2
  144. 2
  145. 2
  146. 2
  147. 2
  148. 2
  149. 2
  150. 2
  151. 2
  152. 2
  153. 2
  154. 2
  155. 2
  156. 2
  157. 2
  158. 2
  159. 2
  160. 2
  161. 2
  162. 2
  163. 2
  164. 2
  165. 2
  166. 2
  167. 2
  168. 2
  169. 2
  170. 2
  171. 2
  172. 2
  173. 2
  174. 2
  175. 2
  176. 2
  177. 2
  178. 2
  179. 2
  180. 2
  181. 2
  182. 2
  183. 2
  184. 2
  185. 2
  186. 2
  187. 2
  188. 2
  189. 2
  190. 2
  191. 1
  192. 1
  193. 1
  194. 1
  195. 1
  196. 1
  197. 1
  198. 1
  199. 1
  200. 1
  201. 1
  202. 1
  203. 1
  204. 1
  205. 1
  206. 1
  207. 1
  208. 1
  209. 1
  210. 1
  211. 1
  212. 1
  213. 1
  214. 1
  215. 1
  216. 1
  217. 1
  218.  @lpprogrammingllc  See how funny this is? You came here, randomly shat on the language by saying it had "broken promises", citing a compiler bug, and you cite Residual Entropy as your source, whose videos I have watched; he seems very understanding about the problem, because he understands the difficulty of what is at hand. Now you are saying I'm engaging in some kind of "rust advocate behavior", as if that had ANY meaning whatsoever (spoiler: it doesn't). You say I'm "assuming bad faith" when "someone doesn't like what I like", and by assuming this you are also assuming bad faith on my part, by thinking I'm doing that instead of having actual reasons to believe what I said (and spoilers again: I have, and in my previous comment I cited some of them). No, this is not a language-level bug, because that doesn't even make sense: the language does not even have a formal specification against which to have "language-level bugs". The bug in question is a product of assumptions they needed to make when implementing the current trait solver, and it is obviously not intended behavior by any means (it is literally caught by Miri, so it should not be an intended thing; <and also>, since the language, as you said, is <promising safety> in <safe code>, charity implies this was never intended to pass as sound code, while it does because the checks were not properly implemented at the compiler level), so it is indeed a <compiler bug>. It is a compiler bug that cannot be easily fixed, because doing so requires revisiting many assumptions in the compiler, because it is a complex bug, but it is still a bug that is being fixed (and already has a fix in the new trait solver), so calling it a "language-level bug" is simply inaccurate. You cited the bug report as proof that it is a language-level bug and, to zero surprise, it implies no such thing anywhere. I'm open to proof that this can be classified as a "language-level bug", but more than that, I'm interested in knowing how this changes anything for anyone interested in the language when the developers are already dedicating their work to fixing it. Yes, the bug reports are still marked as open, because the fix is not yet in the stable language and the new trait solver is not yet stabilized. I also don't know when it will be (though I've read in their roadmap that it is planned to be ready for 2027), but it is being worked on, and as such it is not in good faith to say they have "broken a promise" because such a complex bug exists (a bug with zero records of being found in real codebases so far; a bug that can be caught with Miri, which is something you should <already be using> if you really want to ensure your code has no detectable safety problems) and is <actively being worked on> (i.e. this is not something they "forgot" or "ignored"). As for this: ["Again, this is orthogonal to the real reason I will not use Rust. Which is the complete lack of trust I have in the entire Rust supply chain, because of people acting like you."] You are free to think the bs you want to think, and to say that people responding to your lies are acting like "rust advocates" (when in fact you were lying: you said it was "unlikely to be fixed without serious breaking changes", you had not even read the material available on the problem when you said that, and you proved this in your later comment).
Either way, I'm not a "rust advocate"; my main daily language is not even Rust, it is C#, and I program in many languages. I'm no more a "rust advocate" than I am a "truth advocate", and you are indeed acting in bad faith with your comments; you are being weird and shitting on things (you literally started this with a comment citing something you DON'T understand, pasting only one part of a function that <is not very unsafe> without the other part). This is not the behavior of someone who really wants to have a purposeful discussion about a topic.
    1
  219.  @Bebinson_  It seems to me that you said you weren't going to try to say how the American DMV system should work, and then right after that you did say how the American DMV should work, so your statement seems discursively empty. This type of problem has absolutely nothing to do with whether the government delegated a function to a company; it has to do with the nature of that function and with the very notion of delegation. Delegating means granting: it means you pass the authority over a certain task to a third party. If you read between the lines, that means the government has basically moved this burden from itself to a company, and that fundamentally does not solve the inefficiencies that would be seen in the government itself. It improves the chances that something good will come of it, since companies tend to be more efficient in the way they do things, but as long as it is still a concession it is still an exclusive right to do a certain thing, and that is what government is, and it encourages irresponsible and bad behavior like this. So the solution should not be to "nationalize" this task, but to free it up completely and allow each company to provide its own solution to the problem, and let them compete on their solutions in order to establish a higher degree of quality than the government could (or a worse one, as the case may be). That is the only fair way to establish whether private contractors are really worse or better than the government.
    1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230. 1
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. 1
  240. 1
  241. 1
  242. 1
  243. 1
  244. 1
  245. 1
  246. 1
  247. 1
  248. 1
  249. 1
  250. 1
  251. 1
  252. 1
  253. 1
  254. 1
  255. 1
  256. 1
  257. 1
  258. 1
  259. 1
  260.  @TheManinBlack9054  Was it? I searched for it and this is what I got in the sources: ["phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of "textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens)"] and ["The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts."] ["Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value)."] ["We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered publicly available web data and synthetic data."] None of the phi models were trained on "purely synthetic data", only on mixed training data, so we don't know to what degree they are really affected. We also don't know whether the degradation in the synthetic data produced by today's early foundational models is high enough to make a big difference compared with what it will be in future foundational models and/or upgrades to them, as more and more recursively fed synthetic data becomes part of their training. I'm also not aware of any foundational model trained on purely synthetic data that is publicly available for checking; if you know of any, I would be interested in seeing them.
    1
  261. 1
  262. 1
  263. 1
  264. 1
  265. 1
  266. 1
  267. 1
  268. 1
  269. 1
  270. 1
  271. 1
  272. 1
  273. 1
  274.  @d3stinYwOw  Freedom should extend even to selling asbestos, which doesn't mean that the consequences of its misuse shouldn't be legally penalized. I think even you could agree with me on that. If you're using asbestos in your products and people are unaware of this and of its health risks, you're clearly an aggressor and potentially a vile murderer, and it makes total sense for you to be stopped. Now, if you sell asbestos while making its risks clear, what exactly is the problem? I believe even you should be able to acknowledge that scientific research, for example to make the material safe, depends on the accessibility of this material in some cases, and that restricting the use of something restricts possible innovations that may arise from it, and even restricts its use in preventing its own misuse (for example, encouraging biological organisms to find ways to treat diseases caused by asbestos). Accountability should be tied to real cases and not to society as a whole. Finally, I'm not saying it's "black and white"; I'm saying that this specific issue is a clear problem, one that yes, can potentially bring more safety in the long term, but at the same time can destroy or cause irreversible damage to the field of software development and free software. These are measures made by people who have no idea how the things we build work and who still want to screw everyone over under the excuse of "increasing our security". Just look at measures like the UK's "Online Safety Act" (and similar measures that have been proposed over the years in the US itself) to know that not all the security in the world is worth certain things, even though it's not all "black and white".
    1
  275. 1
  276. 1
  277. 1
  278. 1
  279. 1
  280. 1
  281. 1
  282. 1
  283. 1
  284. 1
  285. 1
  286. 1
  287. 1
  288. 1
  289. 1
  290. 1
  291. 1
  292. 1
  293. 1
  294. 1
  295. 1
  296. 1
  297. 1
  298. 1
  299. 1
  300. 1
  301. 1
  302. 1
  303. 1
  304. 1
  305. 1
  306. 1
  307. 1
  308. 1
  309. 1
  310. 1
  311. 1
  312. 1
  313. 1
  314. 1
  315. 1
  316. 1
  317. 1
  318. 1
  319. 1
  320. 1
  321.  @isodoubIet  My claim is that throwing exceptions is functionally equivalent to a goto. <Functionally> meaning it functions in a similar fashion (i.e. you are jumping from an arbitrary place in your code to another, predefined place; not only does it function that way, exceptions are also implemented that way in C++). You also cannot exactly "goto arbitrary random places in the code" with non-local gotos (in most languages that have them, at least); you need to mark those places first. I specifically used the words "(and can even be used for the same purposes many times)" in my argument, so do you want me to show it?
    First:
```
#include <iostream>
#include <csetjmp>

std::jmp_buf jumpBuffer;

void second_fn() {
    std::cout << "in :: [second_fn]" << std::endl;
    std::longjmp(jumpBuffer, 2);
    std::cout << "will not happen" << std::endl;
}

void first_fn() {
    std::cout << "in :: [first_fn]" << std::endl;
    second_fn();
    std::cout << "will not happen" << std::endl;
}

int main() {
    std::cout << "in :: [main]" << std::endl;
    int ret = setjmp(jumpBuffer);
    if (ret == 0) {
        std::cout << "calling :: [first_fn]" << std::endl;
        first_fn();
    } else if (ret == 2) {
        std::cout << "in :: [main] from -> second_fn with : longjmp" << std::endl;
    }
    std::cout << "end" << std::endl;
    return 0;
}
```
    Second:
```
#include <iostream>

void second_fn() {
    std::cout << "in :: [second_fn]" << std::endl;
    throw "ops";
    std::cout << "will not happen" << std::endl;
}

void first_fn() {
    std::cout << "in :: [first_fn]" << std::endl;
    second_fn();
    std::cout << "will not happen" << std::endl;
}

int main() {
    std::cout << "in :: [main]" << std::endl;
    try {
        std::cout << "calling :: [first_fn]" << std::endl;
        first_fn();
    } catch (const char*) {
        std::cout << "in :: [main] from -> second_fn with : exceptions" << std::endl;
    }
    std::cout << "end" << std::endl;
    return 0;
}
```
    The results:
```
A ->
in :: [main]
calling :: [first_fn]
in :: [first_fn]
in :: [second_fn]
in :: [main] from -> second_fn with : longjmp
end

B ->
in :: [main]
calling :: [first_fn]
in :: [first_fn]
in :: [second_fn]
in :: [main] from -> second_fn with : exceptions
end
```
    1
  322. 1
  323. 1
  324. 1
  325. 1
  326. 1
  327. 1
  328. 1
  329. 1
  330. 1
  331. 1
  332. 1
  333. 1
  334. 1
  335. 1
  336. 1
  337. 1
  338. 1
  339. 1
  340. 1
  341. 1
  342. 1
  343. 1
  344.  @yyny0
    > [Crates like anyhow? Most libraries do not use that crate, and in fact, the recommendation from the anyhow author is to NOT use them for library code. This means that the errors returned from those libraries do NOT have stacktraces, and NO way to recover them.]
    Oh, now I get it. You want <library stack traces>, not <stack traces in general>. Well, first: it is indeed possible to get a stack trace from errors as values, so your statement was wrong. You can move the discussion to the fact that this is more of a problem with libraries, and I would agree, it is a pain in this specific case, but you <cannot say> it <can't be done>; it can. Now, you say this is a problem for <errors as values>, but it is more of a problem for <Rust>. A language crafted around errors as values, with stack traces opt-in at the consumer level of libraries, would literally solve this problem, and then your point would be much less interesting, so it is kind of fragile to criticize errors as values in this regard based on the design decisions of Rust.
    > [We've had several "fun" multi-hour debug sessions because of that.]
    So you are using Rust? What? You just said your service has many exceptions and all that in the other comment; or are you talking about a different project or something like that? Also, can you explain exactly how the lack of stack traces in specific libraries cost you "several 'fun' multi-hour debug sessions"? I have literally never experienced a "multi-hour debug session" because of something like that, and I work on a heck of a lot of projects, so a more concrete explanation would be good for the mutual-understanding part of this discussion.
    > [Also, those crates are opt-in, and even some of our own code does not use anyhow, because it makes error handling an absolute pain compared to plain `enum`s.]
    It makes? What??? It was literally made <to make using errors less painful>; given the purpose of this library, why do you find it <an absolute pain> compared to plain enums? You also don't need to use anyhow: you can easily capture stack traces in your application using std::backtrace::Backtrace::capture or force_capture; the Rust docs even describe it as a pretty easy way of gathering the causal chain that led to an error. You need to implement something like 30 lines of code and use it across your whole application if you want to (see the sketch after this entry).
    1
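A minimal Rust sketch of the application-level approach mentioned above: an error wrapper that captures a std::backtrace::Backtrace when it is created (stable since Rust 1.65). The AppError type and load_settings function are invented for illustration.
```
use std::backtrace::Backtrace;
use std::fmt;

// Application-level error that records where it was created.
#[derive(Debug)]
struct AppError {
    message: String,
    backtrace: Backtrace,
}

impl AppError {
    fn new(message: impl Into<String>) -> Self {
        Self {
            message: message.into(),
            // Captures frames only when RUST_BACKTRACE=1; use force_capture() to always capture.
            backtrace: Backtrace::capture(),
        }
    }
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}\n{}", self.message, self.backtrace)
    }
}

// Wrap a library error (which carries no trace of its own) at the application boundary.
fn load_settings() -> Result<String, AppError> {
    std::fs::read_to_string("settings.toml")
        .map_err(|e| AppError::new(format!("failed to read settings: {e}")))
}

fn main() {
    if let Err(err) = load_settings() {
        eprintln!("{err}");
    }
}
```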
  345. 1
  346. 1
  347. 1
  348.  @yyny0
    > [An error is incorrect by definition, it is NOT a valid representation of your system, in fact, the whole point of returning an error value is to describe invalid program state.]
    What I said: ["a perfectly valid representation of a specific part of a system in a given time"]. An error IS, in fact, a perfectly <valid> (in a formal sense, and sometimes in business rules) <representation> of a <specific part> of a system at a given time. When you log in and type the password wrong, InvalidPassword <IS> a perfectly reasonable and valid <representation> of state in your system (as it should be: it should not panic your software, just show a simple message so the user can type the correct password); see the sketch after this entry. When you call a method to write to a file and the file does not exist, receiving a monad where the error is represented as a possibility <IS> indeed a perfectly valid way of representing your program state. I just don't know why you are saying this. An error could be defined as an "undesired state given a specific goal intended in a closed system", but not necessarily as an "invalid state" if it is conceived as part of the representation of your possible states. Period. Moving on.
    > [As for statistics: our production product has ~20k `throw`s across all our (vendored) dependencies (of which ~2k in our own code), and only 130 places where we catch them. Most of those places also immediately retry or restart the entire task. (...) Additionally, 99.9998% of those tasks in the last hour did not return an error, so even if the cost of throwing a single error was 1000x the cost of running an entire task (which it is not), it would still be irrelevant.]
    You said "it should never happen", and then you showed me your personal data and said it happens 20k times; I would say that is a damn high number for something that "should never happen". Also, you did not give me any timeframe, nor a comparison between different services; giving me one personal example is just anecdotal evidence. How exactly am I supposed to work with this and conclude there are no statistically representative systems that <do> take a hit from throwing exceptions for everything? This is a pretty specific response.
    > [I would consider that "grinding to a halt".]
    You consider restarting a program immediately after an error the same as "grinding to a halt"?
    1
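A minimal Rust sketch of the "InvalidPassword as valid state" point above: the failed login is modeled as an ordinary value the caller matches on, not as a panic or an unwind. The LoginError type and login function are invented for illustration.
```
// Possible outcomes of a login attempt, modeled as plain data.
#[derive(Debug)]
enum LoginError {
    InvalidPassword,
    UserNotFound,
}

struct Session {
    user: String,
}

// Hypothetical login routine: the error is just another value it can return.
fn login(user: &str, password: &str) -> Result<Session, LoginError> {
    match user {
        "alice" if password == "hunter2" => Ok(Session { user: user.to_string() }),
        "alice" => Err(LoginError::InvalidPassword),
        _ => Err(LoginError::UserNotFound),
    }
}

fn main() {
    // The "error" state is handled like any other state of the program.
    match login("alice", "wrong") {
        Ok(session) => println!("welcome, {}", session.user),
        Err(LoginError::InvalidPassword) => println!("wrong password, try again"),
        Err(LoginError::UserNotFound) => println!("unknown user"),
    }
}
```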
  349. 1
  350. 1
  351. 1
  352. 1
  353. 1
  354. 1
  355. 1
  356. 1
  357. 1
  358. 1
  359. 1
  360. 1
  361. 1
  362. 1
  363. 1
  364. 1
  365. 1
  366. 1
  367. 1
  368. 1
  369. 1
  370. 1
  371. 1