Comments by "Anony Mousse" (@anon_y_mousse) on the "Theo - t3.gg" channel.

  24. The only good thing about it is the concurrency, because it uses BEAM. It's essentially a less effective Rust. I wonder how long he'd been working on it, because I'm still not ready to release my language, but mine will have function overloading and allow whatever Unicode characters you want.

      I also still take issue with the fake inference that modern languages have. `let` as a keyword to define an object is just dumb. It's basically a weaker version of the `auto` keyword in C++, because now you're asking for permission. In my language you infer by just writing `foo := bar;`. The only reason for the `:` notation when setting a value is to tell the compiler to create something there. It's a way of catching mistakes, so you're less likely to write `foo = bar;` and later `baz = foO;` and get a value or error you don't expect. It also works as a conditional guard, so you don't accidentally assign inside a conditional.

      I also take issue with the import systems of every modern language, even C++. If I'm importing a library I expect it to be immediately accessible. So `import io;` makes everything in the io subsection of the library accessible, but you can also do `import io as io;` to keep it in its own namespace, or `import io as fluffernutter;` if you want. I took the same approach of inferring meaning wherever the context tells me what you're referring to, such as `enum Color { Red, Green, Blue }; Color c = Green;` instead of requiring `Color:Green` to explicitly name the namespace, though you can still be explicit if you want. I did the same for `switch`: `switch c { Red: foo(); break; Green { bar(); baz(); } Blue { print( "Green doesn't fall through because it enclosed the case in braces.\n Red does fall through because it designates a section.\n" ); } }`.

      I also avoided the Java-style keyword duplication by allowing a modifier to enclose a block or start a section. Say you want to make a group of functions public: just write `public:` and everything until the next section is public, and you can stack them. I still don't understand why everyone wants to copy Java and JavaScript for function syntax. There's no need for a keyword; just use a regular pattern.
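For contrast with the enum scoping and switch fall-through described in that comment, here is a minimal C sketch (the identifiers are illustrative only): C's unscoped enums already place `Red`, `Green`, and `Blue` in the enclosing scope, and C's `switch` falls through by default unless a `break` stops it.

```c
#include <stdio.h>

/* Unscoped enum: Red, Green and Blue land in the enclosing scope,
 * so no qualifier is needed, much like the comment describes. */
enum Color { Red, Green, Blue };

int main(void) {
    enum Color c = Green;      /* no Color:: prefix required in C */

    switch (c) {
    case Red:
        puts("Red");
        /* falls through: C cases continue unless we break */
    case Green:
        puts("Green");
        break;                 /* explicit break stops fall-through */
    case Blue:
        puts("Blue");
        break;
    }
    return 0;
}
```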
  48. I think some people missed your overall point, which is a very valid one that applies no matter what language a project was written in or what language they want to move it to. From my own perspective on this constant RIIR mentality, I've found almost universally that when rewrites happen, features are lost. Even if the program runs faster, if it does less, then as a user of that program I don't care that it's now viewed as a "safer" program.

      Of course, as anyone who knows more than one systems language can tell you, there's more than one way to skin a cat, and Rust isn't any safer than other languages, even C. There are plenty of tools for analyzing your code and keeping your program error-free that are far better than the Rust compiler and that target languages like C. You can write high-quality code in nearly any language; it's just a matter of skill. One can claim all they want that "Rust forces you to do things a certain way", but if you already write your code that way you shouldn't have problems with C, and if you don't already write your code that way then you were using C wrong anyhow. If you didn't already know how to write code correctly, then you probably shouldn't have been a programmer in the first place, or you should just use JavaScript.

      As for the comments on async, I have always believed it's the wrong solution for the problem at hand. It's an attempt to merge the concept of a coroutine with threads and make them take less effort than threads, but it's still just threads with less control over what happens. True coroutines don't require separate threads, but the concept has been bastardized by everyone who keeps bolting async garbage onto languages. I see them as unnecessary work, because it's better overall to just use the OOP method of saving state in memory somewhere, maybe even on the stack, and then calling a function to update that state, which is more or less the generator concept. Consider that C already has this in its standard library with `FILE` streams, where you can read some input or write some output and it's buffered. If you really want to read input in the background, just use a separate thread, which even C now has in its standard library.

      Although, I doubt anyone will read all of this, and most likely someone will see this last line and comment that they read the whole post when they didn't.
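A minimal C sketch of the "state saved in memory plus a function that updates it" generator idea the comment describes; the `counter` struct and its function are made-up names for illustration.

```c
#include <stdio.h>

/* A generator in the sense the comment means: state lives in a struct
 * (here on the stack) and a plain function advances it on each call.
 * No async machinery, no extra thread. */
struct counter {
    int next;
    int step;
};

static int counter_next(struct counter *c) {
    int value = c->next;
    c->next += c->step;
    return value;
}

int main(void) {
    struct counter evens = { 0, 2 };   /* state saved on the stack */

    for (int i = 0; i < 5; i++)
        printf("%d\n", counter_next(&evens));  /* prints 0 2 4 6 8 */

    /* Buffered stdio works on the same principle: each fgetc() call
     * just advances state the FILE object keeps between calls. */
    return 0;
}
```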
  69. Many years ago, before GH was a thing, SourceForge had far fewer ads. It wasn't until after GH rose up to replace it that SF started having far too many. Also, what the fuck is he talking about by saying that "open source" was coined in 1998? I don't know how old he is, but it was a term when I was a youngster back in the 80s. I had subscriptions to several magazines that printed source code, and GNU has existed since at least the 80s too. When I was in high school playing with Linux there were plenty of open source programs. Maybe not the millions there are today, but thousands at the time is no chump change. Quite a few CVS servers had anonymous access, especially for open source projects; you rarely had to log in or use guest credentials, and I tended not to bother with projects that had such annoyances.

      Maybe I'm in the minority when it comes to reading changelogs, because I look at that information for nearly every repo I play with. The `blame` feature of `git` is one of the most fun features I use on a regular basis, too. I will admit that I like GH being user-centric. Quite a few times I've pulled a user's entire set of repos just because they made it so easy to do. I've got a `bash` script that I can give a user name, and it downloads their entire list of repos and yields a flat list of repo URLs. Then I can either automate downloading all of them one after another, or use `fzf` to select specific repos.

      One final note regarding Mercurial: Firefox uses it. It's one of the more annoying aspects of building Firefox, because there's a whole complicated process to getting the code. They should have just moved to Git at some point.
  96. As someone who probably spends too much time learning every new language that comes along, I can safely say that both Go and Rust are a mistake. However, if it was already working in Go, you're absolutely correct that it should've just stayed in Go. Rewriting a project in an entirely new language just because you like it is the easiest way to completely fuck up a project.

      I would argue that any new project these days should probably be written in a domain-specific language. If it's intended to run in a web browser, the only real choices are JavaScript and TypeScript, and since TypeScript just compiles to JavaScript and is fully backwards compatible, it's not exactly different enough yet. If it's a backend application, then the only real choice is C++, because it's a far better language than Go and Rust combined, runs fast, and compiles fast. Every time I use Rust I feel annoyed, because it's basically a clone of C++ but with a shittier syntax and slower compile times since it aims to be more strict. Well, C++ compilers have gotten better in nearly every way, and can enforce a lot of strictness on you while still compiling faster.

      Also, the part about not being able to interface between code generated with MinGW and MSVC really hits hard. When interfacing different languages you have to contend with name mangling, and a truly portable interface should be written for C compatibility, not just because you might use C to interact with it, but because C doesn't mangle names.
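A minimal sketch of the C-compatible interface that last point argues for: a hypothetical header `mylib.h` whose declarations are plain C, wrapped in an `extern "C"` guard so C++ consumers link against unmangled names. The library and function names are made up.

```c
/* mylib.h -- hypothetical C-compatible interface.
 * Plain C declarations mean no name mangling, so any language
 * with a C FFI can link against the library. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {           /* keep C linkage when included from C++ */
#endif

int mylib_add(int a, int b);   /* illustrative function */

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */
```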
  120. I'm pretty sure you don't read these comments, but I'll say it anyway. First, I don't know why you're reacting to a video that's just over a year old. Second, he's absolutely correct that JavaScript ruined the web, but not for most of the reasons cited. Yes, there are bloated frameworks, and that's one of many problems, but the fact that many of the standards committees assumed JavaScript would fix the deficiencies in HTML and CSS, and thus never amended them to make them better, is a huge problem. The fact that you can't just use a standard img tag and get good performance, and instead have to use an alternate solution, is a huge part of the problem too. JavaScript is garbage, and TypeScript, which merely transpiles to it, is also garbage. Neither language is any good at this point in time, and both should probably just be wiped off the face of the Earth, along with HTML and CSS.

       Now, for the statement on Linux: it's an operating system that requires more intelligence than your average JS developer has, so if you can't merely upgrade to a new OS version, that's entirely on you. I've had no problems upgrading my OS across many versions throughout the past 25+ years of using Linux, and yes, I mean without a full reinstall. That's not to say that you aren't allowed to do it that way if you want, or that as a new user you won't ever make such a mistake, but at this late stage in the game, unless you're doing something really funky, you shouldn't have problems.

       I also want to specifically hit on the argument for Electron. I completely disagree regarding dev time and effort when it comes to Electron versus desktop. If you're a more complete developer, you should have at least a modicum of experience in making a desktop app, and there are numerous cross-platform UI libraries out of which it should be easy to build one. Hell, you could even use the Win32 API and just run WINE on Linux, and we all know how many tools there are to generate code for Windows UIs. Wasting CPU cycles on Electron instead of building a proper desktop app because you're lazy is a huge problem in the industry and far too prevalent an attitude.

       I figured I'd add this in since no one will read it anyway, but the current version of Rust I have installed is 1.77, which I downloaded by cloning the repo and building. The entire folder takes up 19GB of disk space. Previous versions, before I deleted them in full and started from scratch because their build system is garbage, had gotten as high as 32GB. Every time I update it I wipe it out and rebuild from scratch, which is fine because the binaries are in my home folder, so I can still build the latest version using the prior version, but a from-scratch build always takes a full hour. So to anyone thinking of adding the Rust runtime and build tools as a requirement for a project: that is insane. It's a garbage language, like JavaScript, with a garbage build system, like JavaScript, and no one should use either language.
  125. You probably won't read this, YouTube probably won't send a notification anyway, and the post will likely be shadowed, but I'll tell you why I dislike this entire take. First of all, every argument for Rust is only valid if you're a new programmer and/or a bad programmer. The disingenuous argument that paper makes about "bug finding tools" applies even more strongly to Rust, because the compiler doesn't catch nearly as many bugs as the static analysis tools for C, which are far more mature and feature-complete, while dynamic analysis tools for Rust mostly don't exist and would need to be written from scratch.

       The majority of dynamic analysis in Rust consists of bounds checking, which slows the code down. You might say, "but you can turn it off", and then it's no better than C dynamically, and there are C compilers which can add bounds checking anyway. But here's the kicker: that's a stupid feature anyway, because the real problem with most code is the lack of proper checks on user input. You can easily work with strings in C if you properly check user input, and using only numbers won't fix improper checks. Rust won't fix bad code, nor will it make a good programmer better; at best it'll force bad programmers to clean up their act.

       I know I'm yelling at clouds with this, because the government is run by morons and it's going to happen whether I want my tax dollars wasted on it or not, and it won't matter in another decade or two when humanity destroys itself, but it still irks me.
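A minimal C sketch of the kind of user-input checking that comment relies on when it says strings are fine in C: read into a bounded buffer, reject over-long input, and validate before converting. The buffer size and error messages are arbitrary choices for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char line[32];

    /* Read at most sizeof line - 1 characters; never overflow. */
    if (fgets(line, sizeof line, stdin) == NULL) {
        fputs("no input\n", stderr);
        return 1;
    }
    /* No newline means the input did not fit in the buffer. */
    if (strchr(line, '\n') == NULL) {
        fputs("input too long\n", stderr);
        return 1;
    }
    line[strcspn(line, "\n")] = '\0';   /* strip the newline */

    /* Validate before trusting the value. */
    char *end;
    long value = strtol(line, &end, 10);
    if (end == line || *end != '\0') {
        fputs("not a number\n", stderr);
        return 1;
    }
    printf("parsed %ld\n", value);
    return 0;
}
```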