Comments by "" (@diadetediotedio6918) on "Next Phase For Rust In The Linux Kernel" video.

  3.  @MrjinZin0902  Yes, C++ has duct tape over its problems; the real question is whether that duct tape has historically solved the security problem in software (spoiler: no). If I take 10 random C++ repositories on GitHub, then judging from the tests I have done, a significant portion of them will still use raw pointers and manual memory management. Retrofitting safety means severely modifying something that is already established, adding a whole new way of thinking to the language. Consider it from the language's perspective: the amount of educational material built on top of it that doesn't use these features, the number of tutorials and examples, and the additional learning difficulty a concept like this adds. If the feature were really effective, we wouldn't be considering something like Rust in 2022.
Rust, on the other hand, doesn't depend on that additional learning: all of its material is built on the notion of safety, everything you learn takes that fundamental fact into account, and if you make a silly mistake the compiler will yell at you and tell you why you are wrong. When I learned to program in C++ many, many years ago, unique_ptr already existed, yet I never met a single person who recommended using it. A few years later I went back to finish learning the language for some uses, and I still didn't find anyone recommending these techniques in the countless tutorials I dug through. Rust is also about much more than memory safety: it is about concurrency safety, which comes by default. It is incredibly harder to write code with concurrency issues in Rust than in C++, and that is another one of its selling points.
There's a lot more to consider than a language's features. You also need to think about the human structure behind it, and about the code structure it will compel you toward: Rust makes code immutable by default, compels you to manage references in a way that is safe to share, and compels you not to leak memory from the first moment you use the language. That cannot be overlooked.
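A minimal sketch of the defaults the comment above describes (toy code of my own, not from the video): bindings are immutable unless you write `mut`, and sharing mutable state across threads has to go through types like `Arc<Mutex<T>>`, because handing a bare mutable reference to two threads simply does not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each add 1 to a shared counter. The counter
// lives behind Arc (shared ownership) + Mutex (serialized mutation);
// the compiler rejects any attempt to share a plain `&mut i32` instead.
fn parallel_count(n: usize) -> i32 {
    let total = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..n {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            *total.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    // Immutable by default: without `mut`, the `+=` below is a compile error.
    let mut counter = 0;
    counter += 1;
    println!("counter = {}", counter);
    println!("threads = {}", parallel_count(4));
}
```

The point is not that the locking is clever; it is that the unsafe alternatives are unrepresentable, which is what "concurrency safety by default" means in practice.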
  11.  @dave7244  Well, let's go; I'll take this a little more seriously for the sake of discussion.
1. A hello-world program doesn't generate 12 MB. I literally created one just to check whether that was true: in total we have 5 MB, including configuration files, debug symbols, and the git repository that is created by default. The final release-mode binary is around 125 KB.
2. This is a pretty ugly thing to assume. If the fastest and most efficient programs are the ones compiled with -O3, why isn't that the default? Because there are trade-offs, trade-offs that need to be carefully balanced by the programmer when creating a build. I've done C builds that quietly took up over 300 KB with very little code; does that mean there's an inherent problem with the language? Rust has optimization levels and balances of its own. When you use the standard library you are using very carefully written code that makes heavy use of language features, and that code is statically linked into your binary at compile time; this increases runtime performance at the cost of a little more space. Do you know what else increases performance (in its respective cases) at a cost of space? Inlining. Rust inlines a lot, and it also does plenty of vectorization and loop unrolling where possible; as expected, that results in bigger binaries. Do bigger binaries mean worse code in this case? Hardly anyone serious would say so: it is a trade-off of space for time, something quite simple for a developer to understand. Rust also uses LLVM, and I've heard that LLVM tends to produce bigger binaries; for better or worse that needs to be considered, and once the new GCC-based compiler is stable we'll get an idea of whether this is also a factor.
3. Finally, if size is all you care about, the binaries can be made smaller. Using some profiling tools to eliminate static linking against the stdlib, you can get your hello world down to 99 KB, a 20% improvement compared to common builds. But the question should be: do I really need to do this? Is it so absolutely important to save 150 KB in a world where 1 GB of hard drive can cost much less than 0.01 cent of a dollar? That sounds like a pretty hollow critique when we put it into perspective. If you mean using this for embedded, one of the language's target areas, it would be a valid thing to say... if it were actually true. Just as with C or C++, if you forgo the conveniences of the standard library you can create very small binaries for your embedded targets, so a hello-world binary can come out close to a mere 9 KB.
That said, I still expect to see positive changes from the switch to the GNU compiler, and time should bring even more exciting things with it.
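The size levers the comment above alludes to can be sketched with standard Cargo release-profile settings; the keys below are real Cargo options, but the exact savings vary by platform and toolchain, so treat the numbers in the comment as one data point rather than a guarantee.

```toml
# Cargo.toml — a size-focused release profile (savings vary by target).
[profile.release]
opt-level = "z"   # optimize for size instead of speed
lto = true        # link-time optimization drops unused code across crates
codegen-units = 1 # better whole-program optimization, slower compile
strip = true      # strip symbols and debug info from the final binary
panic = "abort"   # omit the unwinding machinery
```

Going further than this (e.g. dropping the standard library entirely with `#![no_std]`, as embedded targets do) is what gets binaries into the single-digit-kilobyte range mentioned above.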