Comments by "MrAbrazildo" (@MrAbrazildo) on the "Low Level" channel.

  8. 5:53, I tend not to #include in advance. I wait for a compiler error, or for actually using something from the library. This way I avoid unnecessary includes.
     6:00, Const Correctness Principle: write 'const' first, and only think about it after a compile error. Thinking is a waste of energy, avoid doing it. And whenever I write { } for a f(), I write its 'return' right away, so I avoid the missing-return UB bug. In modern C++, a more consistent defensive method for that is to declare the return type as 'auto'.
     6:26, A << B is the highest-level thing possible: B is being thrown into A.
     7:26, for a 1st approach, I think assert is better here: 1 line/command. I know it doesn't close files, but this is the 1st one, and it isn't open yet.
     7:33, you are throwing away 1 of the best features of C++: getting rid of things automatically.
     7:35, for apps, I use namespace std, to be more productive.
     10:30, const auto ptr = std::find_if (new_begin, line.cend(), ::isdigit); if (ptr == line.cend()) break; // There's no digit to the right of where the search started. leftmost = *ptr - '0'; // '0' keeps portability. Don't use a raw number: not worth memorizing. new_begin = ptr + 1;
     10:59, atoi works with char *, not char.
     16:35, std::map does this. I tend to implement it as 2 std::arrays.
     16:42, this could have been just: if (gTable[i].str == slice). If str and num were 2 std::arrays, the whole f() could be: const auto ptr = std::find (str.cbegin(), str.cend(), slice); if (ptr == str.cend()) return -1; return num[std::distance (str.cbegin(), ptr)]; (both this and the 10:30 snippet are reconstructed in the sketch after this comment).
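     A minimal compilable sketch of the 10:30 and 16:42 snippets above. The names line, gStrings and gNumbers are invented for illustration (gStrings/gNumbers stand in for the video's gTable, whose real contents aren't shown), and a lambda wraps std::isdigit so negative char values are safe:

        #include <algorithm>
        #include <array>
        #include <cctype>
        #include <iterator>
        #include <string_view>

        // 10:30 - find the leftmost digit in (the rest of) a line.
        int leftmost_digit(std::string_view line)
        {
            const auto ptr = std::find_if(line.cbegin(), line.cend(),
                                          [](unsigned char c) { return std::isdigit(c) != 0; });
            if (ptr == line.cend())
                return -1;     // no digit to the right of where the search started
            return *ptr - '0'; // '0' keeps it portable across character sets
        }

        // 16:42 - lookup via two parallel std::arrays instead of a struct table.
        constexpr std::array<std::string_view, 3> gStrings{"one", "two", "three"};
        constexpr std::array<int, 3>              gNumbers{1, 2, 3};

        int lookup(std::string_view slice)
        {
            const auto ptr = std::find(gStrings.cbegin(), gStrings.cend(), slice);
            if (ptr == gStrings.cend())
                return -1;     // not in the table
            return gNumbers[std::distance(gStrings.cbegin(), ptr)];
        }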
  11. 0:02, pretty? A signed var is faster.
      0:10, in C++ it's possible to make a class that always implicitly checks whether the pointer is nullptr.
      1:17, I would not use linked lists either: they are rarely faster, can't be used with STL algorithms, and force you to write raw loops. For instance, if 'e' were a std::vector, this whole f() could be dismissed: const auto Result = [](const std::vector<int> &e, const int search) { const auto It = std::find (e.cbegin(), e.cend(), search); return It == e.cend() ? -1 : *It; } (e, search); (-1 as a not-found sentinel; a cleaner version is sketched after this comment).
      1:24, C++ containers generally have iterators delimiting begin and end, putting the programmer in a range loop by default. For instance, std::forward_list implements a singly linked list and has begin/end f()s that give those iterators.
      1:55, or just use std::vector, which frees the memory once its object no longer exists. It doesn't have use-after-free protection, but it's possible to wrap it in a class that checks that automatically.
      3:20, but if that variable must travel alongside f()s as read-only, and be changed only at the Nth f() called, C can't protect it. C++ has the solution: hide it in a class, and make that f() a friend of it, so that only it is allowed to change the variable.
      4:01, C++ has the attribute [[nodiscard]], which raises a warning (an error with -Werror) if the return value is not handled.
      5:40, I always use -pedantic, because it has good rules. But -Werror forbids me from running the app. I always end up cleaning all the warnings I turned on, but I don't always want to do it right away, which may be less productive. Same thing for implicit conversions. So I don't turn on all warnings.
      So we can see that using C++ is a big improvement for defensive code, at least over C.
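      A cleaner sketch of the 1:17 lambda: the original returned 'null', which doesn't exist in C++; std::optional expresses "not found" without a sentinel, and [[nodiscard]] (4:01) makes ignoring the result a warning. The name find_value is invented:

         #include <algorithm>
         #include <optional>
         #include <vector>

         // Search a contiguous std::vector instead of a hand-rolled linked list.
         [[nodiscard]] std::optional<int> find_value(const std::vector<int>& e, int search)
         {
             const auto it = std::find(e.cbegin(), e.cend(), search);
             if (it == e.cend())
                 return std::nullopt;   // not found: no sentinel value needed
             return *it;
         }

         // Usage: if (const auto v = find_value(values, 42)) { /* use *v */ }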
  13. 0:01, when I code, it's always gorgeous! :elbowcough:
      0:08, I disagree. Modern C++ features are higher-level, easier to use and less prone to errors. It's becoming harder to get things wrong with it! The price is a higher chance of writing slow code.
      0:55, TABs are better: they use 1 char and can be configured the same way for the whole team. I use TABs with size == 2.
      1:05, neither draconian nor pedantic: the team must have the same rule about it, because it's an annoyance to see messy indentation. Once the team agrees on its size, use TABs.
      1:20, are you saying that even if TAB has the same size, it may still differ from 1 code editor to another?!
      2:37, I can agree that, once the f() is done, a typedef is the better return type. However, during development, auto is much better:
       - More productive: you don't have to keep changing the type.
       - Defensive: if you forget the return, the type is deduced as void, raising a compile error if the result is later assigned to some variable.
      5:25, I don't tend to have this "diamond problem". Mother and Father are not the same person, so each of them should inherit its own exclusive Person object. Child is yet another person, not tied to the parents, so it should inherit Person as well. I tend to inherit classes when the derived one needs the power to change data in its bases; except for Person, that's not the case here. In real life, a child can change its parents' behavior through communication, not mental/physical interoperability. So Child should not inherit from the parents. Instead, its constructors should receive the parent objects as 'const &', read-only stuff. Just as in real life: a child receives read-only genetic material from its parents, and is destined to become an independent person. (See the sketch after this comment.)
      6:30, an interface is too slow and only a bit higher level than a normal class. I don't use it; it's not worth it. Plus, for those who do use it, a better approach than an abstract class is to make a macro out of it. Then, by composition, call the macro inside the "derived" class, making it a normal class, 15-17x faster!
      6:38, if a class has this limitation, I just make its constructors protected, forcing inheritance.
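      A sketch of the 5:25 and 6:38 ideas, with invented names (Person, Child, Base, Derived): Child still inherits Person, but only reads its parents through 'const &' at construction time; and a protected constructor is what forces a type to be used only as a base:

         #include <string>
         #include <utility>

         class Person {
         public:
             explicit Person(std::string name) : name_{std::move(name)} {}
             const std::string& name() const { return name_; }
         private:
             std::string name_;
         };

         // 5:25 - Child is a Person, but composes data read from its parents
         // instead of inheriting from them.
         class Child : public Person {
         public:
             Child(std::string name, const Person& mother, const Person& father)
                 : Person{std::move(name)},
                   motherName_{mother.name()},   // read-only "genetic material"
                   fatherName_{father.name()} {}
         private:
             std::string motherName_;
             std::string fatherName_;
         };

         // 6:38 - protected constructors force inheritance: Base cannot be
         // created directly, only as a subobject of a derived class.
         class Base {
         protected:
             Base() = default;
         };
         class Derived : public Base {};   // Derived d; compiles, Base b; does not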