Youtube comments of MrAbrazildo (@MrAbrazildo).

  1. 18
  2. 15
  3. 11
  4. 7:08, on old hardware, the engine's instructions/data didn't fit entirely in the cache. So, depending on how many instructions an action took, the CPU had to fetch from RAM, which tends to be ~100x slower (maybe less in a console). On modern hardware, all instructions/data of an old game fit in the cache, which has much more memory than they require. However, RAM is still used even nowadays, for multimedia stuff: images, video, audio, textures and anything else larger than 64 KB. The optimization for these large things is to load part of the RAM into VRAM (the GPU's own memory) at a moment the user doesn't care about, like a loading scene - i.e. God of War's Kratos squeezing through some rocks. Sometimes this is used for loading from files into RAM too. 11:58, but he is doing it for modern hardware, isn't he? The video's goal is just to explain why Quake's alg. is not meant for all cases. 13:00, the sad truth is that these pointer transformations are UB (undefined behaviour). That's why the guy commented it as "evil": he just wanted to get his job done, leaving the comment for the future masochist who would deal with the potential nasty bug. UB means the operation is not standardized. So the app may someday start crashing or giving wrong values (out of nowhere!) if anything changes from the original setup: hardware, OS, any imaginable protocol that interacts with the game. Not even old C had a defined behaviour for that, as far as I've heard. 13:52, in math, a negative exponent means the number is divided. So x*0.5 == x / 2 == x*2^(-1). Instead of multiplying the whole number, it's possible to change its exponent by addition or subtraction, which are faster operations (sketch below).
    11
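     A minimal sketch of the exponent trick from 13:52, and of how the "evil" pointer cast can be done without UB in modern C++ (std::bit_cast, C++20); the function name is mine, not from the video, and it only handles normal, positive floats:

      #include <bit>        // std::bit_cast (C++20)
      #include <cstdint>
      #include <cstdio>

      // Halve a float by subtracting 1 from its biased exponent field,
      // instead of multiplying by 0.5f.
      float half_by_exponent (float x)
      {
          std::uint32_t bits = std::bit_cast <std::uint32_t> (x);  // no UB, unlike *(long *) &x
          bits -= 1u << 23;                                        // exponent field starts at bit 23
          return std::bit_cast <float> (bits);
      }

      int main ()
      {
          std::printf ("%f %f\n", half_by_exponent (8.0f), 8.0f * 0.5f);  // both print 4.0
      }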
  5. 10
  6. 10
  7. 9
  8. 9
  9. 8
  10. 8
  11. 8
  12. 7
  13. 7
  14. 7
  15. 6
  16. 6
  17. 6
  18. 6
  19. 5
  20. 5
  21. 5
  22. 4
  23. 4
  24. 4
  25. 4
  26. 4
  27. 4
  28. 4
  29. 4
  30. 4
  31. 4
  32. 4
  33. 3
  34. 3
  35. 3
  36. 3
  37. 3
  38. 3
  39. 3
  40. 3
  41. 3
  42. 3
  43. 3
  44. 3
  45. 3
  46. 3
  47. 3
  48. 3
  49. 2
  50. 2
  51. 2
  52. 2
  53. 4:20, I can't even remember if I ever had a serious bug using pointers. Here go my tips, for everyone who tends to have problems with that, to get rid of this issue once and for all:
     - If you have to allocate memory, don't do it directly: use containers from the standard library, the STL. They keep their size hidden from you, and manage it automatically.
     - When traversing those containers, use their own iterators (OO pointers). The member f()s 'begin' and 'end' provide them for you. Just keep a model like this: for_each (container.begin() + offset, container.end() - premature_end, some_algorithm); with offset <= premature_end, and both >= 0. If you just want to run all the way (the default is a copy, but you can take a reference, with &): for (auto &a_var_not_ptr : container) some_algorithm (a_var_not_ptr); In neither of these cases do you deal with pointers directly.
     - Reallocations may invalidate previous iterators: std::vector <Type> container; auto it = container.cbegin() + K; container.push_back (M); //May invalidate. auto x = *it; //May crash. There are 2 main solutions for this (sketch below): a) Just "refresh" it after the push_back: it = container.cbegin() + K; // ≃ F5 in a webpage. auto x = *it; //Guaranteed to work. b) Recommended: reserve memory right after container creation: //Chandler Carruth: "We allocate a page each time. This is astonishingly fast!" container.reserve (1024); container.push_back (M); //Just adds M, no reallocation. auto x = *it; //Ok. There won't be a new reallocation as long as container.size() <= container.capacity(). This is much faster and generates much shorter bytecode.
     - If you need to allocate directly, use smart pointers instead, to do it for you behind the scenes - and to free the memory as well.
     - If, for any reason, you need a C-like pointer (raw pointer), wrap it inside a class, together with a variable for its size, both hidden (no Java setter methods!), and a destructor, to automatically free the memory of a specific object once that object ceases to exist.
     - In this case, you will have to write a copy constructor for it, to avoid "memory stealing" from an object that had its content copied and then its destructor activated. If you already wrote that class, and are in a hurry to use it before writing this constructor, you can still be safe by temporarily deleting its assignment operators (each_of_their_declarations = delete): if any copy is made, it will raise a compile-time error.
     - I read in the GCC (compiler) documentation that the OS may not provide a pointer aiming before the container. So, if you intend to use reverse iterators, keep that in mind.
     - Just like an index, a pointer keeps its step as large as the size of the type it points to. I once had code running on Windows and Linux, both 32 bits, using 24-bit (+ 8-bit alpha) BMP images. They were read by pointers of type 'long', the max/platform size, to stay portable for the future, automatically growing with pixel size and OS. When I migrated to Linux 64 bits, it started to get unstable on Linux only. It took me a couple of minutes to figure it out: the pointer step had become twice the intended size. Easy.
     - Let's say a f() received a pointer to an object of class 'animal', and it also accesses the 'dog' class supposedly in it, a class that inherits from animal. But this may be just a pointer to 'animal', not dog + animal. To make this downcast, use dynamic_cast: it makes a runtime check, to see if there's "ground for the pointer to land on".
     - Above all things, watch out for undefined behaviour.
     1 of the many tricks C++ uses to get faster is to not tie the compiler to the order it must execute things. There's operator precedence but, as long as that is respected, any order is accepted. So don't do messy things like this: pointer[index++] = ++index*5; Sure, it will execute [], the multiplication and =, those 3 in this order. But which index will it use 1st? You must get used to taking a special look at overly compacted commands. Instead of that, unroll the command in the order you are thinking (read-only instructions are 100% safe, even for multithread): ++index; pointer[index] = index*5; index++;
     =======
     This is about the whole universe of a pointer, all the tricks it might play on you. If you keep yourself lucid about these topics each time you deal with pointers, they will never be more-than-minutes-to-solve bugs for you. PS: a pointer is much faster than an index, because it memorizes "where it is", while an index goes all the way from the beginning each time it is used.
    2
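     A runnable sketch of the reallocation/invalidation point above, with my own values; it shows option b), the up-front reserve:

      #include <cstdio>
      #include <vector>

      int main ()
      {
          std::vector <int> container;
          container.reserve (1024);          // b) reserve right after creation
          container.push_back (10);
          container.push_back (20);

          auto it = container.cbegin() + 1;  // iterator to the 2nd element
          container.push_back (30);          // capacity is enough: no reallocation, 'it' stays valid
          std::printf ("%d\n", *it);         // prints 20

          // Without the reserve, that push_back could reallocate and 'it' would dangle;
          // the fix would then be to "refresh" it: it = container.cbegin() + 1;
      }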
  54. 2
  55. 2
  56. 2
  57. 2
  58. 2
  59. 2
  60. 2
  61. 2
  62. 2
  63. 2
  64. 2
  65. 1:07, by that do you mean "data racing" (more than 1 thread writing the same data at the same time)? This has been easily solved since C++11, with the STL <atomic> library. The remaining issue is "false sharing": when you have different threads changing different memory from the same cache line. So when 1 writes to its portion, it "freezes" the entire cache line, not allowing the other thread to write during that brief moment. This is a performance issue, not a bug. It's still solved by hand, by aligning the data so that each thread gets its own cache line (sketch below). 1:24, what exactly does Rust solve here? Those pointers are meant to acquire an opened resource, freeing it later automatically. A common C++ skill issue here is to use those pointers for data that could easily fit in the caches. Since people are used to calling 'new' in other languages, in C++ it'll put that memory far away, in RAM or an even worse place, becoming at least 100x slower, unless the compiler saves the junior dev. Why did C++ make life harder on that? Because it actually made life easier: it assumes 1 wants the data in cache, thus by default it dismisses us from even having to use 'new'. 1:55, I don't know about unique_ptr. But what I know and have seen, more than 1x, is that the compiler is smart enough to put an entire std::vector in a cache. Assuming unique_ptr is part of it, it's prone to be free too. But of course, it depends on the memory it's holding: if it exceeds the cache sizes, it'll stay in RAM. I think there's nothing Rust can do about it. 17:12, I thought he would say that C's pointers are the same concept from Assembly. Now I'm confused, since I haven't dealt with it for a long time. C++ iterators do some compile-time checks, while being pretty much the same speed.
    2
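     A minimal sketch of the false-sharing fix mentioned at 1:07; the counter struct, the 64-byte line size and the thread count are my own assumptions:

      #include <atomic>
      #include <cstdio>
      #include <thread>

      // Each counter gets its own cache line (64 bytes is a common size), so two threads
      // incrementing different counters don't fight over the same line.
      struct alignas (64) PaddedCounter { std::atomic <long> value {0}; };

      int main ()
      {
          PaddedCounter counters[2];
          auto work = [&counters] (int id) {
              for (int i = 0; i < 1000000; ++i)
                  counters[id].value.fetch_add (1, std::memory_order_relaxed);
          };
          std::thread t0 (work, 0), t1 (work, 1);
          t0.join(); t1.join();
          std::printf ("%ld %ld\n", counters[0].value.load(), counters[1].value.load());
      }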
  66. 2
  67. 2
  68. 2
  69. 2
  70. 2
  71. 2
  72. 2
  73. 2
  74. 2
  75. 2
  76. 2
  77. 2
  78. 2
  79. 2
  80. 2
  81. 2
  82. 2
  83. 2
  84. 2
  85. 2
  86. 2
  87. 2
  88. 2
  89. 2
  90. 2
  91. 2
  92. 2
  93. 2
  94. 2
  95. 2
  96. 2
  97. 2
  98. 2
  99. 2
  100. 2
  101. 2
  102. 2
  103. 2
  104. 2
  105. 2
  106. 2
  107. 2
  108. 2
  109. 2
  110. 2
  111. 2
  112. 2
  113. 2
  114. 2
  115. 2
  116. 2
  117. 2
  118. 2
  119. 2
  120. 2
  121. 2
  122. 2
  123. 2
  124. 2
  125. 2
  126. 2
  127. 1
  128. 1
  129. 1
  130. 1
  131. 1
  132. 1
  133. 1
  134. 1
  135. 1
  136. 1
  137. 1
  138. 1
  139. 1
  140. 1:38, I used to dislike #ifdefs. Nowadays, I think they are quite nice, because they help to debug. For instance, if a block of code won't be used in some compilation, that code won't actually exist, even raising a compile error in case some piece is missing. So this is already a check, a confrontation against what the coder is thinking. And it's possible to keep flipping the switches, getting a quick statistical read on any bug (sketch below). The Codeblocks IDE can "blur" blocks not targeted for compilation, a pretty nice visual effect. 4:19, I agree with you, because people tend to think that the Single Responsibility Principle means technically only 1 thing, but I think it may be semantic instead of technical. So a f() may do several small/tiny technical things, to achieve 1 goal. This way, outside the f(), the rest of the project can look at that f() thinking about that goal only. It's already an isolated functionality, despite the fact that it takes more actions internally. 4:31, I completely disagree here. I already wrote tons of words on a video of his, uploaded by another channel. If someone is interested, I may write some words here too. 6:28, sorry, dude, we are waiting for Carbon. If only it changes that bad syntax... 14:35, I think this is much more important than it looks. I can't prove it, but I feel like I spend more energy when travelling vertically. So this should be avoided, whenever convenient. 18:02, I personally omit { }, because I love compact code. But I wouldn't argue against this standard. I would put them on the same line, though. 18:21, in C/C++ the if isn't ruined by a comment, even without { }.
    1
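     A tiny sketch of the #ifdef "switch" idea from 1:38; the macro name and the two code paths are mine:

      #include <cstdio>

      #define USE_FAST_PATH 1   // flip between 0 and 1 while hunting a bug

      int compute (int x)
      {
      #if USE_FAST_PATH
          return x << 1;        // suspect optimization
      #else
          return x * 2;         // known-good reference version
      #endif
      }

      int main () { std::printf ("%d\n", compute (21)); }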
  141. 1
  142. 1
  143. 1
  144. 1
  145. 1
  146. 1
  147. 1
  148. 1
  149. 1
  150. 1
  151. 1
  152. 1
  153. 1
  154. 1
  155. 1
  156. 1
  157. 1
  158. 1
  159. 1
  160. 1
  161. I guess by now everybody has understood that TDD is good. You should now become more technical. Let's say I have a f() that makes several steps to reach a goal. It has several steps because they are private to it; they would not make sense outside of it - it could be prone to errors if they were called outside. And that f() also updates variables in other classes, because otherwise this would have to be scheduled for later, and before certain other actions take place - and would be prone to errors too, if this schedule failed to complete or, in worse cases, failed to follow a certain order. From what I understood so far, TDD would demand that a f() written like that be split into several small f()s. So, to avoid a disaster, I think of 2 possible solutions: a) Each f() like this becomes a class, with all those private steps being private f()s of it. Tests would have plenty of access to those f()s, most of them not tagged as 'const'. Fortunately, tests are small, and prone to being implemented at compile time. So tests are unlikely to cause bugs - and easy to fix if that happens. Pros: each test would be executed only 1 time. Cons: kind of a "risky" design. b) Any f() like this stays the same, but its steps become lambdas, in order to be tested, and this would be done inside that "big f()", each time anyone calls it (sketch below). To avoid repeated test execution (hence consuming performance), they would only speak up in case of failure. Since they are compile-time (according to your examples), the optimizer could solve them, realize that they work, meaning they would do nothing, and then decide to eliminate them at compile time, "because they are useless" (unreachable code). For the tests that flag an error, the programmer should fix them right away, so that they become potentially "useless" too - thus eliminated too. Pros: the design continues to be as safe as before, despite the tests' intrusion. Cons: tests would be processed several times, unless the optimizer decides to wipe them out.
    1
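     A rough sketch of option b), steps as lambdas checked in place and reporting only on failure; all names are mine, not from the video:

      #include <cstdio>

      // The "big f()": its steps stay private to it, as local lambdas,
      // and each step is followed by a cheap check that only speaks up on failure.
      int process (int input)
      {
          auto step1 = [] (int x) { return x + 1; };
          auto step2 = [] (int x) { return x * 2; };

          const int a = step1 (input);
          if (a != input + 1) std::fprintf (stderr, "step1 failed\n");  // test embedded in the f()

          const int b = step2 (a);
          if (b != a * 2)     std::fprintf (stderr, "step2 failed\n");  // a check the optimizer can prove and drop

          return b;
      }

      int main () { std::printf ("%d\n", process (20)); }  // prints 42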
  162. 1
  163. 1
  164. 1
  165. 1
  166. 1
  167. 1
  168. 1
  169. 1
  170. 1
  171. 1
  172. 5:31, I don't know what is so bad about C++, as this kind of people tend to say. It has all those types you mentioned, like classes that are not defined at compile time (interfaces), "simple structs with methods" (classes), foreach as an algorithm and in the language core too, optional types, and so on. 7:15, C++ has the optional type, for a value that may or may not be there. But it has a better solution, if 1 adds the GSL library: the gsl::not_null class, providing Zig's non-nullable pointer (sketch below). It's also possible to develop your own pointer like that, and it doesn't take much longer. 7:49, so there's no C++-style namespace in those languages, huh? It works like a surname for a library. Each 1 has its own, so name conflicts never happen. It's also possible to dismiss yourself from typing it all the time, if you are sure it won't conflict. 8:51, copied from C++, which also has them as default parameters, meaning that 1 doesn't need to explicitly send them on initialization. 9:00, and if "you forget to clean things up", it'll do that for you, no messages needed. 10:05, it means 1 doesn't even need to do a deallocation. 11:20, people are contradictory: they love "error or variable", for a variable, but at the same time they are afraid of "NULL or a pointer", for a pointer! What's the logic?! 18:57, yes, undefined means it'll initialize that memory taking the "trash values" left in there from other variables previously freed (which no longer exist). C/C++ has this by default (faster initialization), meaning 1 doesn't need to lose time typing '= undefined'. 23:26, actually it's much better, because a class is supposed to have several public f()s, and it only asks you to type the 'public' keyword 1 time! 23:43, 1) a f() is not necessarily assigning something, so there's no need for the = operator. 2) a f() doesn't know what the user will do with the variable (changing it, for instance). That's why it doesn't tend to be 'const', although that's possible in C++, just not recommended. 3) Specifying the return type improves compilation time. In my experience, it's better/safer to declare it automatic (fn, var, auto, depending on the language) during development, and switch it to its explicit type once finished. 27:08, C++ is smoother, dismissing you from typing usize, deducing sum and the return type as int, due to its default for integer literals.
    1
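     A small sketch of the two options mentioned at 7:15: std::optional is standard; gsl::not_null comes from Microsoft's GSL (assuming it's installed and provides the <gsl/gsl> header); the functions are mine:

      #include <cstdio>
      #include <optional>
      #include <gsl/gsl>    // assumption: the GSL library is available

      std::optional <int> parse_positive (int x)           // "maybe a value"
      {
          if (x > 0) return x;
          return std::nullopt;
      }

      void print_name (gsl::not_null <const char *> name)  // can never receive nullptr
      {
          std::printf ("%s\n", name.get());
      }

      int main ()
      {
          if (const auto v = parse_positive (42)) std::printf ("%d\n", *v);
          print_name ("Zig vs C++");
          // print_name (nullptr);  // rejected: the nullptr_t constructor is deleted
      }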
  173. 1
  174. 1
  175. 1
  176. 1
  177. 1
  178. 1
  179. 1
  180. 1
  181. Other things I often use in C::B, and I don't remember you mentioning: - Bookmarks: Ctrl-B on different locations. Then hold Alt, and PgUp/PgDown (held or not) to navigate 1 or more times through them (without releasing, in this last case). - Applying the current syntax highlighting to any opened file extension. It "doesn't make sense", but I find it much more pleasant to read that way. I just need to add that extension in a menu, to apply whatever highlighting I have at the moment. Easier to memorize, due to key colors in key locations. So I often open files in this IDE, instead of text editors, because those apply highlighting according to the file extension, with an iron fist. - Abbreviations: Ctrl-J expands some user-custom keywords to code. It's possible to set variables dynamically, so that it'll open a menu to type which variable will be placed in which location among the code expansion. I can put an entire big fat class there, with plenty of variables. - Jumping between words, horizontally: hold Ctrl, and Right/Left (held or not). Let's say I have an abbreviation for a for-loop with const iterators, and for some reason I don't have the non-const version. So I type rngfor and Ctrl-J. It expands to: for (const auto some_itr = obj.cbegin(); some_itr != obj.cend(); some_itr++); So I want to cut const and the 'c's. I can't do that in 1 replace. Plus, it could erase other 'c's. So it's faster to go Ctrl-Right, Right, Right... until reaching a word to delete. Then Ctrl-Shift-Right (select word), Del (all with the right hand). - Default code: if I have a template of a project, this is faster than loading from some other project and doing a save-as...
    1
  182. 1
  183. 1
  184. 2:24, it's hard to figure out how this can be better than Codeblocks. Cursor up/down I can do with the arrow keys. Insert mode is the default, I can type anytime. Visual mode doesn't need special keys: it already starts selecting the whole block by holding Shift-Down (can be done with just the right hand). Ctrl-S (can be left hand only) to save the file, Alt-F4 (left hand) to quit. 4:36, holding Ctrl-Right (can be right hand only) does that. 4:44, same thing with Ctrl-Left (right hand too), for backwards. 5:09, this is nice. I don't know how to do this 1. But holding Up/Down or using the mouse will take me 1s more. So it's no big deal to me. 6:21, for that I need Home (1 or 2x, depending on its config, to reach the leftmost position), Shift-Down (select line), Del. Or 4x left-click with the mouse (2x for selecting the word, another 2 for the line) and Del. 6:26, Ctrl-Z/R is enough. 6:40, that was fast. I would need to hold Shift-Down till the return line, then Ctrl-Shift-Left, to "undo" the selection till return's left side, and Del (all can be done with the right hand). 6:54, holding Ctrl-Shift-Right (to select however many words), and Del (to delete all at once) - right hand recommended. 7:04, same thing for backwards, exchanging Right for Left. 8:00, I can type that 0 to the left or right without worrying about modes. I just need to use the Right/Left arrows to move the cursor. 9:00, holding Ctrl-Shift-Right it goes on selecting by words. I don't need the mouse, despite it being an extra option. 9:20, Ctrl-C/V is almost as fast, can be done with the left hand, and won't make another line, because the newline character wasn't copied in the selection. 9:34, C::B is faster on this 1: hold Ctrl-D to duplicate the line as many times as desired. 9:47, Shift-End selects the line, except the newline character. Then Ctrl-C, go to the other one, Shift-End and Ctrl-V. 10:00, that was a nice exchange. I have Ctrl-X to delete and memorize, but could not do that in an exchange. I would have to paste the selection above, and then Ctrl-X on what should be replaced, to memorize it as the new 1. 10:35, I use Home to reach the leftmost side, then hold Shift-Down until selecting the whole block + the other piece. Then Ctrl-D to duplicate it.
    1
  185. 1
  186. 1
  187. 1
  188. 1
  189. 1
  190. 1
  191. 1
  192. 1
  193. 3:30, abstractions are the best thing, but they can also turn against the dev. In C++, I take an FP approach by default, until some variable can cause too much damage if changed wrongly or from a wrong place. Then it goes into a class, to control everything about it. I 1st start with free things, then tie down some critical things - "decoupling" is not welcome for those cases. So my code has many more free f()s than classes. Complexity is not a problem inside only 1 f(). If it's certain that a bug is inside 1 f(), it's just a matter of (short) time to solve it, no matter how complex that f() is. It's like a lion trapped in a cage: just study it, and tame the issue. The nightmare happens when 1 needs to travel throughout the project's f()s, searching for where it might have started. This is the main reason to write classes: to restrict who can change critical data. Let's say someone is coding a football (soccer) game. It could have a class for the ball, and for players/actors. To coordinate when a goal is scored, and its consequences, changing variables in more than 1 class, I tend to have a class to tie those things together. It could be called Referee. So the public Referee::verify_and_change_if_goal would be the only (or 1 of few) f()s allowed to call the private f()s Ball::goal_restart (to put the ball in the middle of the field) and Player::goal_restart (to put players in their half of the field, in certain locations, with some random variance towards those locations, to seem more realistic, less robotic) (sketch below). So that Referee public f() can change the world, from any point where its object appears. Bad design! Actually, no. The verifications would be made inside Referee (the lion in the cage), only changing variables in case of a goal. So it doesn't matter if it's called several times, even by mistake: the worst possible thing is to lose some performance; it won't ever bug the game. It doesn't even matter if the code grows to 1 billion LoC: those things will stay locked. But let's imagine the worst scenario: some internal error happened inside this chain of calls, and a junior dev decided to shortcut it, creating his own way to change variables after the goal: 1) He would get compile errors because, let's say, the main f() which calls the Referee public f(), and is now calling the junior's, is not a 'friend' of those classes. The junior works around it: 2) he makes the main f() a friend of all those classes, so that he can write his own way. On the next error, some senior will see the class definition and think: "Wait: why is main a friend?!". But let's make it more complex. Instead of that, the junior: 3) pulled Ball::goal_restart and Player::goal_restart into the public area. A senior may think those were always public. This is awkward, because some error might happen by calling 1 f() and not the other (i.e. Ball's but not Player's), since they are now decoupled. But this could be avoided if they had comments on their class declarations: DO NOT MAKE THIS PUBLIC! 4) The junior rebels: makes all the classes public, deleting those comments. FP rules once again! The security system is now completely destroyed! Well, senior devs should suspect everything is wrong: 'everything public' is the sum of all fears!
    1
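     A minimal sketch of the Referee/Ball/Player idea above; the class and f() names follow the comment, everything else (positions, the goal check) is invented:

      #include <cstdio>

      class Ball;
      class Player;

      class Referee
      {
      public:
          void verify_and_change_if_goal (Ball &ball, Player &player) const;
      };

      class Ball
      {
          friend class Referee;              // only Referee may restart the ball
          void goal_restart () { x = 0; }    // back to the middle of the field
          int x = 42;
      public:
          int position () const { return x; }
      };

      class Player
      {
          friend class Referee;
          void goal_restart () { x = -10; }  // back to its own half
          int x = 30;
      public:
          int position () const { return x; }
      };

      void Referee::verify_and_change_if_goal (Ball &ball, Player &player) const
      {
          const bool goal = ball.position() > 40;   // the check lives inside Referee (the lion in the cage)
          if (!goal) return;
          ball.goal_restart();
          player.goal_restart();
      }

      int main ()
      {
          Ball ball; Player player; Referee referee;
          referee.verify_and_change_if_goal (ball, player);   // safe to call from anywhere, any number of times
          std::printf ("%d %d\n", ball.position(), player.position());
      }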
  194. 1
  195. 1
  196. 1
  197. 1
  198. 1
  199. 1
  200. 1
  201. 1
  202. 1
  203. 1
  204. 1
  205. 1
  206. 1
  207. 1
  208. 1
  209. 1
  210. 1
  211. 1
  212. 1
  213. 19:05, still about this code. Its return value is inverted, considering that 0 is false and anything else is true, converted to int as 1. This is in the core of C/C++. So this can (and will) cause bugs. I can imagine: if (test()), hoping that the test passed, when it got a NULL! And UB! Ok, a sanitizer would catch this fast. But let's not be bad programmers just because of that, shall we? I know that, in ancient C, the 'main' f() got this absurd convention for some reason. And someone could say that this 'test' was made an "entry point", thus trying to follow main's convention. But 1st, (at least) C++ has EXIT_SUCCESS/FAILURE, to let us forget about this. 2nd, I'll assume this was just a no-excuses mistake. So, how to fix it? It's not possible to just swap those values, since bugs would start popping up. If I were alone in this project, I would just create a C-enum, like: enum test_ret { TEST_FAIL=0, TEST_PASS }; (The explicit 0 is because I once saw 1 being the default, so I don't trust enum defaults.) The important thing is to tie the failure to 0 (false). This would be enough, since I respect global constants. Not just because it's a C++ Core Guidelines rule, but also because I have personal experience with it. People underestimate the danger of literal numbers. However, working in a team, there would be people writing things like: if (test() == 0), and the enum would be implicitly converted to int, generating bugs, if nobody hunted those calls and changed them by hand - which is what I would do, after the enum. If there were too many of them, risking the team writing more of them than I could fix, I would change the enum to an 'enum class' (sketch below). It'd cancel the implicit conversions to int, causing compile errors. So people would be forced to see the enum class declaration, and its global constants - any IDE would open its file and location. Even so, there would be people just taking a glance at it, thinking "Ah, some idiot changed it to enum class, thinking it'll make any difference". So if I started to see many casts to int, like if (0 == (int) test()), the issue still would not be solved. Then a more drastic solution should be taken. I would change the 'int' return type of test to something not declared before: CALLING_A_MEETING_TO_REASON_ABOUT_THE_STUPID_TEST_RETURNING_VALUES. Compile errors popping up. The idea would be to stop the entire production line, making the new-feature-addicted boss freak out, risking my job. But it should be done before this gets out of hand - some decision about not messing with working code. To get the boss hallucinating, I could even put the time: MEETING_AT_10_30. He would appear sweating, pointing a knife at me: "Guess what? Nobody steals my job!" "I don't give a crap about your sh##y job. I'm paid to defend the company goals, which are above you. So I'll keep that, until I get done with this sh##ness, and quit to wash dishes, which is a better job, thus paying more!"
    1
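     A tiny sketch of the enum-class step described above; the declaration follows the comment, the test body is invented:

      #include <cstdio>

      // Tie "fail" to 0 so the boolean meaning can't be inverted, then use
      // 'enum class' to kill the implicit conversion to int.
      enum class test_ret { TEST_FAIL = 0, TEST_PASS };

      test_ret test () { return test_ret::TEST_PASS; }

      int main ()
      {
          // if (test() == 0)              // no longer compiles: no implicit conversion to int
          if (test() == test_ret::TEST_PASS)
              std::printf ("passed\n");
      }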
  214. 1
  215. 1
  216. 1
  217. 5:53, I tend not to include in advance. I wait for a compiler error or the actual use of something from the lib. This way I can avoid unnecessary includes. 6:00, Const Correctness Principle: 1st write 'const', and only think after a compile error. Thinking is a waste of energy, avoid doing it. And whenever I write { } for a f(), I write its 'return' right away. So I avoid the missing-return UB bug. In modern C++, a more consistent defensive method for that is to declare the return type as 'auto'. 6:26, A << B is the highest-level possible thing: B is being thrown to A. 7:26, for a 1st approach, I think assert is better here: 1 line/cmd. I know it doesn't close files, but this is the 1st one, and it's not opened yet. 7:33, you are throwing away 1 of the best features of C++: getting rid of things automatically. 7:35, for apps, I use namespace std, to be more productive. 10:30, const auto ptr = std::find_if (new_begin, line.cend(), ::isdigit); if (ptr == line.cend()) break; // There's no digit to the right of where the find started. leftmost = *ptr - '0'; // '0' keeps portability. Don't use a number: not worth memorizing. new_begin = ptr + 1; (runnable sketch below). 10:59, atoi works with char *, not char. 16:35, std::map does this. I tend to implement this as 2 std::array. 16:42, this could have been just: if (gTable[i].str == slice). If str and num were 2 std::arrays, the whole f() could be: const auto ptr = std::find (str.cbegin(), str.cend(), slice); if (ptr == str.cend()) return -1; return num[std::distance (str.cbegin(), ptr)];
    1
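     A runnable version of the find_if snippet at 10:30; the input line is mine, and isdigit is wrapped in a lambda taking unsigned char to dodge the usual char-sign pitfall:

      #include <algorithm>
      #include <cctype>
      #include <cstdio>
      #include <string>

      int main ()
      {
          const std::string line = "ab1c2d3";
          auto new_begin = line.cbegin();

          const auto ptr = std::find_if (new_begin, line.cend(),
                                         [] (unsigned char c) { return std::isdigit (c) != 0; });
          if (ptr == line.cend()) return 0;   // no digit to the right of where the search started
          const int leftmost = *ptr - '0';    // '0' keeps it portable
          new_begin = ptr + 1;                // where a later search (e.g. for the next digit) would resume

          std::printf ("%d\n", leftmost);     // prints 1
      }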
  218. 1
  219. 1
  220. 1
  221. 1
  222. 1
  223. 1
  224. 1
  225. 1
  226. 1
  227. 1
  228. 1
  229. 1
  230. 1
  231. 1
  232. 1
  233. 1
  234. 1
  235. 1
  236. 1
  237. 1
  238. 1
  239. 2:09, 1 of my biggest bugs in life came when I thought: "3 lines for these 4 ternaries... I guess I'll squeeze this into 2, elegantly". I reviewed it in my head and... approved! Those lines held about 8 possible combinations. It happened that 1 or 2 of them were wrong, well disguised among the others. And those combinations seldom happened at runtime. To make things worse, there were 2 other parts of the code that looked more suspicious of being guilty, so I took a while looking closely at them. Automated tests would have caught that easily. 3:40, I guess there was code before and after that break. The problem is that in C/C++ 'break' jumps out of switch, for, while and do-while blocks, but doesn't have this power over if/else ones, as the coder unconsciously thought at that specific moment. So the break was applied to the 1st enclosing block above those ifs: the switch, jumping over the processing of the incoming message. I once got a bug from this. I never wrote a tool for this 1, since it was never a recurring 1. For this AT&T case there were some solutions to replace the else-block, trying not to duplicate the code it should jump to: - Make it a f(). Bad design, since the rest of the project would see it, and might call it by accident. So boilerplate code would have to be added, to remediate this. - Make it a macro f(). Although I don't usually have problems with macros, I agree that it would be noisy/dirty code, depending on its size. - Use a label after the END IF, to be reached via goto. Better, but this goto could still be called from any place in this case, at least. - A lambda f(). I think this is the best 1: break would result in a compile error, and return from any place would exit at the right spot (sketch below). However, this was C, and not even C++ had lambdas at that time.
    1
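     A sketch of the lambda option above; the message-handling scenario is invented:

      #include <cstdio>

      void handle (int msg_type, int payload)
      {
          switch (msg_type)
          {
          case 1:
              // An immediately-invoked lambda replaces the if/else block:
              // 'return' leaves only the lambda, and a stray 'break' in here would not compile.
              [&] {
                  if (payload < 0) return;               // early exit from this step only
                  std::printf ("processing %d\n", payload);
              } ();
              std::printf ("message 1 fully handled\n"); // still reached, unlike with a misplaced 'break'
              break;
          default:
              break;
          }
      }

      int main () { handle (1, -5); handle (1, 7); }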
  240. 1
  241. 1
  242. 1
  243. 1
  244. 1
  245. 1
  246. 1
  247. 1
  248. 1
  249. 1
  250. 1
  251. 1
  252. 1
  253. 1
  254. 1
  255. 1
  256. 1
  257. 1
  258. 1
  259. 1
  260. 1
  261. 1
  262. 1
  263. 1
  264. 1
  265. 1
  266. 1
  267. 1
  268. @timmartin325 Repeatability is relevant because, as I said: "3) A change in the code can lead to a bug that once passed a test". I agree that human tests catch bugs that are impracticable for automated tests. Once I noticed a bug that took 2 years to arise, from a user's point of view. It was caused by an overflow in some bits of a variable that had passed through an optimization rework. That was expected, indicating the hit on the wall. However, the variable worked with +1 for reading and -1 for writing, to fit the bit field. Outside the class, a local variable (representing the field) in a f() worked with the values normally. So, when it hit the wall, some tasks were done, and the value was written back to the var. The point is that later I implemented further checks against that memorized value, but it was no longer written with the overflow value (because it wouldn't fit) - instead, a reset value due to bit truncation. But this was not enough to raise the bug, because the +1 for reading made it come back to an acceptable value, at the beginning. And combining that with a certain character alignment, the consequences became acceptable in a broken geometry: starting (only) from the end (triggering the overflow), completing it at the beginning! (The geometry could be broken, but the alignment had to stay in the same direction.) So the victim became unmovable. Plus, the bad luck of characters being too close hid the cause. To appear, it had to meet several conditions: the overflow not entirely solved locally, the reset (which could crash or lead to an absurd value) being hidden by the +1 (for read), some specific alignments, certain characters, starting with character(s) at the "wall", completing it with character(s) at the beginning. And bad luck made me take more time than I should have. I solved it fast, however - I must have been inspired. I baptized it the Age of Aquarius Bug: "When the moooooon is in the 7th Hooooouse / And Jupiter aligns with Mars / Then peace will guide the planets / And loooOOOOve WILL STEER THE STARS!"
    1
  269. 0:02, pretty? A signed var is faster. 0:10, in C++ it's possible to make a class that always implicitly checks whether the pointer is nullptr. 1:17, I would not use linked lists either: they are rarely faster, can't be used with STL algorithms, and force you to write raw loops. For instance, if 'e' were a std::vector, this whole f() would be dismissed (runnable sketch below): const auto Result = [] (const std::vector <int> &e, const int search) -> const int * { const auto it = std::find (e.cbegin(), e.cend(), search); return it == e.cend() ? nullptr : &*it; } (e, search); 1:24, C++ containers tend to have iterators delimiting begin and end, putting the programmer in a range loop by default. For instance, std::forward_list implements a linked list in 1 direction, and has begin/end f()s to give those iterators. 1:55, or just use std::vector, which will free the memory when its object no longer exists. It doesn't have use-after-free protection, but it's possible to wrap it in a class, to check that automatically. 3:20, but if that variable must travel along f()s, as read-only, and be changed only at the Nth f() called, C can't protect it. C++ has the solution: hide it in a class, making that f() a friend of it, so that only it will be allowed to change the variable. 4:01, C++ has the attribute [[nodiscard]], meaning that a warning (an error with -Werror) will be raised if the return value is not used. 5:40, I always use -pedantic, because it has good rules. But -Werror forbids me to run the app. I always end up cleaning all the warnings I turned on, but I don't always want to do it right away, which may be less productive. Same thing for implicit conversions. So I don't turn on all warnings. So we can see that using C++ is a big improvement for defensive code, at least over C.
    1
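     A runnable version of the std::find lambda above, under the same assumption that 'e' is a std::vector; it returns a pointer to the element, or nullptr if absent, like the original C function:

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      int main ()
      {
          const std::vector <int> e {3, 7, 42};
          const int search = 7;

          const int *Result = [] (const std::vector <int> &v, const int s) -> const int * {
              const auto it = std::find (v.cbegin(), v.cend(), s);
              return it == v.cend() ? nullptr : &*it;   // no raw loop needed
          } (e, search);

          if (Result) std::printf ("found %d\n", *Result);
          else        std::printf ("not found\n");
      }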
  270. 1
  271. 1
  272. 1
  273. 1
  274. 1
  275. 1
  276. 1
  277. 1
  278. 0:01, when I code, it's always gorgeous! :elbowcough: 0:08, I disagree. Modern C++ features are higher-level ones, easier to use and less prone to errors. It's becoming harder to get things wrong with it! The price is a higher chance of writing slow-prone code. 0:55, TABs are better: a TAB uses 1 char and can be configured equally for the whole team. I use TABs with size == 2. 1:05, neither draconian nor pedantic: the team must have the same rule about it, because it's an annoyance to see messed-up indentation. Once the team agrees about its size, use TABs. 1:20, are you saying that if the TAB has the same size, it still may differ from 1 code editor to another?! 2:37, I can agree that, once the f() is done, a typedef is the better return type. However, during its development, auto is much better: - More productive: it doesn't require changing its type all the time. - Defensive: if you forget to return, it'll be deduced as void, raising a compile error if assigned to some variable later. 5:25, I don't tend to have this "diamond problem". Mother and Father are not the same person, so each of them should inherit its own exclusive Person object. Child is yet another person, not tied to the parents. So it should inherit Person as well. I tend to inherit classes when the derived 1 should have the power to change data of its bases. Except for Person, that's not the case here. In real life, a child can change its parents' behaviour by communication, not mental/physical interoperability. So Child should not inherit from the parents. Instead, its constructors should receive the parent objects as 'const &', read-only stuff (sketch below). Just as in real life: a child receives read-only genetic material from its parents, and is destined to be an independent person. 6:30, an interface is too slow and just a bit higher level than a normal class. I don't use it, it's not worth it. Plus, for those who use it, a better approach than an abstract class is to make a macro out of it. Then, by composition, call the macro inside the "derived" class, making it a normal class, 15-17x faster! 6:38, if a class has this limitation, I just make its constructors protected, forcing inheritance.
    1
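     A minimal sketch of the "receive parents as 'const &'" idea at 5:25; all names and the surname-combining rule are mine:

      #include <cstdio>
      #include <string>

      struct Person
      {
          std::string surname;
          explicit Person (std::string s) : surname (std::move (s)) {}
      };

      // Child is its own Person; it only *reads* from its parents, at construction time.
      struct Child : Person
      {
          Child (const Person &mother, const Person &father)
              : Person (mother.surname + "-" + father.surname) {}   // read-only "genetic material"
      };

      int main ()
      {
          const Person mother {"Silva"}, father {"Souza"};
          const Child child (mother, father);
          std::printf ("%s\n", child.surname.c_str());
      }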
  279. 1
  280. 1
  281. 1
  282. 1
  283. 1
  284. 1
  285. 1
  286. 1
  287. 1
  288. 1
  289. 1
  290. 1
  291. 1
  292. 1
  293. 1
  294. 1
  295. 1
  296. 1
  297. 1
  298. 1
  299. 1
  300. 1
  301. 1
  302. 1
  303. 1
  304. 1
  305. 1
  306. 1
  307. The author showed good skills, regarding clean and DRY code, automated style (including safety checks) and knowledge of technicalities about C. But to me what really matters for a senior or ace worker is the concern for safety. He didn't mention this, at least not in words. 19:05, for instance, in this code I would point out some things: - 1st, I would "rewrite everything in Rust"... Nah, it'd be in C++, which is indeed an improvement over C, not leaving anything behind. If the boss didn't agree: https://www.youtube.com/watch?v=O5Kqjvcvr7M&t=22s - I would pay attention to whether a linked list would be the best choice. It's only faster when there are too many insertions in the middle - which means a sorted list, somehow. Otherwise, a std::vector-like is much, much faster. For instance, if it's just a database, this sorted linked list would be slower than an unsorted vector-like, adding to the end, and removing in the middle by replacing the element with the last 1. Or am I wrong? - I would study the idea of changing that boss raw ptr to a unique_ptr or something higher-level: more elegant and safer. - I would change that person::name from a C-array to std::string: more comfortable to work with and almost no chance of UB, leading to cleaner code, since it'd require many fewer if-checks by the user. But the main advantage is that std::string is not a primitive type (it's a class instead). So it's possible to later change it to a user-defined faster container, keeping the same syntax to communicate with the outside - no tons of time refactoring throughout the entire project. This would not be achievable with a C-array, unless all its uses/calls were made via f() or macro - which nobody usually does for it. And I would worry about that only if that std::string were a bottleneck, which is unlikely. But ok, let's imagine the worst scenario: it needs to be replaced by a fixed-size array, which tends to be only about 20% faster than the heap version. Since it is not as flexible as a std::string, does that mean it'd break its syntax, needing refactoring? Actually no, there's a workaround: a tiny user-defined class, inheriting from std::array (same speed as a C 1), and writing by hand all the std::string-specific functionality it needs, like += for concatenating (sketch below). So all the work would stay inside the class. In case a bigger name is assigned to 'name', an internal check would be made, as Prime pointed out. But not via assert, which would break the app - it could be 1 of those always-running apps. Just an if, truncating the name to the size and writing an error to std::cerr. But probably a fixed-size array could not be used: it has a limit of total memory per app. Since this code is making allocations, it suggests there is a huge number of persons. So it'd get a seg. fault. So std::string would indeed be the best choice.
    1
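     A rough sketch of the tiny fixed-size name class described above; the size, the names and the truncate-and-report policy are mine:

      #include <array>
      #include <cstring>
      #include <iostream>

      // std::array storage with just the std::string-like surface the project needs.
      class FixedName : public std::array <char, 32>
      {
      public:
          FixedName () { (*this)[0] = '\0'; }

          FixedName &operator+= (const char *text)
          {
              const auto used = std::strlen (data());
              const auto room = size() - 1 - used;     // keep space for '\0'
              if (std::strlen (text) > room)
                  std::cerr << "name truncated\n";     // report, don't assert/abort
              std::strncat (data(), text, room);
              return *this;
          }
      };

      int main ()
      {
          FixedName name;
          name += "Jonathan";
          name += " Livingston";
          std::cout << name.data() << '\n';
      }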
  308. 1
  309. 1
  310. 1
  311. 1
  312. 1
  313. 1
  314. 1
  315. 1
  316. 1
  317. 1
  318. 1
  319. 1
  320. 1
  321. 1
  322. 1
  323. 7:07, I think C++ fits this article even better than C#. However, it requires the user to develop "some feelings", if he chooses to use the default behaviour/resources, instead of developing his own defensive tools. For instance, my 1st thoughts after certain action(s) are: - Check the result of a f() or algorithm right away. - If what matters is the index where a pointer stopped after an algorithm, I get rid of that pointer at once. - For things that I'm used to, I write as fast as I can (favouring productivity). When something starts to be unique, I proportionally get slower and more reflective about it (favouring defensiveness). When things get complex or I'm failing often (for whatever reason), I stop everything to write a tool that locks the right behaviour forever, then go back to being faster/productive. So things go inside f()s or classes (yes, including setters as non-public), with only 1 or a few way(s) to reach them, putting up as many layers of protection as needed (thanks to C++'s high functionality), whatever is needed to reach productivity once again, because I value doing things without thinking twice, in crazy-fast typing fashion. So C++ fits this article's purpose of effort proportional to complexity. An example: I was picking values from a string, by pairs. I decided to use its default behaviours. I wrote it fast, everything worked predictably. No need for fancy tools nor languages. Then I decided to optimize it, by using a pointer that made 2 steps per cycle. It got expressively faster. It was also fast to develop, and worked flawlessly. So I left the computer, with my 15487th easy victory using C++. But my gut feeling told me that I had written, too fast, something that I'm not used to. So, calmly drinking a coffee, I made a brief reflection about it. Mentally I discovered that the 1st step of the pointer was immediately checked, as I usually do, but not the 2nd. So it would step beyond the array boundary on the last 1, in some cases, whenever the f() didn't return before. Easy check, easy fix: I just added a 1-line check for that.
    1
  324. 1
  325. 1
  326. 1
  327. 1
  328. 1
  329. 1
  330. 1
  331. 1
  332. 1
  333. 1
  334. 1
  335. 1
  336. 1
  337. 1
  338. 1
  339. 1
  340. 1
  341. 1
  342. 1
  343. 0:37, to GitHub it doesn't matter what your license is: it'll always be free. 2:28, interesting, in the sense of freeing things from memory. The problem is that, when you finish a small task, it'll eventually need to be broken open, to attach some glue to it, in order to serve a bigger "ecosystem", throughout the project's development. So 1 will keep going back to that task, to improve it. Another strategy is to get the project into a working state, even if in kind of a bad design. This gives information about how things should interact. Some things are only known in practice. And if things are decently isolated, like it's possible to do in C++ (private setters, for instance), details can easily be improved without fearing influence from the rest of the project. Nowadays, mankind is experienced enough to know certain strategies to advance fast through a project, like automated tests, keeping the same patterns for everybody, decent encapsulation (as I mentioned, private setters), and so on. 3:00, autism is a severe condition, making the person almost unable to communicate. You didn't have it, you were just introspective. 3:36, this is the proof: these are extrovert features. Just train those for a little while, giving them their deserved value, and anyone can acquire them. Autism would never be solved this way. 7:08, there's a mindset-switching issue. It's unpleasant to do that, and thus it may be hard when the time for the most unpleasant task arrives. So, if it takes longer than it should for 1 person, I agree. On the other hand, if that's not the case, it's better to start with the easy things, for motivation. When coding, I prefer this last method. For life chores, I think the other is more recommendable. 10:05, I prefer blocks made of 2h of tactical work, 1h of strategic work. But I guess you are not being entirely honest about timed work. There are small distractions, and if you stop the timer for those, being brutally honest, it'll yield almost double of that! From my experience, 2h -> 3:30, 4h -> 7:15-7:20. 11:00, maybe the headaches are caused by trying to suppress those distractions completely. I don't do that. Instead, I just stop the timer, think/do whatever triviality I want at that point (even if just a brief thought), and then go back to work. Before reactivating the timer, I regain a little focus 1st. I have never had a headache in my life!
    1
  344. 1
  345. 1
  346. 1
  347. 1
  348. 1
  349. 1
  350. 1
  351. 1
  352. 1
  353. 1
  354. 1
  355. 1
  356. 1
  357. 1
  358. 29:24, I agree, but I would never rewrite the STL because of that. My f()s and classes tend not to be generic, whenever possible. For things coming from the STL, I use typedefs. 30:15, this is a Java thing. Well-coded C++ uses the friend keyword, to allow just a few f()s to access non-public data. 31:43, std::stringstream is just a higher-level scanf. And faster, according to a measurement I took. I would only argue against it if performance were at stake: a pointer, for instance, is much faster. Its syntax is not ugly to me. And std::to_string does the trick, if that is the only reason for using this stream. 32:25, a 1-line f() could fit inside the class definition. And using std::clog, to not be generic, would dismiss receiving an ostream and another class too. Result: auto &show_val () {return std::clog << val;} Plus, even using the overload, it could be done via: std::clog << "blablabla" << object.get_val(); I think the code in the video is beautiful, if 1 desires what it offers: throwing strictly the object (not 1 of its members) to an object that is, or inherits from, a std::ostream (sketch below). What is stupid and ugly is to write a f() like that (in the video), when 1 would be satisfied with the already existing 'std::ostream::operator<< (int)'. And printf is better only when several values are being printed at once - otherwise it's less productive, due to type specifiers (+ warnings) and demanding more typing on the keyboard. So their thesis condemning streams fell flat. 35:25, I have some tolerance for a 1-line f() definition (below its header) inside the class definition, because I can still put { } on the same line. Beyond that, if { } is used normally, it starts to push the code downwards, looking noisy to me. I also try to align returns, f() names, the 'this' specifier, and f() definitions, whenever they are "attached" (1 right below the other). Of course, I don't put these many stupid unnecessary spaces between the "fields". (35:39, giggles) About the horizontal look: it's ideal for the eyes, since they are widescreen. Code is ideally meant to be read with the eyes, not with help of the hands, unnecessarily travelling vertically.
    1
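     A small sketch contrasting the two options discussed at 32:25: the non-generic 1-line member vs the ostream overload from the video; the class is mine:

      #include <iostream>

      class Sensor
      {
          int val = 42;
      public:
          int get_val () const { return val; }
          auto &show_val () const { return std::clog << val; }   // non-generic, 1-line member version

          friend std::ostream &operator<< (std::ostream &os, const Sensor &s)   // the "video style" overload
          {
              return os << s.val;
          }
      };

      int main ()
      {
          Sensor s;
          s.show_val() << '\n';                            // member version
          std::clog << "value: " << s.get_val() << '\n';   // or just the getter
          std::cout << s << '\n';                          // works with any ostream, thanks to the overload
      }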
  359. 1
  360. 1
  361. 1
  362. 1
  363. 1
  364. 1
  365. 1
  366. 1
  367. 1
  368. 1:28, to avoid forgetting to close (, {, [, I keep the option of automatically closing them right after I open them. The same for the return instruction: whenever I start writing a f() header, I put the return in right away. Another solution is to declare the return type (at the left of the header) as auto. 1:30, I only use a C-array when it's const and already initialized with values that I'll access through enums (sketch below). In that case only, it provides an advantage over C++'s std::array, due to the shorter notation. Otherwise, I always use the latter. 2:11, this includes building defensive tools, to make it safer, far from its default. 3:20, I'm learning to use MQL5 for finance, and they use a C++-inspired language, more defensive by default. I also heard about some people using Java, to make frequent changes with less risk. But I also heard, in a presentation, that "C++ is the language of choice on this subject". 5:50, once 1 gets used to those, they become easy to manage. If a bug happens, it's no longer hard to find. 8:53, I heard C++ is starting to replace C there too. 11:47, since I don't like configuring compilers to attach them to code editors, I only recently got complete C++17. On Linux, I finally got C++20. Android is barely at C++14, at least with the SDL dialog to Java JNI. The good news is that, to become massively more productive, just C++11 is needed, and, for a few blasting features, C++14. C++17 (and I guess C++23 too) is weak, but C++20 is said to be the new "changed our way of coding". 12:57, it abstracts the low level by default, it's middle-level, and can jump easily to high level, if 1 develops his own classes, working exactly the way he wants and the project demands. 13:03, all of this since C++11. Lambdas are nice: I just type [ ( { (, and end up with [ ]( ){ }( ), completed by the IDE, which is the hard part. Or I can just type lbd + Ctrl-J, and Codeblocks will expand it according to an abbreviation I previously wrote, which could be like the 1 in the video. To avoid conflicts with the capture, just type [&], and it'll capture everything as a mutable reference, also dismissing having to receive f() arguments.
    1
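     A tiny sketch of the "const C-array accessed through enums" case at 1:30; the names are mine:

      #include <cstdio>

      enum Fruit { APPLE, BANANA, CHERRY, FRUIT_COUNT };

      // The 1 case where the comment prefers a C-array: const, initialized once,
      // and always indexed by the enum.
      const char *const FRUIT_NAMES[FRUIT_COUNT] = { "apple", "banana", "cherry" };

      int main ()
      {
          std::printf ("%s\n", FRUIT_NAMES[BANANA]);
      }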
  369. 14:08, there's a way to get rid of all those if checks, safe and easy, that even C can handle: make a file only to handle the node chain/tree. There, some private content will handle the control over the nodes. For public access, only public setters. (This is 1 of the cases where this FP approach can enjoy the same safety level as OO. It only evens out because those setters are supposed to be called from anywhere.) So the usage would be like: create_node(); // Let's assume it failed. Optionally, an err msg could go to some log output. goto_next_node(); // It automatically checks the next 1's validity, thus doing nothing. Another err msg to the log. int a = read_var_A(); // The validity check is made here too. Since there wasn't a "next node", it'd return the variable from the current 1. But since the list is empty (automatically checked too), a literal value would be returned. The log should report all of this. goto_previous_node(); // There's none, so it'd not go anywhere. So, this is pretty safe. The log could get even better, by using a trick I saw in an Eskil Steenberg video: each of these f()s could be a macro call to its actual version, i.e.: create_node_ (__FILE__, __LINE__); // The macro would call this behind the scenes. By reporting the current (at the calling moment) __FILE__ and __LINE__ to the log, it wouldn't matter how large the project got: the exact location of the error, or of the missing explicit check by the user, would instantly be known, making debugging tremendously easy (sketch below). The price is that, despite the code being clean of explicit checks, the generated bytecode would have lots of implicit checks, leading to many more branches, thus slower code. But this is easily fixed: once the app is safe, follow the log's instructions about locations, adding explicit checks. Once no more of those err msgs appear, just change 1 macro line, which controls whether or not the implicit checks are compiled. And then recompile the whole thing.
    1
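     A compact sketch of the __FILE__/__LINE__ trick described above; the node API is invented, and it's written in C++ like the other examples here (the trick itself is plain C):

      #include <cstdio>

      #define CHECKS_ON 1                  // the "1 macro line" that turns the implicit checks on/off

      static int node_count = 0;           // stand-in for the real node chain

      void create_node_ (const char *file, int line)
      {
          const bool failed = true;        // pretend the allocation failed
          if (failed) { std::fprintf (stderr, "%s:%d: create_node failed\n", file, line); return; }
          ++node_count;
      }

      void goto_next_node_ (const char *file, int line)
      {
      #if CHECKS_ON
          if (node_count == 0) { std::fprintf (stderr, "%s:%d: no next node\n", file, line); return; }
      #endif
          /* ...advance to the next node... */
      }

      #define create_node()    create_node_    (__FILE__, __LINE__)   // callers keep the clean names
      #define goto_next_node() goto_next_node_ (__FILE__, __LINE__)

      int main ()
      {
          create_node();      // the log shows the exact file and line of this call
          goto_next_node();   // the implicit check reports here too
      }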
  370. 1
  371. My way of doing things is 2h of tactical work (coding at the computer), and 1h of strategic work, away from the computer. Quality software is made of both of these things. 4:11, if "perfection" is taken as some previous broad plan (strategy), it will rarely be perfect. Tactics are needed to give feedback. In the same way, nobody should just code, because a good plan can save us from going down obscure paths. 3:15, a meeting to schedule things is a completely idiotic idea. The reason is obvious: there's no deadline in software. However, meetings to configure/update the way a team should work, establishing standards, rules and overall strategies, are something that should be done, even often. 3:47, don't schedule; scheduling is a mistake. I can prioritize things (despite it not being ideal for me), but I'll finish them when the time comes, period. No schedule, no deadline. 5:14, even without distraction from the phone or other big noticeable things, small/tiny things often appear, like thoughts of jokes, emotions, interesting ideas (not related to the work), memories of something you can't miss (and an urge to reinforce it right now), and so on. So, the right solution for this is timed work. And if I'm completely honest, stopping the timer whenever these things come along (or I go to the bathroom), my 2h becomes 3:30, of which 1h was spent on the strategic approach (paused timer). 6:41, but I'm pitching it. 4h is ideal. And by that I mean 2h timed, with honesty. I hate 8h, I can't take it any longer! This leaves me with plenty of energy for doing things free of charge, like a meeting (demanding useful ones), writing a report (mostly about the current day), discussing some ideas (strategy in an amplified view). Of course, not as an obligation, but there's tolerance and good sense for these extras.
    1
  372. 1
  373. 1
  374. 1
  375. 1
  376. 1
  377. 1
  378. 1
  379. 1
  380. 4:13, this is strange. If it's maintainable, it means above all that it has no UB (thanks mostly to the high-level language you are using) and that it's not changing variables in wrong places as well as right places, across different f()s (well structured). But if it's good at this, it should be prone to be readable too. If I had to risk a guess, I would say some variables are being changed in wrong places only, damaging the meaning of things (the strategic view). 5:43, this may endorse what I'm saying: the goals of each f() weren't properly defined: they are mixed. Some f()s are doing the job of others, when they shouldn't. 6:05, my "giant" f()s are usually about 5 screens long. They have a preparation of data (which only makes sense if used internally) to be used later, so this takes space. Strategically, it's easy to see: preparation 1st (2-3 screens, let's say), a main loop later, processing it. I keep things inside the f() for encapsulation: I don't want the rest of the project having direct access to that functionality, since it would not make sense. But if the f() got long enough that I started to forget what was done earlier, even strategically, then I would start chopping it up. I would make a class, having the f() as public, and several other f()s private, as its internal content. No way I would leave it at 200 lines, if I started to lose my understanding of it. 6:23, the problem with FP is that it's too optimistic about its safety. For example, to work with multimedia I like to use SDL2. It's pure FP. So I take some of those things, that no alien should mess with, and I put them in classes. So I think that, in case of confrontation, OO should force FP to adapt, because FP brings issues to code safety.
    1
  381. 1
  382. 1
  383. 1
  384. 1
  385. 1
  386. 1
  387. 1
  388. 1
  389. 1
  390. 1
  391. 1
  392. 1
  393. 1
  394. 1
  395. 1
  396. 1
  397. 1
  398. 1
  399. 1
  400. 1
  401. 1
  402. 1
  403. 1
  404. 1
  405. 1
  406. 1
  407. 0:01, I'm using Linux Mint 21.3 right now, watching this video. 0:10, and Arch is a s##, btw. 6:18, you should launch a course about making MangoHUD work on Linux Mint, for both 32 and 64-bit apps. It took me a week to achieve it! 7:00, I tend to omit the sudo in Linux cmds, get the error, and only then write it. This is meant to get used to behaving in a safer way. Like the classes I write in C++: everything is private by default. I leave data like that (even knowing it won't work), get compile errors, and only then I either make each piece public (rarely), when the time comes, or give the "invasive" f() a VIP card to access it, if it is actually 1 of the few deserving it. 7:35, PassWorD. Stupid name! 9:46, I avoid using the terminal: typing is prone to errors, more demanding of energy and memorization. I use right mouse button -> properties and click there, whenever possible. Flatseal is a flatpak app to manage permissions, at a much higher level. I recommend it. In the same vibe, when coding I use code completion or even Ctrl-C + Ctrl-V. Anything to avoid typing the whole thing. 10:37, how do you kill or dodge 1 of these processes, when it has taken the fullscreen (out of a terminal's focus), and Alt-Tab doesn't work? And when Ctrl-Alt-Del / Backspace only throws the user to the "end of session"? How do you continue the "session" on the desktop or in another app? 11:06, Mint is not that much of a kid's toy. I'm currently with Lutris not working and, in Heroic: Space Ace gets a black screen when I hit the play button, Trine doesn't want to launch, and Dragon Age: Inquisition doesn't even install!
    1
  408. 1
  409. 1
  410. 1
  411. 1
  412. 1
  413. 1
  414. 1
  415. 1
  416. 5:38, as far as I remember, this behaviour is just a default. You will have to consent to access in order to change variables, otherwise it would be extremely limited. And it's at this point that bugs arise. 7:17, C/C++ have the idea of advancing in a block, covering "all" of some subject, or not advancing at all. It's all or nothing. C is not much more than a portable, multiplatform kind of middle-level assembly language. A hashmap is a whole software building, focused on a specific purpose. So either C has "all" of these kinds of high-level software structures (at least the most common ones), or it has not even 1 of them. Being an extremely shrunken middle-level language, those things would sound too high level. Although I respect this kind of economical thinking, in my opinion C++ has a much better tradeoff, despite still being an "all or nothing" language. For instance, it embraced those all-too-well-known classes. 7:38, (giggles). I can understand that. But I think it would be worse without C++. I mean, when 1 blew his whole leg off, he was facing dragons, not bugs, because those tend to be killed for breakfast. 8:00, I heard that Rust has a focus on memory safety. This is pretty handy, but it's a focus that made the language lose ≃30% speed. C++ won't handle that, unless its "all or nothing" philosophy decides to throw itself into defending all kinds of well-known areas: memory, types, common mistakes, and so on. There's a set of optional slow runtime diagnosing tools, called RTTI. But the best thing is that, unless I'm wrong, the language is the best 1 for making tools, and those user-defined tools can have a focus on defensive strategies. 9:52, nowadays, Fortran is translated to C++. It used to be faster in the past. 10:55, Pascal had the idea of tying the beginner to good practices.
    1
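  A minimal sketch of the ready-made hash map C++ ships with (std::unordered_map), which C does not provide in its standard library; the keys and values here are arbitrary examples:
    #include <unordered_map>
    #include <string>
    #include <cstdio>

    int main () {
        std::unordered_map <std::string, int> ages;   // hash map from the standard library
        ages["Ada"] = 36;                             // insert
        ages["Linus"] = 55;
        std::printf ("%d\n", ages.at ("Ada"));        // lookup: prints 36
        std::printf ("%zu\n", ages.count ("Bjarne")); // 0: not present
    }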
  417. 1
  418. 1
  419. 1
  420. 1
  421. 1
  422. 1
  423. 1
  424. 1
  425. 1
  426. 22:36, C++11 also had range-for loops: for (myType &value: vector_of_type). But I kept using the old for, through a macro like myfor (it, vector_of_type), because I felt it was counter-productive to have to spell out the type held by the container. I only embraced range-for loops in C++14: for (auto &value: vector_of_type), using auto to kill that annoyance. 25:02, I disagree, because these things keep themselves generic enough to work with any type. And everything is separated by scopes, that's why there are so many :: operators. C++ even has a kind of dry syntax, compared to how many things it's handling in those libs. 1 has to weigh beauty against what it's trying to achieve. 27:00, a macro can clean this code up. While the lib keeps it in its generic form, to hold all possible user configs (and I think it's beautiful, because it achieves that with likely minimal syntax), the user doesn't need to do the same. If it happens to have several arguments, just make a macro or alias that names its meaning: #define ParamsForHashDecl typename KeyType, typename ValType, // ... the rest. #define ParamsForHash KeyType, ValType, // ... all the rest. using ProjectsOnlyHash = HashTable <ParamsForHash>; // Alias. Since this is written only 1x, similar f() headers would look like: template <ParamsForHashDecl> const ValType &ProjectsOnlyHash::getValue (const KeyType key) const; (a fuller sketch follows this comment). getValue is a function of the class 'ProjectsOnlyHash' (1 doesn't even need to read its template args), that receives a KeyType (that can't be changed) and returns a reference to some ValType, which can't be changed either. The f() also can't modify data of the ProjectsOnlyHash class, except members declared 'mutable'. At any time throughout the project, if the user wants to remember what ProjectsOnlyHash is, or its template parameters, just hover the mouse over the respective word.
    1
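  A fuller, compilable sketch of that macro/alias idea. HashTable, KeyType, ValType and getValue are hypothetical names, and the alias is instantiated with concrete types just to stay minimal:
    #include <unordered_map>

    #define ParamsForHashDecl typename KeyType, typename ValType  // declaration list
    #define ParamsForHash     KeyType, ValType                    // argument list

    template <ParamsForHashDecl>
    class HashTable {
    public:
        const ValType &getValue (const KeyType key) const;
    private:
        std::unordered_map <KeyType, ValType> data;
    };

    // Out-of-class definition: the macros keep the long parameter lists out of sight.
    template <ParamsForHashDecl>
    const ValType &HashTable <ParamsForHash>::getValue (const KeyType key) const {
        return data.at (key);
    }

    using ProjectsOnlyHash = HashTable <int, double>;  // project-wide alias, written 1x

    int main () {
        ProjectsOnlyHash table;   // the template args never need to be typed again
        // table.getValue (42);   // would throw std::out_of_range: the table is empty
    }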
  427. 1
  428. 1
  429. 1
  430. 1
  431. 1
  432. 1
  433. 1
  434. 1
  435. 1
  436. 1
  437. 1
  438. 1
  439. 1
  440. 1
  441. 1
  442. 1
  443. 1
  444. 1
  445. 1
  446. 1
  447. 1
  448. 1
  449. 1
  450. 1
  451. 1
  452. 1
  453. 1
  454. 1
  455. 1
  456. 1
  457. 1
  458. 1
  459. 1
  460. 1
  461. 1
  462. 1
  463. 8:35, I agree. But I don't know if a debugger is overkill. I usually write unit tests. Maybe some prints. This solves +95% of everything. If the bug persists, which is rare, I also use a technique I created, called "hacking the solution": I change the code a little bit, test, see the results. Then I put things back and repeat the process in a different way. This puzzle points me in the right direction. 10:02, I do that too. I think TDD is a bit invasive while I'm developing the f() signature: I still don't know exactly what it should receive/return, so I want a bit of freedom. As soon as this is established, I write the tests. Once both are made, the rest of the f() development/fixing can reach a pretty fast speed, as it becomes oriented by the tests. 10:10, but I never delete a test, unless it can be replaced by 1 that tests the same intent in a more edge-case way. C/C++ also allow conditional compilation, including or excluding the tests. So their presence can be configured by changing just 1 line - see the sketch right after this comment. 17:27, the same thing happens with all those asserts: if you #define NDEBUG 1x beforehand, all of them suddenly disappear. So the programmer is not condemned to their presence. 17:53, and compilers evolved too. More than once I saw std::vector (a variable-length array) being faster than a fixed-size 1! 19:30, it's possible to write tests that just emit reports/logs, showing the errors, but not shutting things down. 20:18, me too. But I keep 2 workspaces on Linux because, when in the development environment, I don't want other minimized windows annoying me from the other workspace. I also use the Cube, to get a nice effect when switching between them. 24:40, 1 of the reasons why I use the Codeblocks IDE is that, either on Windows or Linux, I just install it (1-2 minutes), drop in my pre-configured file (a few minutes at most), and I'm already coding, with everything I want.
    1
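  A minimal sketch of those 1-line switches. RUN_TESTS and the tested f() twice are hypothetical names; NDEBUG is the standard macro that disables assert when defined before <cassert>:
    #ifndef RUN_TESTS
    #define RUN_TESTS 1   // flip this 1 line (or pass -DRUN_TESTS=0) to drop the tests
    #endif
    // #define NDEBUG     // defining it before <cassert> turns every assert() into nothing

    #include <cassert>
    #include <cstdio>

    static int twice (int x) { return 2 * x; }

    #if RUN_TESTS
    static void test_twice () {
        assert (twice (2) == 4);
        assert (twice (-3) == -6);
        std::puts ("tests passed");
    }
    #endif

    int main () {
    #if RUN_TESTS
        test_twice ();
    #endif
        std::printf ("%d\n", twice (21));   // normal program path: prints 42
    }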
  464. 1
  465. 1
  466. 1
  467. 1:10, can anybody explain to me what the logic is in calling an enum a "sum type"? What's a sum type, btw? 1:26, isn't std::tuple enough? 1:53, it's more comfortable than structs, since members can be accessed by index. But I don't use it, because it generates too much assembly code, which leads to slower code. 2:00, are you sure? I know that acquiring/freeing resources is, and this is the main (and should be the only) goal of smart pointers. But raw pointers/iterators are pretty fast for memory access. All STL algorithms demand them (a few times as references). 4:58, but I am, and I say it's the best language for crafting tools (more functionality/freedom). Whenever I face something risky around the corner, I build a tool to deal with it. 6:50, you can put declarations and definitions in the same header. I do this for small projects. 7:25, there's no issue with the private data being in the header file. It continues to be forbidden to public access, unless otherwise expressed. 8:55, I rarely forget to type const. But I agree that const by default is useful. However, it's possible to create a convention for the team (sketched right after this comment): 1) Create some prefix/suffix meaning const for the type, like 'int_'. 2) Do the same for 'mut'. 3) Configure the code editor to highlight them. 4) Configure it to NOT highlight the ordinary built-in types. This way, the team will notice at once that something is wrong when they type 'int' and it doesn't highlight. 10:20, I think const_cast is always a mistake. There's an optimization the compiler does, replacing const objects with their literal values from the start, which might collide with that. Better to go right to the f() and fix its missing const.
    1
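  A minimal sketch of that team convention. The alias names int_ and mut_int are hypothetical; the highlighting part is editor configuration, not code:
    using int_    = const int;   // const carried in the name: "const by default" spelling
    using mut_int = int;         // explicitly mutable

    int main () {
        int_    limit = 100;     // read-only from here on
        mut_int count = 0;       // meant to change
        count += limit;
        // limit = 200;          // compile-time error: assignment of read-only variable
        return count;            // 100
    }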
  468. 1
  469. 10:45, I guess the maintainers just opted for keeping the non-const iterator because it's a clue to what the iterator will do inside the f(). 11:53, "Move was a mistake. It should not be standardized. The compiler can see the allocators" - said a compiler maintainer. It can manage the movable things for us. 12:20, std::function is not mandatory. I would never accept a runtime exception just to have it. 1 can use a pointer to f(). Its only disadvantage compared to that class is its kind of annoying type-declaration syntax, which can get worse. However, there are some workarounds (sketched right after this comment): a) auto function = my_func_name; // By omitting the ( ), it's already a pointer to f(). b) If receiving it in a template f(), you don't even need to know what the type is: just pass the f() name when calling the template f(). c) If it's not a template, and you need to know the type to declare it, just write something else, not convertible: the compiler will halt, saying the type it deduced. Then copy/paste it right from the log. 12:42, as I said before, write a tool. For instance, why not put a printf in the copy constructor? It'd warn you. And non-primitive types can be sent by copy. 1) The compiler may arrange things for you. 2) The size is more relevant than the type. For instance, I once had a 1-byte class. It had its f()s, but only 1 byte of data. Passing it by copy throughout the entire project wasn't slower. 13:25, I don't think these are valid issues. It's just a call to a f() when returning. If that's a concern, 1 can keep those values in a struct, dismissing the f() calls at return. These "out parameters" may have a performance cost.
    1
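  A minimal sketch of workarounds (a) and (b) above; square and apply are hypothetical names:
    #include <cstdio>

    static int square (int x) { return x * x; }

    // (b) a template f() that takes any callable: its exact type never has to be spelled out.
    template <typename F>
    int apply (F f, int v) { return f (v); }

    int main () {
        auto function = square;                   // (a) omitting ( ) yields a plain pointer to f()
        std::printf ("%d\n", function (5));       // 25
        std::printf ("%d\n", apply (square, 6));  // 36, type deduced by the template
    }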
  470. 1
  471. 1
  472. 1
  473. 1
  474. 1:36, I think that would better fit as a definition of productivity. I think maintainability is more about not messing up what was already done, whenever you need to change part of it, whether by adding a new feature, refactoring, upgrading, optimizing, changing strategy, and so on. 1:53, I don't have this issue. I may forget a bit about how things communicate with each other, but I get the idea back within some minutes to some hours. I may also change the style a little bit. What's the secret? I don't know exactly, but I keep the code commented on every tiny thing suspected of causing problems in the future. And I keep documentation about the project as a whole, outside the code too, for both the strategic and the tactical views. 3:35, this shouldn't be happening. Although it's impossible to know how the entire project works tactically (the exact variables and how they change), it's viable to have both the strategy (how it works overall, at a high level) - the main data structures, how they communicate and their goals - and the tactics inside each f(). For this last 1, I usually write its goal, comments on almost every line and even 1 or 2 edge-case examples, for a complex f(). I discover this need by doing a brief "brainstorm" about it, mainly focused on future readability. This is 1 of the key reasons why I have a "spaghetti" style of horizontal code, putting comments on its right side. People take this as ugly and unreadable at 1st glance. But I think it's compact (most of my f()s can be seen on 1 screen), encapsulated (no unnecessary extra f()s are made just for the sake of being "readable"), and commented enough for the future, without pushing the code downwards (which I think is awful and damages readability).
    1
  475. 1
  476. 1
  477. 1
  478. 1
  479. 1
  480. 1
  481. 1
  482. 1
  483. 1
  484. 1
  485. 1
  486. 1
  487. 1
  488. 1
  489. 1
  490. 1
  491. 1
  492. 1
  493. 1
  494. 1
  495. 1
  496. 1
  497. 1